Sample records for particle simulation algorithms

  1. A Coulomb collision algorithm for weighted particle simulations

    NASA Technical Reports Server (NTRS)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
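
    As an aside for the reader, the pair-scattering step lends itself to a compact sketch. The Python below is a minimal illustration of a Takizuka-Abe-style elastic deflection of one equal-weight, equal-mass pair; the deflection angles theta and phi are assumed to be sampled elsewhere from the appropriate collision statistics, and the probabilistic rule for unequal weights mentioned in the comment is a common convention, not necessarily the authors' pairing scheme.

    ```python
    import numpy as np

    def rotate_relative_velocity(u, theta, phi):
        """Rotate u by polar angle theta about a random azimuth phi,
        preserving its magnitude (so kinetic energy is unchanged)."""
        umag = np.linalg.norm(u)
        helper = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 * umag \
            else np.array([0.0, 1.0, 0.0])
        e1 = np.cross(u, helper)
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(u, e1) / umag          # unit vector, perpendicular to u and e1
        return (np.cos(theta) * u
                + umag * np.sin(theta) * (np.cos(phi) * e1 + np.sin(phi) * e2))

    def scatter_pair(v1, v2, theta, phi):
        """Elastic binary collision of an equal-weight, equal-mass pair.

        Momentum is conserved exactly (du is split symmetrically); energy is
        conserved because |v1 - v2| is unchanged by the rotation. For weighted
        particles, one common approach (an assumption here, not the paper's
        recipe) is to apply the full deflection to particle i only with
        probability w_j / max(w_i, w_j).
        """
        u = v1 - v2
        du = rotate_relative_velocity(u, theta, phi) - u
        return v1 + 0.5 * du, v2 - 0.5 * du
    ```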

  2. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, J.H.; Michelotti, M.D.; Riemer, N.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate about a 50-fold increase in algorithm efficiency.
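
    The binning idea lends itself to a short sketch. The following is a minimal, hypothetical Python illustration of one binned removal step, assuming a user-supplied size-dependent rate function removal_rate(d); it is not the paper's implementation.

    ```python
    import numpy as np

    def binned_removal_step(diameters, removal_rate, dt, rng, n_bins=50):
        """One stochastic removal step over size bins (sketch).

        Rather than testing each particle individually, particles are grouped
        into logarithmic size bins, the removal rate is evaluated once per bin,
        and the number of removals per bin is drawn from a binomial distribution.
        """
        edges = np.logspace(np.log10(diameters.min()),
                            np.log10(diameters.max()), n_bins + 1)
        keep = np.ones(diameters.size, dtype=bool)
        bin_idx = np.digitize(diameters, edges)
        for b in np.unique(bin_idx):
            members = np.flatnonzero(bin_idx == b)
            d_rep = diameters[members].mean()            # representative size
            p = 1.0 - np.exp(-removal_rate(d_rep) * dt)  # per-particle removal prob.
            n_remove = rng.binomial(members.size, p)
            removed = rng.choice(members, size=n_remove, replace=False)
            keep[removed] = False
        return diameters[keep]
    ```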

  3. Numerical investigation of field enhancement by metal nano-particles using a hybrid FDTD-PSTD algorithm.

    PubMed

    Pernice, W H; Payne, F P; Gallagher, D F

    2007-09-03

    We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.

  4. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give the explicit symplectic algorithms based on the generating functions of orders 2 and 3 for relativistic dynamics of a charged particle. The methodology is not new; it has previously been applied to the non-relativistic dynamics of charged particles. The algorithm for relativistic dynamics, however, is of considerable significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.

  5. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications-intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
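
    For context, the O(N) sequential bound cited above is attained by a counting (integer) sort on cell indices. A minimal sketch of that sequential baseline, not of Dagum's data parallel merging algorithm:

    ```python
    import numpy as np

    def rank_by_cell(cell_of_particle, n_cells):
        """Counting sort of particles by cell index, O(N + n_cells).

        Returns an ordering in which particles of cell 0 come first, then
        cell 1, etc., plus the offset of each cell in the sorted array --
        exactly the access pattern a particle simulation needs.
        """
        counts = np.bincount(cell_of_particle, minlength=n_cells)
        offsets = np.concatenate(([0], np.cumsum(counts)))
        order = np.empty(cell_of_particle.size, dtype=np.int64)
        cursor = offsets[:-1].copy()
        for i, c in enumerate(cell_of_particle):
            order[cursor[c]] = i
            cursor[c] += 1
        return order, offsets
    ```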

  6. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.

  7. Loading relativistic Maxwell distributions in particle simulations

    NASA Astrophysics Data System (ADS)

    Zenitani, S.

    2015-12-01

    In order to study energetic plasma phenomena by using particle-in-cell (PIC) and Monte-Carlo simulations, we need to deal with relativistic velocity distributions in these simulations. However, numerical algorithms to deal with relativistic distributions are not well known. In this contribution, we overview basic algorithms to load relativistic Maxwell distributions in PIC and Monte-Carlo simulations. For stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain relativistic shifted-Maxwellian, two rejection methods are newly proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
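
    To make the flip-then-boost idea concrete, here is a rough Python sketch assuming a base sampler sample_rest_frame_u() for the stationary relativistic Maxwellian (e.g. the Sobol algorithm). The sign convention of the flip criterion is our reading of the method, not a verified transcription of the paper:

    ```python
    import numpy as np

    def shifted_maxwellian_sample(sample_rest_frame_u, beta, rng):
        """Boost a rest-frame sample to a frame drifting with speed beta
        along x, via a flipping rejection rule (sketch only).

        sample_rest_frame_u : assumed base sampler returning the spatial
                              4-velocity u (in units of c) of a stationary
                              relativistic Maxwellian
        """
        u = sample_rest_frame_u()
        gamma = np.sqrt(1.0 + np.dot(u, u))
        vx = u[0] / gamma
        if -beta * vx > rng.uniform():        # flip u_x with prob. max(0, -beta*vx)
            u[0] = -u[0]
        Gamma = 1.0 / np.sqrt(1.0 - beta**2)
        u[0] = Gamma * (u[0] + beta * gamma)  # Lorentz boost of the x component
        return u
    ```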

  8. On the suitability of the connection machine for direct particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonard

    1990-01-01

    The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent, requiring the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-up. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.

  9. Loading relativistic Maxwell distributions in particle simulations

    NASA Astrophysics Data System (ADS)

    Zenitani, Seiji

    2015-04-01

    Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte-Carlo simulations are presented. For stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain relativistic shifted-Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50 % for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.

  10. Incremental update of electrostatic interactions in adaptively restrained particle simulations.

    PubMed

    Edorh, Semeho Prince A; Redon, Stéphane

    2018-04-06

    The computation of long-range potentials is one of the most demanding tasks in molecular dynamics. Over the last decades, an inventive panoply of methods was developed to reduce the CPU time of this task. In this work, we propose a fast method dedicated to the computation of the electrostatic potential in adaptively restrained systems. We exploit the fact that, in such systems, only some particles are allowed to move at each timestep. We developed an incremental algorithm derived from a multigrid-based alternative to traditional Fourier-based methods. Our algorithm was implemented inside LAMMPS, a popular molecular dynamics simulation package. We evaluated the method on different systems. We showed that the new algorithm's computational complexity scales with the number of active particles in the simulated system, and is able to outperform the well-established Particle Particle Particle Mesh (P3M) method for adaptively restrained simulations. © 2018 Wiley Periodicals, Inc.

  11. Explicit symplectic algorithms based on generating functions for charged particle dynamics.

    PubMed

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiority in conservation and efficiency.
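
    For intuition, consider the one-dimensional product-separable piece H(x,p) = p f(x). A short worked note, our own illustration of why such pieces admit exact flows:

    ```latex
    % Hamilton's equations for H(x,p) = p f(x):
    \dot{x} = \frac{\partial H}{\partial p} = f(x), \qquad
    \dot{p} = -\frac{\partial H}{\partial x} = -p\,f'(x).
    % The x-equation decouples, and H is conserved along its own flow, so
    p(t)\,f\big(x(t)\big) = p_0\,f(x_0)
    \quad\Longrightarrow\quad
    p(t) = \frac{p_0\,f(x_0)}{f\big(x(t)\big)},
    % i.e. the flow map is explicit once \dot{x} = f(x) has been integrated.
    ```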

  12. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiority in conservation and efficiency.

  13. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-07

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  14. Particle merging algorithm for PIC codes

    NASA Astrophysics Data System (ADS)

    Vranic, M.; Grismayer, T.; Martins, J. L.; Fonseca, R. A.; Silva, L. O.

    2015-06-01

    Particle-in-cell merging algorithms aim to resample dynamically the six-dimensional phase space occupied by particles without substantially distorting the physical description of the system. Whereas various approaches have been proposed in previous works, none of them seemed able to fully conserve charge, momentum, energy and their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by the resolution of a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, good reproducibility of the particle distributions. In cases where the number of particles would otherwise grow exponentially in the simulation box, dynamical merging permits a considerable speedup and memory savings without which the simulations would be impossible to perform.
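
    The conservation constraints can be illustrated in the simplest setting. The sketch below merges N massless macro-particles with 2D momenta into two equal-weight macro-particles while conserving total weight, momentum and energy (units with c = 1); it demonstrates that the constraint system is solvable, and is our illustration rather than the paper's general recipe for massive particles.

    ```python
    import numpy as np

    def merge_massless_2d(weights, momenta):
        """Merge N massless macro-particles (2D momenta) into two.

        Each new particle gets weight W/2 and momentum magnitude E/W, with the
        pair placed symmetrically about the total-momentum direction. Since
        E >= |P| for massless particles, cos(theta) = |P|/E is always valid.
        """
        W = weights.sum()
        P = (weights[:, None] * momenta).sum(axis=0)           # total momentum
        E = (weights * np.linalg.norm(momenta, axis=1)).sum()  # total energy
        pmag = E / W
        theta = np.arccos(np.clip(np.linalg.norm(P) / E, -1.0, 1.0))
        phi = np.arctan2(P[1], P[0])       # direction of total momentum
        p1 = pmag * np.array([np.cos(phi + theta), np.sin(phi + theta)])
        p2 = pmag * np.array([np.cos(phi - theta), np.sin(phi - theta)])
        # Checks: (W/2)(|p1| + |p2|) == E  and  (W/2)(p1 + p2) == P
        return (W / 2, p1), (W / 2, p2)
    ```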

  15. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang

    2015-12-15

    Graphical abstract: -- Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to those of the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant number of large particles and high fragmentation rates.

  16. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the distribution of power grid sources, an optimized particle swarm optimization algorithm is proposed. First, the concept of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are each simulated. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power optimization configuration solution.
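
    For reference, the canonical particle swarm update that all such variants build on is sketched below; the paper's improved variant modifies these coefficients and their schedules, and the values of w, c1, c2 here are conventional defaults, not the paper's.

    ```python
    import numpy as np

    def pso_step(x, v, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
        """One canonical particle-swarm update (textbook form).

        x, v, pbest : (n_particles, n_dims) arrays of positions, velocities,
                      and personal-best positions; gbest : (n_dims,) global best
        """
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v
    ```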

  17. Multi-Algorithm Particle Simulations with Spatiocyte.

    PubMed

    Arjunan, Satya N V; Takahashi, Koichi

    2017-01-01

    As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.

  18. A multilevel-skin neighbor list algorithm for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei

    2018-01-01

    Searching for interaction pairs and organizing the interaction processes are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps; a thicker skin reduces the frequency of list updates but is offset by more computation in distance checks for the particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which can reduce unnecessary computation on inter-particle distances. The performance advantages over traditional methods are then analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized for various discrete simulations using neighbor lists.
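
    For background, the conventional single-skin Verlet list that the multilevel scheme refines looks roughly as follows (O(N^2) build shown for clarity; production codes build the list from cell lists):

    ```python
    import numpy as np

    def build_neighbor_list(pos, r_cut, skin):
        """Conventional single-skin Verlet list.

        Pairs within r_cut + skin are stored; the list stays valid until some
        particle has moved farther than skin / 2 from its build-time position.
        """
        n = len(pos)
        r_list = r_cut + skin
        return [(i, j)
                for i in range(n) for j in range(i + 1, n)
                if np.linalg.norm(pos[i] - pos[j]) < r_list]

    def list_still_valid(pos, pos_at_build, skin):
        """Rebuild criterion: no particle may have moved more than skin/2."""
        moved = np.linalg.norm(pos - pos_at_build, axis=1)
        return bool(np.all(moved < 0.5 * skin))
    ```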

  19. Noise effect in an improved conjugate gradient algorithm to invert particle size distribution and the algorithm amendment.

    PubMed

    Wei, Yongjie; Ge, Baozhen; Wei, Yaolin

    2009-03-20

    In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert particle size distribution (PSD) from diffraction data is presented. By use of the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. Thus the ICGA is amended by introduction of an iteration step-adjusting parameter and is used experimentally on simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.

  20. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ≈ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.

  1. Data decomposition of Monte Carlo particle transport simulations via tally servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
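
    The tracker/server split can be sketched in a few lines. The following is a serial Python stand-in for the communication pattern, with hypothetical names; in the real implementation the batch hand-off is an MPI message rather than a method call, and it is not OpenMC's code.

    ```python
    from collections import defaultdict

    class TallyServer:
        """Owns a slice of the global tally array and accumulates scores
        shipped from tracking processes (illustration only)."""
        def __init__(self):
            self.tally = defaultdict(float)

        def receive(self, batch):
            for index, score in batch:
                self.tally[index] += score

    def flush_scores(buffered_scores, servers):
        """Tracking side: partition buffered (tally_index, score) pairs by
        owning server and ship each batch."""
        outboxes = [[] for _ in servers]
        for index, score in buffered_scores:
            outboxes[index % len(servers)].append((index, score))
        for server, box in zip(servers, outboxes):
            server.receive(box)   # stands in for an asynchronous MPI send

    # Usage sketch: two servers, one tracker flushing a batch of scores.
    servers = [TallyServer(), TallyServer()]
    flush_scores([(0, 1.0), (1, 0.5), (2, 0.25)], servers)
    ```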

  2. Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm

    NASA Astrophysics Data System (ADS)

    Boytsov, A. Yu.; Bulychev, A. A.

    2018-04-01

    Understanding of particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, a convenient simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to details of the simulation algorithm. To address these requirements, development of a new open source code - Ef - has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.

  3. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309

  4. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.

  5. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  6. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    NASA Astrophysics Data System (ADS)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on major compilers suggests that this algorithm will be of wider applicability on most upcoming supercomputers.

  7. Global linear gyrokinetic particle-in-cell simulations including electromagnetic effects in shaped plasmas

    NASA Astrophysics Data System (ADS)

    Mishchenko, A.; Borchardt, M.; Cole, M.; Hatzky, R.; Fehér, T.; Kleiber, R.; Könies, A.; Zocco, A.

    2015-05-01

    We give an overview of recent developments in electromagnetic simulations based on the gyrokinetic particle-in-cell codes GYGLES and EUTERPE. We present the gyrokinetic electromagnetic models implemented in the codes and discuss further improvements of the numerical algorithm, in particular the so-called pullback mitigation of the cancellation problem. The improved algorithm is employed to simulate linear electromagnetic instabilities in shaped tokamak and stellarator plasmas, which was previously impossible for the parameters considered.

  8. Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2015-09-01

    We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical-momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_eT) larger than the explicit CFL. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. Support from the LANL LDRD program and the DOE-SC ASCR office.

  9. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.
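
    The intensity-weighted center of mass is simple to state in code. A minimal sketch, assuming a binary mask for the segmented object (e.g. from the perimeter search described above):

    ```python
    import numpy as np

    def intensity_weighted_centroid(image, mask):
        """Intensity-weighted center of mass of one segmented object.

        image : 2D array of grey-level intensities
        mask  : boolean 2D array selecting the object's pixels
        Returns (x, y) in pixel coordinates.
        """
        ys, xs = np.nonzero(mask)
        w = image[ys, xs].astype(float)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
    ```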

  10. A solution algorithm for fluid-particle flows across all flow regimes

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.

  11. An elementary singularity-free Rotational Brownian Dynamics algorithm for anisotropic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilie, Ioana M.; Briels, Wim J.

    2015-03-21

    Brownian Dynamics is the designated technique to simulate the collective dynamics of colloidal particles suspended in a solution, e.g., the self-assembly of patchy particles. Simulating the rotational dynamics of anisotropic particles by a first-order Langevin equation, however, gives rise to a number of complications, ranging from singularities when using a set of three rotational coordinates to subtle metric and drift corrections. Here, we derive and numerically validate a quaternion-based Rotational Brownian Dynamics algorithm that handles these complications in a simple and elegant way. The extension to hydrodynamic interactions is also discussed.

  12. A Variational Formulation of Macro-Particle Algorithms for Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Shadwick, B. A.

    2013-10-01

    Macro-particle based simulation methods are in widespread use in plasma physics; their computational efficiency and intuitive nature are largely responsible for their longevity. In the main, these algorithms are formulated by approximating the continuous equations of motion. For systems governed by a variational principle (such as collisionless plasmas), approximating the equations of motion is known to introduce anomalous behavior, especially in system invariants. We present a variational formulation of particle algorithms for plasma simulation based on a reduction of the distribution function onto a finite collection of macro-particles. As in the usual Particle-In-Cell (PIC) formulation, these macro-particles have a definite momentum and are spatially extended. The primary advantage of this approach is the preservation of the link between symmetries and conservation laws. For example, nothing in the reduction introduces explicit time dependence to the system and, therefore, the continuous-time equations of motion exactly conserve energy; thus, these models are free of grid-heating. In addition, the variational formulation allows for constructing models of arbitrary spatial and temporal order. In contrast, the overall accuracy of the usual PIC algorithm is at most second order due to the nature of the force interpolation between the gridded field quantities and the (continuous) particle position. Again in contrast to the usual PIC algorithm, here the macro-particle shape is arbitrary; the spatial extent is completely decoupled from both the grid size and the "smoothness" of the shape; smoother particle shapes are not necessarily larger. For simplicity, we restrict our discussion to one-dimensional, non-relativistic, un-magnetized, electrostatic plasmas. We comment on the extension to the electromagnetic case. Supported by the US DoE under contract numbers DE-FG02-08ER55000 and DE-SC0008382.

  13. Application of stochastic weighted algorithms to a multidimensional silica particle model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  14. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.

  15. On improving the algorithm efficiency in the particle-particle force calculations

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2016-09-01

    The problem of calculating inter-particle forces in particle-particle (PP) simulation models occupies an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. Inverse-square laws such as Coulomb's law of electrostatic force and Newton's law of universal gravitation are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(N_p^2), where N_p is the total number of particles involved. The low efficiency of the PP algorithm is a challenging issue in cases where high accuracy is required. An example can be taken from charged particle beam dynamics where so-called macro-particles are used to compute the beam's own space charge (see e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
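
    The naïve double nested loop reads as follows, shown here as a small NumPy sketch of the O(N_p^2) baseline (with Newton's third law halving the work), not of the authors' improved scheme:

    ```python
    import numpy as np

    def coulomb_forces_naive(pos, charge, k=1.0):
        """Standard O(N_p^2) particle-particle force loop.

        pos    : (n, 3) array of positions
        charge : (n,) array of charges
        k      : Coulomb constant in the chosen unit system
        """
        n = len(pos)
        force = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r = pos[i] - pos[j]
                f = k * charge[i] * charge[j] * r / np.linalg.norm(r) ** 3
                force[i] += f
                force[j] -= f      # action-reaction: each pair visited once
        return force
    ```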

  16. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.

  17. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    NASA Astrophysics Data System (ADS)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
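
    For contrast, the conventional linear estimate that degrades at large offsets is the textbook difference-over-sum formula sketched below; this is explicitly not the paper's analytically correct algorithm, and sensitivity is a hypothetical calibration constant.

    ```python
    def bpm_linear_estimate(top, bottom, left, right, sensitivity):
        """Textbook difference-over-sum position estimate for a four-electrode
        BPM. Valid only near the axis; shown for contrast with the exact
        large-offset algorithm developed in the paper.

        top, bottom, left, right : pickup-electrode signal amplitudes
        sensitivity              : hypothetical calibration constant (length units)
        """
        x = sensitivity * (right - left) / (right + left)
        y = sensitivity * (top - bottom) / (top + bottom)
        return x, y
    ```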

  18. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057

  19. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE PAGES

    Thieberger, Peter; Gassner, D.; Hulsart, R.; ...

    2018-04-25

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  20. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thieberger, Peter; Gassner, D.; Hulsart, R.

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  1. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets.

    PubMed

    Thieberger, P; Gassner, D; Hulsart, R; Michnoff, R; Miller, T; Minty, M; Sorrell, Z; Bartnik, A

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.

  2. A solution algorithm for fluid–particle flows across all flow regimes

    DOE PAGES

    Kong, Bo; Fox, Rodney O.

    2017-05-12

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  3. A solution algorithm for fluid–particle flows across all flow regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Bo; Fox, Rodney O.

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  4. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present 'jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.

  5. Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees

    PubMed Central

    Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael

    2014-01-01

    Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210

  6. Exploring the protein folding free energy landscape: coupling replica exchange method with P3ME/RESPA algorithm.

    PubMed

    Zhou, Ruhong

    2004-05-01

    A highly parallel replica exchange method (REM) that couples with a newly developed molecular dynamics algorithm particle-particle particle-mesh Ewald (P3ME)/RESPA has been proposed for efficient sampling of protein folding free energy landscape. The algorithm is then applied to two separate protein systems, beta-hairpin and a designed protein Trp-cage. The all-atom OPLSAA force field with an explicit solvent model is used for both protein folding simulations. Up to 64 replicas of solvated protein systems are simulated in parallel over a wide range of temperatures. The combined trajectories in temperature and configurational space allow a replica to overcome free energy barriers present at low temperatures. These large scale simulations reveal detailed results on folding mechanisms, intermediate state structures, thermodynamic properties and the temperature dependences for both protein systems.
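
    The replica exchange step itself rests on the standard Metropolis swap criterion between neighbouring temperatures; a minimal sketch in reduced units (names and the example values are illustrative, not from the paper):

      import math
      import random

      def rem_swap_accepted(energy_i, energy_j, temp_i, temp_j, k_b=1.0):
          """Metropolis test for exchanging configurations between two
          replicas; accepting with probability min(1, exp(delta))
          preserves detailed balance across the temperature ladder."""
          beta_i = 1.0 / (k_b * temp_i)
          beta_j = 1.0 / (k_b * temp_j)
          delta = (beta_i - beta_j) * (energy_i - energy_j)
          return delta >= 0.0 or random.random() < math.exp(delta)

      # Example: a cold replica stuck in a high-energy state swaps readily
      # with a hot replica holding a lower-energy state.
      print(rem_swap_accepted(energy_i=-120.0, energy_j=-150.0,
                              temp_i=300.0, temp_j=450.0))  # True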

  7. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  8. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force, and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal, on the CPU+MIC platform. PHoToNs scales well and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.

  9. Event-chain Monte Carlo algorithms for three- and many-particle interactions

    NASA Astrophysics Data System (ADS)

    Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.

    2017-02-01

    We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.

  10. An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice

    NASA Astrophysics Data System (ADS)

    Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.

    2014-01-01

    We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.

  11. An exact and efficient first passage time algorithm for reaction–diffusion processes on a 2D-lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezzola, Andri, E-mail: andri.bezzola@gmail.com; Bales, Benjamin B., E-mail: bbbales2@gmail.com; Alkire, Richard C., E-mail: r-alkire@uiuc.edu

    2014-01-01

    We present an exact and efficient algorithm for reaction–diffusion–nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.

  12. Physical Models for Particle Tracking Simulations in the RF Gap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shishlo, Andrei P.; Holmes, Jeffrey A.

    2015-06-01

    This document describes the algorithms that are used in the PyORBIT code to track the particles accelerated in the Radio-Frequency cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.

  13. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.

  14. Algorithmic structural segmentation of defective particle systems: a lithium-ion battery study.

    PubMed

    Westhoff, D; Finegan, D P; Shearing, P R; Schmidt, V

    2018-04-01

    We describe a segmentation algorithm that is able to identify defects (cracks, holes and breakages) in particle systems. This information is used to segment image data into individual particles, where each particle and its defects are identified accordingly. We apply the method to particle systems that appear in Li-ion battery electrodes. First, the algorithm is validated using simulated data from a stochastic 3D microstructure model, where we have full information about defects. This allows us to quantify the accuracy of the segmentation result. Then we show that the algorithm can successfully be applied to tomographic image data from real battery anodes and cathodes, which are composed of particle systems with very different morphological properties. Finally, we show how the results of the segmentation algorithm can be used for structural analysis.

  15. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVille, R.E.L., E-mail: rdeville@illinois.edu; Riemer, N., E-mail: nriemer@illinois.edu; West, M., E-mail: mwest@illinois.edu

    2011-09-20

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.
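
    A minimal sketch of a power-law weighting function of the kind described above; the reference diameter and exponent are illustrative placeholders, not values from the paper:

      def power_law_weight(diameter, ref_diameter=1e-7, exponent=-2.0):
          """Number of physical particles represented by one computational
          particle, as a power law in particle size. A negative exponent
          concentrates computational particles at the large sizes that are
          rare in number but dominate the mass distribution."""
          return (diameter / ref_diameter) ** exponent

      # Example: one simulated 1-micron particle stands in for far fewer
      # physical particles than one simulated 10-nm particle.
      print(power_law_weight(1e-6), power_law_weight(1e-8))  # 0.01 100.0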

  16. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    NASA Astrophysics Data System (ADS)

    DeVille, R. E. L.; Riemer, N.; West, M.

    2011-09-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  17. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase-space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, ``Fully implicit particle in cell algorithm,'' 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
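
    The heart of the Jacobian-Free Newton-Krylov approach is that Krylov solvers only ever need Jacobian-vector products, which can be approximated from two evaluations of the nonlinear residual; a minimal sketch (the test residual is illustrative):

      import numpy as np

      def jacobian_free_matvec(residual, x, v, eps=1e-7):
          """Approximate J(x) @ v by a finite difference of the nonlinear
          residual, so the Newton-Krylov iteration never forms or stores
          the Jacobian."""
          norm_v = np.linalg.norm(v)
          if norm_v == 0.0:
              return np.zeros_like(v)
          h = eps * max(1.0, np.linalg.norm(x)) / norm_v
          return (residual(x + h * v) - residual(x)) / h

      # Example residual for a small nonlinear system (illustrative).
      def residual(x):
          return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

      x = np.array([1.0, 2.0])
      print(jacobian_free_matvec(residual, x, np.array([1.0, 0.0])))  # ~[2, 1]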

  18. Conformal Electromagnetic Particle in Cell: A Review

    DOE PAGES

    Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...

    2015-10-26

    We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.

  19. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, however, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness so that the optimization problem is considered as a whole, and the BP neural network is trained by error back-propagation gradient descent. Each particle updates its velocity and position according to both its individual optimum and the global optimum, learning more from the social optimum and less from its own optimum, which helps the particles avoid local optima; the gradient information in turn accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the improved algorithm converges rapidly towards the global optimal solution in the initial stage and then remains close to it, and that it achieves faster convergence speed and better search performance in the same running time.
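
    For orientation, the canonical PSO velocity and position update that the proposed algorithm builds on can be sketched as follows; the back-propagation gradient term of the hybrid method is omitted, and the coefficients are typical textbook values, not those of the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
          """One standard PSO update: inertia plus cognitive (pbest) and
          social (gbest) attraction, with fresh random factors per step."""
          r1 = rng.random(x.shape)
          r2 = rng.random(x.shape)
          v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          return x + v_new, v_new

      # Example: one step of a single 2D particle drawn towards the origin.
      x, v = np.array([2.0, -1.0]), np.zeros(2)
      x, v = pso_step(x, v, pbest=np.array([1.5, -0.5]), gbest=np.zeros(2))
      print(x)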

  20. Multi-dimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations in curvilinear geometry

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis

    2015-11-01

    We discuss a new, conservative, fully implicit 2D3V Vlasov-Darwin particle-in-cell algorithm in curvilinear geometry for non-radiative, electromagnetic kinetic plasma simulations. Unlike standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. Here, we extend these algorithms to curvilinear geometry. The algorithm retains its exact conservation properties in curvilinear grids. The nonlinear iteration is effectively accelerated with a fluid preconditioner for weakly to modestly magnetized plasmas, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D (slow shock) and 2D (island coalescence).

  1. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    The army's combat training is very important now, and the simulation of the real battlefield environment is of great significance. Two-dimensional information can no longer meet present demands. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In the simulation of a 3D battlefield environment, in addition to the terrain, combat personnel and combat tools, the simulation of explosions, fire, smoke and other effects is also very important, since these effects enhance the sense of realism and immersion of the 3D scene. However, these special effects are irregular objects, which are difficult to simulate with conventional geometric primitives. Therefore, the simulation of irregular objects has long been a hot and difficult research topic in computer graphics. Here, the particle system algorithm is used for simulating irregular objects. We design simulations of explosions, fire and smoke based on the particle system and apply them to the 3D battlefield scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview 3D video real-time transformation method. Together with human-computer interaction functions, we ultimately realize a more realistic and immersive glasses-free 3D display of the simulated battlefield environment.

  2. DynamO: a free O(N) general event-driven molecular dynamics simulator.

    PubMed

    Bannerman, M N; Sargant, R; Lue, L

    2011-11-30

    Molecular dynamics algorithms for systems of particles interacting through discrete or "hard" potentials are fundamentally different to the methods for continuous or "soft" potential systems. Although many software packages have been developed for continuous potential systems, software for discrete potential systems based on event-driven algorithms is relatively scarce and specialized. We present DynamO, a general event-driven simulation package, which displays the optimal O(N) asymptotic scaling of the computational cost with the number of particles N, rather than the O(N log N) scaling found in most standard algorithms. DynamO provides reference implementations of the best available event-driven algorithms. These techniques allow the rapid simulation of both complex and large (>10^6 particles) systems for long times. The performance of the program is benchmarked for elastic hard sphere systems, homogeneous cooling and sheared inelastic hard spheres, and equilibrium Lennard-Jones fluids. This software and its documentation are distributed under the GNU General Public license and can be freely downloaded from http://marcusbannerman.co.uk/dynamo.
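
    The elementary computation in any event-driven hard-particle code is the time to the next pair collision; a minimal sketch for two hard spheres (a generic textbook calculation, not DynamO source code):

      import math

      def collision_time(r_rel, v_rel, sigma):
          """Time until two hard spheres at relative position r_rel with
          relative velocity v_rel reach contact distance sigma; math.inf
          if they never collide. Solves |r_rel + v_rel * t| = sigma."""
          b = sum(r * v for r, v in zip(r_rel, v_rel))
          if b >= 0.0:                      # already moving apart
              return math.inf
          v2 = sum(v * v for v in v_rel)
          r2 = sum(r * r for r in r_rel)
          disc = b * b - v2 * (r2 - sigma * sigma)
          if disc < 0.0:                    # trajectories miss each other
              return math.inf
          return (-b - math.sqrt(disc)) / v2

      print(collision_time((2.0, 0.0, 0.0), (-1.0, 0.0, 0.0), 1.0))  # 1.0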

  3. Wavelet Monte Carlo dynamics: A new algorithm for simulating the hydrodynamics of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Dyer, Oliver T.; Ball, Robin C.

    2017-03-01

    We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N, scaling as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order one more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single polymer systems, as well as verifying the effect of periodicity on the mobility tensor.

  4. Study of Ion Beam Forming Process in Electric Thruster Using 3D FEM Simulation

    NASA Astrophysics Data System (ADS)

    Huang, Tao; Jin, Xiaolin; Hu, Quan; Li, Bin; Yang, Zhonghai

    2015-11-01

    There are two algorithms to simulate the process of ion beam forming in electric thrusters. The first is an electrostatic steady-state algorithm. First, an assumed surface, far enough from the accelerator grids, launches the ion beam, and the current density is calculated from a theoretical formula. Next, these particles are advanced one by one according to the equations of motion of the ions until they leave the computational region. The electrostatic potential is then recalculated and updated by solving the Poisson equation. Finally, convergence is tested to determine whether the calculation should continue; the entire process is repeated until convergence is reached. The second is a time-dependent PIC algorithm. In each global time step, new particles are assumed to be produced in the simulation domain with a prescribed distribution of positions and velocities. All particles still in the system are advanced every local time step. Typically, the local time step is set small enough that a particle must be advanced about five times to traverse the element in which it is located.

  5. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C tracer experiments is crucial for quantifying intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with a penalty function to solve the 13C-based metabolic flux estimation problem. The constrained problem is transformed into an unconstrained one by penalizing the stoichiometric constraints and building a single objective function, which in turn is minimized using the PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. Simulation results show that the proposed algorithm has superior performance and fast convergence when compared to other existing algorithms.
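
    A minimal sketch of the penalty-function construction described above; the function names and the penalty weight are illustrative assumptions, not taken from the paper:

      import numpy as np

      def penalized_objective(flux, distance_fn, stoich_matrix, penalty=1e3):
          """Single unconstrained objective for PSO: the weighted distance
          between measured and simulated results plus a quadratic penalty
          on mass-balance violations (S @ flux should be ~0 at steady
          state)."""
          violation = stoich_matrix @ flux
          return distance_fn(flux) + penalty * float(np.sum(violation**2))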

  6. A multi-dimensional, energy- and charge-conserving, nonlinearly implicit, electromagnetic Vlasov–Darwin particle-in-cell algorithm

    DOE PAGES

    Chen, G.; Chacón, L.

    2015-08-11

    For decades, the Vlasov–Darwin model has been recognized to be attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. This algorithm conserves total energy, local charge, canonical-momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.

  7. Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions

    NASA Astrophysics Data System (ADS)

    Alves, E. Paulo; Mori, Warren; Fiuza, Frederico

    2017-10-01

    The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, that can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system particularly for long simulation times. In this work, we describe the source of this numerical heating, and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.

  8. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which can overcome the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an excellent inversion technique for reliable and efficient actual measurement of PSDs.

  9. SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS

    EPA Science Inventory

    A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...

  10. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.

  11. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  12. Cosmological N-body Simulation

    NASA Astrophysics Data System (ADS)

    Lake, George

    1994-05-01

    The ``N'' in N-body calculations has doubled every year for the last two decades. To continue this trend, the UW N-body group is working on algorithms for the fast evaluation of gravitational forces on parallel computers and establishing rigorous standards for the computations. In these algorithms, the computational cost per time step is ~10^3 pairwise forces per particle. A new adaptive time integrator enables us to perform high quality integrations that are fully temporally and spatially adaptive. SPH (smoothed particle hydrodynamics) will be added to simulate the effects of dissipating gas and magnetic fields. The importance of these calculations is two-fold. First, they determine the nonlinear consequences of theories for the structure of the Universe. Second, they are essential for the interpretation of observations. Every galaxy has six coordinates of velocity and position. Observations determine two sky coordinates and a line-of-sight velocity that bundles universal expansion (distance) together with a random velocity created by the mass distribution. Simulations are needed to determine the underlying structure and masses. The importance of simulations has moved from ex post facto explanation to an integral part of planning large observational programs. I will show why high quality simulations with ``large N'' are essential to accomplish our scientific goals. This year, our simulations have N >~ 10^7. This is sufficient to tackle some niche problems, but well short of our 5 year goal--simulating the Sloan Digital Sky Survey using a few billion particles (a Teraflop-year simulation). Extrapolating past trends, we would have to ``wait'' 7 years for this hundred-fold improvement. Like past gains, significant changes in the computational methods are required for these advances. I will describe new algorithms, algorithmic hacks and a dedicated computer to perform billion particle simulations. Finally, I will describe research that can be enabled by Petaflop computers. This research is supported by the NASA HPCC/ESS program.

  13. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.

  14. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    NASA Astrophysics Data System (ADS)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With growing attention on the ocean and the rapid development of marine detection, there are increasing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods, which can deal with large-scale and multi-dimensional marine data in different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized by an FFT algorithm, bump mapping and texture animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized by 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with the WebGL technique to meet the requirements of web-based applications. The experiments suggest that these methods not only achieve a satisfying marine environment simulation effect, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D-rendering engine. It is integrated with the marine visualization methods mentioned above, and shows movement processes, physical parameters, current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.

  15. Voidage correction algorithm for unresolved Euler-Lagrange simulations

    NASA Astrophysics Data System (ADS)

    Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan

    2018-04-01

    The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using the Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value than the results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p. It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error of the total exchange rate of up to 30%.
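
    A hedged sketch of what a sigmoid-based voidage blending could look like; the blending form, inputs and constants here are assumptions for illustration only, and the paper's calibrated correction should be consulted for actual use:

      import numpy as np

      def blended_voidage(cell_voidage, local_estimate, gradient_measure,
                          midpoint=0.5, steepness=10.0):
          """Blend the coarse cell voidage with a locally reconstructed
          estimate using a sigmoid in a normalized voidage-gradient
          measure, so the correction activates only where profiles are
          steep."""
          w = 1.0 / (1.0 + np.exp(-steepness * (gradient_measure - midpoint)))
          return (1.0 - w) * cell_voidage + w * local_estimate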

  16. Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST

    NASA Astrophysics Data System (ADS)

    Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2018-04-01

    We describe the algorithm of employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version is developed on the basis of our previous single-GPU version. In multi-GPU runs, one GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which can enlarge the maximum system size under the same device conditions. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on workstation simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles. Good scaling across many cluster nodes is presented for two-patch particles.

  17. A retrieval algorithm of hydrometeor profile for submillimeter-wave radiometer

    NASA Astrophysics Data System (ADS)

    Liu, Yuli; Buehler, Stefan; Liu, Heguang

    2017-04-01

    Vertical profiles of particle microphysics perform vital functions for the estimation of climatic feedback. This paper proposes a new algorithm to retrieve profiles of hydrometeor parameters (i.e., ice, snow, rain, liquid cloud, graupel) based on passive submillimeter-wave measurements. These parameters include water content and particle size. The first part of the algorithm builds the database and retrieves the integrated quantities. The database is built up with the Atmospheric Radiative Transfer Simulator (ARTS), which uses atmosphere data to simulate the corresponding brightness temperatures. A neural network, trained on the precalculated database, is developed to retrieve the water path for each type of particle. The second part of the algorithm analyses the statistical relationship between the water path and the vertical parameter profiles. Based on the strong dependence existing between vertical layers in the profiles, the Principal Component Analysis (PCA) technique is applied. The third part of the algorithm uses the forward model explicitly to retrieve the hydrometeor profiles. A cost function is calculated in each iteration, and the Differential Evolution (DE) algorithm is used to adjust the parameter values during the evolutionary process. The performance of this algorithm is planned to be verified for both the simulation database and measurement data, by retrieving profiles and comparing them with the initial ones. Results show that this algorithm has the ability to retrieve the hydrometeor profiles efficiently. The combination of ARTS and an optimization algorithm achieves much better results than the commonly used database approach. Meanwhile, the fact that ARTS can be used explicitly in the retrieval process shows great potential for solving other retrieval problems.

  18. Modeling and simulation of the debonding process of composite solid propellants

    NASA Astrophysics Data System (ADS)

    Feng, Tao; Xu, Jin-sheng; Han, Long; Chen, Xiong

    2017-07-01

    In order to study the damage evolution law of composite solid propellants, a molecular dynamics particle-filling algorithm was used to establish a mesoscopic structure model of HTPB (hydroxyl-terminated polybutadiene) propellants. The cohesive element method was employed for the adhesion interface between AP (ammonium perchlorate) particles and the HTPB matrix, and a bilinear cohesive zone model was used to describe the mechanical response of the interface elements. An inversion analysis method based on the Hooke-Jeeves optimization algorithm was employed to identify the parameters of the cohesive zone model (CZM) of the particle/binder interface. The optimized parameters were then applied in the commercial finite element software ABAQUS to simulate the damage evolution process between the AP particles and the HTPB matrix, including initiation, development, coalescence and macroscopic cracking. Finally, the simulated stress-strain curve was compared with the experimental curves. The result shows that the bilinear cohesive zone model can accurately describe the debonding and fracture process between the AP particles and the HTPB matrix under uniaxial tensile loading.

  19. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions at different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries for the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. The accuracy of the proposed method was verified by conducting simulations of channel flows and free-surface flows.

  20. Accelerating a Particle-in-Cell Simulation Using a Hybrid Counting Sort

    NASA Astrophysics Data System (ADS)

    Bowers, K. J.

    2001-11-01

    In this article, performance limitations of the particle advance in a particle-in-cell (PIC) simulation are discussed. It is shown that the memory subsystem and cache-thrashing severely limit the speed of such simulations. Methods to implement a PIC simulation under such conditions are explored. An algorithm based on a counting sort is developed which effectively eliminates PIC simulation cache thrashing. Sustained performance gains of 40 to 70 percent are measured on commodity workstations for a minimal 2d2v electrostatic PIC simulation. More complete simulations are expected to have even better results as larger simulations are usually even more memory subsystem limited.
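
    The core idea, reordering particles by cell index so that grid accesses become cache-friendly, can be sketched with a plain counting sort; the paper's hybrid variant adds refinements not shown here:

      import numpy as np

      def counting_sort_by_cell(cell_index, num_cells):
          """Return a permutation that groups particles by their grid
          cell, so the particle advance streams through memory instead
          of thrashing the cache."""
          counts = np.bincount(cell_index, minlength=num_cells)
          cursor = np.concatenate(([0], np.cumsum(counts)[:-1])).copy()
          order = np.empty(cell_index.size, dtype=np.int64)
          for p, c in enumerate(cell_index):
              order[cursor[c]] = p
              cursor[c] += 1
          return order

      cells = np.array([3, 0, 3, 1, 0])        # cell of each particle
      print(counting_sort_by_cell(cells, 4))    # [1 4 3 0 2]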

  1. A parallel electrostatic Particle-in-Cell method on unstructured tetrahedral grids for large-scale bounded collisionless plasma simulations

    NASA Astrophysics Data System (ADS)

    Averkin, Sergey N.; Gatsonis, Nikolaos A.

    2018-06-01

    An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner algorithm, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The algorithms for the potential solver, electric field evaluation, loading, scatter-gather algorithms are verified using analytic solutions for test cases subject to Laplace and Poisson equations. Grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. Analysis shows second order of convergence for the potential and first order of convergence for the electric field and charge density. Temporal sensitivity analysis is performed and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on heating, slowing-down and the deflection times are quantified. The heating, slowing-down and the deflection times are found to be almost linearly dependent on number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good comparison with previous experimentally validated numerical results. These simulations were also used in a parallelization efficiency investigation. Results show that the EUPIC has efficiency of more than 80% when the simulation is performed on a single CPU from a non-uniform memory access node and the efficiency is decreasing as the number of threads further increases. The EUPIC is applied to the simulation of the multi-species plasma flow over a geometrically complex CubeSat in Low Earth Orbit. The EUPIC potential and flowfield distribution around the CubeSat exhibit features that are consistent with previous simulations over simpler geometrical bodies.

  2. Simulation and performance of an artificial retina for 40 MHz track reconstruction

    DOE PAGES

    Abba, A.; Bedeschi, F.; Citterio, M.; ...

    2015-03-05

    We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb with the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We find the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.

  3. The Splashback Radius of Halos from Particle Dynamics. I. The SPARTA Algorithm

    NASA Astrophysics Data System (ADS)

    Diemer, Benedikt

    2017-07-01

    Motivated by the recent proposal of the splashback radius as a physical boundary of dark-matter halos, we present a parallel computer code for Subhalo and PARticle Trajectory Analysis (SPARTA). The code analyzes the orbits of all simulation particles in all host halos, billions of orbits in the case of typical cosmological N-body simulations. Within this general framework, we develop an algorithm that accurately extracts the location of the first apocenter of particles after infall into a halo, or splashback. We define the splashback radius of a halo as the smoothed average of the apocenter radii of individual particles. This definition allows us to reliably measure the splashback radii of 95% of host halos above a resolution limit of 1000 particles. We show that, on average, the splashback radius and mass are converged to better than 5% accuracy with respect to mass resolution, snapshot spacing, and all free parameters of the method.

  4. An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment

    PubMed Central

    Wang, Xue; Wang, Sheng; Ma, Jun-Jie

    2007-01-01

    The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes will hinder the movement of mobile sensor nodes. Particle swarm optimization (PSO) is introduced as another dynamic deployment algorithm, but in this case the computation time required is the big bottleneck. This paper proposes a dynamic deployment algorithm which is named “virtual force directed co-evolutionary particle swarm optimization” (VFCPSO), since this algorithm combines the co-evolutionary particle swarm optimization (CPSO) with the VF algorithm, whereby the CPSO uses multiple swarms to optimize different components of the solution vectors for dynamic deployment cooperatively and the velocity of each particle is updated according to not only the historical local and global optimal solutions, but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.

  5. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
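
    A toy model in the spirit of the constant-event-time analysis above; this is an illustrative sketch with an assumed geometric survival fraction per event-iteration, not the paper's model:

      def vector_efficiency(bank_size, vector_width, survival=0.8):
          """Fraction of useful SIMD lanes over a whole history, assuming
          a fixed fraction of particles survives each event-iteration and
          the last sweep of every iteration runs partially full."""
          useful = total = 0
          n = bank_size
          while n > 0:
              sweeps = -(-n // vector_width)        # ceil(n / W)
              useful += n
              total += sweeps * vector_width
              n = int(n * survival)                 # absorbed/leaked particles
          return useful / total

      # Banks ~20x the vector width keep the lanes mostly full.
      print(round(vector_efficiency(20 * 32, 32), 3))
      print(round(vector_efficiency(2 * 32, 32), 3))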

  6. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE PAGES

    Romano, Paul K.; Siegel, Andrew R.

    2017-07-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.

  7. Apparatus and method for tracking a molecule or particle in three dimensions

    DOEpatents

    Werner, James H [Los Alamos, NM; Goodwin, Peter M [Los Alamos, NM; Lessard, Guillaume [Santa Fe, NM

    2009-03-03

    An apparatus and method were used to track the movement of fluorescent particles in three dimensions. Control software was used with the apparatus to implement a tracking algorithm for tracking the motion of the individual particles in glycerol/water mixtures. Monte Carlo simulations suggest that the tracking algorithms in combination with the apparatus may be used for tracking the motion of single fluorescent or fluorescently labeled biomolecules in three dimensions.

  8. A flexible algorithm for calculating pair interactions on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Páll, Szilárd; Hess, Berk

    2013-12-01

    Calculating interactions or correlations between pairs of particles is typically the most time-consuming task in particle simulation or correlation analysis. Straightforward implementations using a double loop over particle pairs have traditionally worked well, especially since compilers usually do a good job of unrolling the inner loop. In order to reach high performance on modern CPU and accelerator architectures, single-instruction multiple-data (SIMD) parallelization has become essential. Avoiding memory bottlenecks is also increasingly important and requires reducing the ratio of memory to arithmetic operations. Moreover, when pairs only interact within a certain cut-off distance, good SIMD utilization can only be achieved by reordering input and output data, which quickly becomes a limiting factor. Here we present an algorithm for SIMD parallelization based on grouping a fixed number of particles, e.g. 2, 4, or 8, into spatial clusters. Calculating all interactions between particles in a pair of such clusters improves data reuse compared to the traditional scheme and results in a more efficient SIMD parallelization. Adjusting the cluster size allows the algorithm to map to SIMD units of various widths. This flexibility not only enables fast and efficient implementation on current CPUs and accelerator architectures like GPUs or Intel MIC, but it also makes the algorithm future-proof. We present the algorithm with an application to molecular dynamics simulations, where we can also make use of the effective buffering the method introduces.
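
    A minimal NumPy sketch of the cluster-pair idea described above: particles are grouped into clusters of M = 4, and all M x M interactions between a cluster pair are evaluated as one dense block, which maps naturally onto an M-wide SIMD unit. The Lennard-Jones kernel and all names are our own illustration, not the paper's implementation.

      import numpy as np

      M = 4  # cluster size, chosen to match a hypothetical 4-wide SIMD unit

      def cluster_pair_forces(xi, xj, cutoff=2.5):
          """Evaluate all M*M Lennard-Jones interactions between two
          particle clusters as one dense block. xi, xj: (M, 3) arrays
          of positions; returns the (M, 3) forces on cluster i."""
          d = xi[:, None, :] - xj[None, :, :]          # (M, M, 3) displacements
          r2 = np.einsum('ijk,ijk->ij', d, d)          # squared distances
          mask = (r2 < cutoff ** 2) & (r2 > 0.0)       # apply the cut-off
          inv_r2 = np.where(mask, 1.0 / np.where(r2 > 0.0, r2, 1.0), 0.0)
          inv_r6 = inv_r2 ** 3
          coef = 24.0 * inv_r2 * inv_r6 * (2.0 * inv_r6 - 1.0)
          return np.einsum('ij,ijk->ik', coef, d)      # sum over cluster j

      rng = np.random.default_rng(1)
      forces = cluster_pair_forces(3.0 * rng.random((M, 3)),
                                   3.0 * rng.random((M, 3)))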

  9. High-Performance Reactive Particle Tracking with Adaptive Representation

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Benson, D. A.; Pankavich, S.

    2017-12-01

    Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.

  10. Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas

    NASA Astrophysics Data System (ADS)

    Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.

  11. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    NASA Astrophysics Data System (ADS)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
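
    The core difficulty addressed here is that the single-particle propagator is unknown for spatially varying rates. A standard, much simpler alternative for small hops is thinning: candidate events are proposed at an upper-bound rate and accepted with the ratio of the true local rate to the bound. The sketch below is our own illustration of that idea for a 1D Brownian particle, not the authors' protective-domain algorithm.

      import math
      import random

      def time_of_annihilation(x0, k, k_max, D=1.0, dt=1e-3, seed=0):
          """Thinning: candidate annihilation events are proposed at the
          bound rate k_max and accepted with probability k(x)/k_max, so
          accepted events occur at the true position-dependent rate."""
          rng = random.Random(seed)
          x, t = x0, 0.0
          t_cand = rng.expovariate(k_max)              # next candidate event
          while True:
              step = min(dt, t_cand - t)
              x += math.sqrt(2.0 * D * step) * rng.gauss(0.0, 1.0)
              t += step
              if t >= t_cand:
                  if rng.random() < k(x) / k_max:      # accept: annihilate
                      return t, x
                  t_cand = t + rng.expovariate(k_max)  # reject: keep diffusing

      t, x = time_of_annihilation(0.0, k=lambda x: 0.5 + 0.4 * math.tanh(x) ** 2,
                                  k_max=0.9)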

  12. A smooth particle-mesh Ewald algorithm for Stokes suspension simulations: The sedimentation of fibers

    NASA Astrophysics Data System (ADS)

    Saintillan, David; Darve, Eric; Shaqfeh, Eric S. G.

    2005-03-01

    Large-scale simulations of non-Brownian rigid fibers sedimenting under gravity at zero Reynolds number have been performed using a fast algorithm. The mathematical formulation follows the previous simulations by Butler and Shaqfeh ["Dynamic simulations of the inhomogeneous sedimentation of rigid fibres," J. Fluid Mech. 468, 205 (2002)]. The motion of the fibers is described using slender-body theory, and the line distribution of point forces along their lengths is approximated by a Legendre polynomial in which only the total force, torque, and particle stresslet are retained. Periodic boundary conditions are used to simulate an infinite suspension, and both far-field hydrodynamic interactions and short-range lubrication forces are considered in all simulations. The calculation of the hydrodynamic interactions, which is typically the bottleneck for large systems with periodic boundary conditions, is accelerated using a smooth particle-mesh Ewald (SPME) algorithm previously used in molecular dynamics simulations. In SPME the slowly decaying Green's function is split into two fast-converging sums: the first involves the distribution of point forces and accounts for the singular short-range part of the interactions, while the second is expressed in terms of the Fourier transform of the force distribution and accounts for the smooth and long-range part. Because of its smoothness, the second sum can be computed efficiently on an underlying grid using the fast Fourier transform algorithm, resulting in a significant speed-up of the calculations. Systems of up to 512 fibers were simulated on a single-processor workstation, providing a different insight into the formation, structure, and dynamics of the inhomogeneities that occur in sedimenting fiber suspensions.
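
    The essence of the SPME splitting is that the smooth, long-range part of the interaction becomes a convolution that can be evaluated on a grid with FFTs. The fragment below is a heavily simplified, self-contained illustration of that reciprocal-space step (Gaussian charge spreading instead of the B-spline interpolation of true SPME, energies only, Gaussian units); it is not the fiber-suspension code of the paper.

      import numpy as np

      def reciprocal_energy(positions, charges, L=10.0, n=32, sigma=0.5):
          """Reciprocal-space part of an Ewald-like sum: spread Gaussian
          charge clouds onto a periodic grid, solve Poisson's equation in
          Fourier space, and integrate rho * phi / 2 over the box."""
          h = L / n
          ax = np.arange(n) * h
          X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
          rho = np.zeros((n, n, n))
          for r, q in zip(positions, charges):
              d = np.stack([X - r[0], Y - r[1], Z - r[2]])
              d -= L * np.round(d / L)                       # minimum image
              rho += q * np.exp(-(d ** 2).sum(0) / (2.0 * sigma ** 2))
          rho /= (2.0 * np.pi * sigma ** 2) ** 1.5           # normalize Gaussians
          rho_k = np.fft.fftn(rho)
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
          KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
          k2 = KX ** 2 + KY ** 2 + KZ ** 2
          k2[0, 0, 0] = 1.0                    # avoid 0/0; k = 0 dropped below
          phi_k = 4.0 * np.pi * rho_k / k2     # Poisson in Fourier space
          phi_k[0, 0, 0] = 0.0                 # neutralizing-background convention
          return 0.5 * h ** 3 / n ** 3 * float(np.real(
              np.sum(np.conj(rho_k) * phi_k)))

      E = reciprocal_energy([(1.0, 1.0, 1.0), (2.5, 1.0, 1.0)], [1.0, -1.0])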

  13. High speed stereovision setup for position and motion estimation of fertilizer particles leaving a centrifugal spreader.

    PubMed

    Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G

    2014-11-13

    A 3D imaging technique using a high-speed binocular stereovision system was developed, together with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
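
    Once stereo matching has paired a particle across the two views, recovering its 3D position reduces to triangulation from disparity. The helper below is a generic rectified-pinhole sketch with invented calibration numbers, not the authors' calibrated pipeline; velocity then follows from positions in consecutive high-speed frames.

      def triangulate(u_left, v_left, u_right, f_px, baseline_m, cx, cy):
          """Rectified binocular stereo: depth Z = f * B / disparity, then
          back-project the left-image pixel through the pinhole model."""
          disparity = u_left - u_right          # pixels; > 0 for valid matches
          Z = f_px * baseline_m / disparity
          X = (u_left - cx) * Z / f_px
          Y = (v_left - cy) * Z / f_px
          return X, Y, Z

      # the same particle seen in two consecutive frames (hypothetical values)
      p0 = triangulate(651, 402, 598, f_px=2000.0, baseline_m=0.12, cx=640, cy=360)
      p1 = triangulate(655, 399, 601, f_px=2000.0, baseline_m=0.12, cx=640, cy=360)
      dt = 1.0 / 2000.0                         # 2000 fps high-speed camera
      velocity = tuple((b - a) / dt for a, b in zip(p0, p1))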

  14. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
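
    For contrast with the quantum version, the classical Metropolis step the abstract refers to fits in a few lines. The sketch below samples the Gibbs distribution of a 1D periodic Ising chain; it is our own minimal example of the classical algorithm, not anything from the paper.

      import math
      import random

      rng = random.Random(0)

      def metropolis_sweep(spins, beta, J=1.0):
          """One Metropolis sweep over a 1D periodic Ising chain with
          energy H = -J * sum_i s_i s_{i+1}: propose single-spin flips
          and accept with probability min(1, exp(-beta * dE))."""
          n = len(spins)
          for _ in range(n):
              i = rng.randrange(n)
              dE = 2.0 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
              if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                  spins[i] = -spins[i]

      spins = [rng.choice((-1, 1)) for _ in range(200)]
      for _ in range(500):
          metropolis_sweep(spins, beta=0.6)
      print(sum(spins) / len(spins))            # magnetization estimate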

  15. Application of particle swarm optimization in path planning of mobile robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    In order to realize optimal path planning for a mobile robot in an unknown environment, a particle swarm optimization algorithm using path length as the fitness function is proposed. The location of the global optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Simulation results show that, compared with the standard particle swarm optimization algorithm, this method can effectively avoid all obstacles and obtain the optimal path.
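
    A minimal sketch of the scheme described above, with path length as the fitness function; we add an obstacle penalty so that way-points avoid circular obstacles. All parameters are illustrative, and the original work was done in MATLAB rather than the Python used here.

      import numpy as np

      def pso_path(start, goal, obstacles, n_via=5, n_particles=30, iters=300):
          """Each particle encodes n_via 2D way-points; fitness is the
          length of start -> way-points -> goal plus a penalty for
          way-points inside circular obstacles given as (center, radius)."""
          rng = np.random.default_rng(0)
          dim = 2 * n_via
          x = rng.uniform(0.0, 10.0, (n_particles, dim))
          v = np.zeros_like(x)

          def fitness(flat):
              pts = np.vstack([start, flat.reshape(-1, 2), goal])
              length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
              pen = sum(np.maximum(0.0, r - np.linalg.norm(pts - c, axis=1)).sum()
                        for c, r in obstacles)
              return length + 100.0 * pen

          pbest = x.copy()
          pbest_f = np.array([fitness(p) for p in x])
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = x + v                                  # move the swarm
              f = np.array([fitness(p) for p in x])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              gbest = pbest[pbest_f.argmin()].copy()     # global best particle
          return np.vstack([start, gbest.reshape(-1, 2), goal])

      path = pso_path(np.array([0.0, 0.0]), np.array([10.0, 10.0]),
                      obstacles=[(np.array([5.0, 5.0]), 1.5)])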

  16. LAMMPS integrated materials engine (LIME) for efficient automation of particle-based simulations: application to equation of state generation

    NASA Astrophysics Data System (ADS)

    Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.

    2017-07-01

    We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.

  17. Novel Discrete Element Method for 3D non-spherical granular particles.

    NASA Astrophysics Data System (ADS)

    Seelen, Luuk; Padding, Johan; Kuipers, Hans

    2015-11-01

    Granular materials are common in many industries and in nature. Their properties, ranging from solid-like to fluid-like behavior, are well known but less well understood. The main aim of our work is to develop a discrete element method (DEM) to simulate non-spherical granular particles. The non-spherical shape of particles is important, as it controls the behavior of granular materials in many situations, such as static systems of packed particles, where the packing fraction is determined by the particle shape. We developed a novel 3D discrete element method that simulates particle-particle interactions for a wide variety of shapes. The model can simulate quadric shapes such as spheres, ellipsoids, and cylinders. More importantly, any convex polyhedron can be used as a granular particle shape; such polyhedra are well suited to represent non-rounded sand particles. The main difficulty of any non-spherical DEM is the determination of particle-particle overlap. Our model uses two iterative geometric algorithms to determine the overlap. The algorithms are robust and can also determine the multiple contact points that can occur for these shapes. With this method we are able to study applications such as the discharging of a hopper or silo, as well as the creation of a random close packing, to determine the solid volume fraction as a function of particle shape.

  18. Dynamic load balance scheme for the DSMC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jin; Geng, Xiangren; Jiang, Dingwu

    The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is well suited to parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles changes dramatically, and a load balance based upon the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weight information, a repartitioning based upon the principle that each processor handles approximately an equal total number of simulator particles has been achieved. The computation pauses several times to renew the particle counts in each processor and repartition the whole domain, so that load balance across the processor array holds for the duration of the computation and parallel efficiency is improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
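
    The repartitioning principle is easy to state: redistribute cells so every processor carries roughly the same number of simulator particles. The greedy sketch below, our own illustration, balances the weights only; the METIS-based partitioning used in the paper additionally keeps each processor's cells spatially contiguous, which a pure weight balance ignores.

      import heapq

      def repartition(cell_particle_counts, n_procs):
          """Greedy rebalance: hand out cells, heaviest first, to the
          currently lightest processor. Returns {proc_id: [cell ids]}."""
          heap = [(0, pid, []) for pid in range(n_procs)]    # (load, id, cells)
          heapq.heapify(heap)
          for cell in sorted(range(len(cell_particle_counts)),
                             key=lambda c: -cell_particle_counts[c]):
              load, pid, cells = heapq.heappop(heap)
              cells.append(cell)
              heapq.heappush(heap, (load + cell_particle_counts[cell], pid, cells))
          return {pid: cells for _, pid, cells in heap}

      # particle counts are renewed periodically as the flow develops
      print(repartition([120, 5, 80, 3, 200, 40, 40], n_procs=3))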

  19. Smoothed Particle Hydrodynamic Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  20. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jianyuan; Qin, Hong; Liu, Jian

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.
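
    The high-order schemes of the paper involve far more machinery than fits here, but the flavor of structure preservation in particle pushing is conveyed by the classic Boris update, a volume-preserving kick-rotate-kick step for a charged particle in given E and B fields. This is an illustrative, simpler relative, not the algorithm of the paper.

      import numpy as np

      def boris_push(x, v, E, B, q_over_m, dt):
          """One Boris step: half electric kick, magnetic rotation, half
          electric kick; preserves phase-space volume and bounds energy
          errors over long integrations."""
          v_minus = v + 0.5 * q_over_m * dt * E
          t = 0.5 * q_over_m * dt * B                  # rotation vector
          s = 2.0 * t / (1.0 + t @ t)
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)      # rotated velocity
          v_new = v_plus + 0.5 * q_over_m * dt * E
          return x + dt * v_new, v_new

      x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
      for _ in range(1000):                            # gyration about B = z_hat
          x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                            q_over_m=1.0, dt=0.05)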

  1. Vectorization of a particle simulation method for hypersonic rarefied flow

    NASA Technical Reports Server (NTRS)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  2. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.

  3. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, and thereby the quality of the time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm achieves higher time synchronization precision than traditional time synchronization algorithms.
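
    As a stripped-down illustration of the filtering side of this approach, the sketch below runs a plain bootstrap particle filter on a linear clock model (offset plus skew) with a fixed two-component Gaussian-mixture delay. The actual algorithm instead Rao-Blackwellises the linear states with a Kalman filter and learns the delay distribution with a DPM; all parameter values here are invented.

      import numpy as np

      def pf_clock(offset_measurements, dt, n_p=500, seed=0):
          """Bootstrap particle filter for clock offset/skew under a
          two-component Gaussian-mixture delay (weights/scales invented)."""
          rng = np.random.default_rng(seed)
          off = rng.normal(0.0, 1.0, n_p)              # offset hypotheses (s)
          skw = rng.normal(0.0, 0.01, n_p)             # skew hypotheses (s/s)
          for y in offset_measurements:
              off += skw * dt + rng.normal(0.0, 1e-4, n_p)    # propagate
              skw += rng.normal(0.0, 1e-5, n_p)
              resid = y - off
              # mixture likelihood: mostly small delays, occasional large ones
              w = (0.8 * np.exp(-0.5 * (resid / 0.01) ** 2) / 0.01 +
                   0.2 * np.exp(-0.5 * ((resid - 0.05) / 0.05) ** 2) / 0.05)
              w /= w.sum()
              idx = rng.choice(n_p, size=n_p, p=w)     # multinomial resampling
              off, skw = off[idx], skw[idx]
          return off.mean(), skw.mean()

      truth = 0.3 + 0.002 * np.arange(50)              # drifting true offset
      meas = truth + np.random.default_rng(1).normal(0.0, 0.01, 50)
      print(pf_clock(meas, dt=1.0))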

  4. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale application of geometric algorithms to particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can significantly disperse the pitch-angle distribution and, at the same time, improve the confinement of the energetic runaway beam.

  5. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations, also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides good simulation performance on quite modest numbers of standard cluster nodes.

  6. The Microwave Radiative Properties of Falling Snow Derived from Nonspherical Ice Particle Models. Part II: Initial Testing Using Radar, Radiometer and In Situ Observations

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Tian, Lin; Grecu, Mircea; Kuo, Kwo-Sen; Johnson, Benjamin; Heymsfield, Andrew J.; Bansemer, Aaron; Heymsfield, Gerald M.; Wang, James R.; Meneghini, Robert

    2016-01-01

    In this study, two different particle models describing the structure and electromagnetic properties of snow are developed and evaluated for potential use in satellite combined radar-radiometer precipitation estimation algorithms. In the first model, snow particles are assumed to be homogeneous ice-air spheres with single-scattering properties derived from Mie theory. In the second model, snow particles are created by simulating the self-collection of pristine ice crystals into aggregate particles of different sizes, using different numbers and habits of the collected component crystals. Single-scattering properties of the resulting nonspherical snow particles are determined using the discrete dipole approximation. The size-distribution-integrated scattering properties of the spherical and nonspherical snow particles are incorporated into a dual-wavelength radar profiling algorithm that is applied to 14- and 34-GHz observations of stratiform precipitation from the ER-2 aircraft-borne High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP) radar. The retrieved ice precipitation profiles are then input to a forward radiative transfer calculation in an attempt to simulate coincident radiance observations from the Conical Scanning Millimeter-Wave Imaging Radiometer (CoSMIR). Much greater consistency between the simulated and observed CoSMIR radiances is obtained using estimated profiles that are based upon the nonspherical crystal/aggregate snow particle model. Despite this greater consistency, there remain some discrepancies between the higher moments of the HIWRAP-retrieved precipitation size distributions and in situ distributions derived from microphysics probe observations obtained from Citation aircraft underflights of the ER-2. These discrepancies can only be eliminated if a subset of lower-density crystal/aggregate snow particles is assumed in the radar algorithm and in the interpretation of the in situ data.

  7. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  8. Application of a novel particle tracking algorithm in the flow visualization of an artificial abdominal aortic aneurysm.

    PubMed

    Zhang, Yang; Wang, Yuan; He, Wenbo; Yang, Bin

    2014-01-01

    A novel Particle Tracking Velocimetry (PTV) algorithm based on the Voronoi Diagram (VD), abbreviated VD-PTV, is proposed. The robustness of VD-PTV for pulsatile flow is verified through a test that includes a widely used artificial flow and a classic reference algorithm. The proposed algorithm is then applied to visualize the flow in an artificial abdominal aortic aneurysm included in a pulsatile circulation system that simulates aortic blood flow in the human body. Results show that large particles tend to gather at the upstream boundary because of the backflow eddies that follow the pulsation. This qualitative description, together with VD-PTV, lays a foundation for future work that demands high-level quantification.

  9. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). Simulations show that PSOGA can improve the optimal solution obtained by PSO alone.

  10. About improving efficiency of the P3M algorithms when computing the inter-particle forces in beam dynamics

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2017-03-01

    In the paper, the problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm for computing inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified so that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge, so that the "pattern" contains pre-calculated potential values. This approach reduces the set of arithmetic operations performed in the innermost nested loop to addition and assignment, and therefore decreases the running time substantially. A simulation model developed in C++ substantiates this view, showing accuracy acceptable for particle beam calculations together with improved speed.
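
    The "pattern" idea reduces the mesh step to slice-and-add operations. Below is a small 2D NumPy sketch of our own (the paper's code is C++): pre-compute the potential stencil of a unit charge once, then deposit each particle's contribution with a single array addition.

      import numpy as np

      def make_pattern(R, h=1.0):
          """Pre-computed 1/r potential of a unit point charge on a
          (2R+1) x (2R+1) stencil; the self-cell is excluded."""
          ax = np.arange(-R, R + 1) * h
          X, Y = np.meshgrid(ax, ax, indexing='ij')
          r = np.hypot(X, Y)
          r[R, R] = np.inf                     # no self-interaction
          return 1.0 / r

      def deposit(grid, particles, pattern):
          """Add each charge's pattern with one slice-assignment, the
          addition-and-assignment inner loop the paper aims for.
          Particles are assumed at least R cells from the boundary."""
          R = pattern.shape[0] // 2
          for (i, j), q in particles:          # grid indices and charge
              grid[i - R:i + R + 1, j - R:j + R + 1] += q * pattern
          return grid

      grid = np.zeros((64, 64))
      pattern = make_pattern(R=8)
      deposit(grid, [((20, 20), 1.0), ((40, 31), -1.0)], pattern)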

  11. Cell-veto Monte Carlo algorithm for long-range systems.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2016-09-01

    We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.

  12. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to issues such as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  13. Kassiopeia: a modern, extensible C++ particle tracking package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furse, Daniel; Groh, Stefan; Trost, Nikolaus

    The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle's state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.

  14. Kassiopeia: a modern, extensible C++ particle tracking package

    DOE PAGES

    Furse, Daniel; Groh, Stefan; Trost, Nikolaus; ...

    2017-05-16

    The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle's state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.

  15. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm

    PubMed Central

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
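
    The gist of the modification is that the assignment cost combines distance with a term that grows with the current class size. The sketch below uses a fixed penalty weight alpha on random stand-in feature vectors; the published algorithm adapts its constraint rather than fixing it, so treat this purely as a toy.

      import numpy as np

      def constrained_kmeans(X, k, alpha=1.0, iters=50, seed=0):
          """K-means whose assignment cost is squared distance plus
          alpha * (current class size), discouraging uneven classes."""
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), size=k, replace=False)].copy()
          labels = rng.integers(0, k, size=len(X))
          for _ in range(iters):
              d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              sizes = np.bincount(labels, minlength=k)
              labels = (d2 + alpha * sizes[None, :]).argmin(1)   # penalized
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = X[labels == j].mean(0)
          return labels, centers

      X = np.random.default_rng(1).normal(size=(300, 16))   # stand-in features
      labels, centers = constrained_kmeans(X, k=5, alpha=0.5)
      print(np.bincount(labels, minlength=5))               # class sizes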

  16. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    PubMed

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.

  17. GPU-accelerated algorithms for many-particle continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo

    2017-06-01

    Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor-series expansion of the evolution operator, and compare its performance with that of algorithms based on exact diagonalization of the Hamiltonian or 4th-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation that does not depend on the precision of the calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing with respect to the OPENMP parallelization lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, making simulations of many interacting particles on large lattices possible, with the only limit being the memory available on the device.
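
    The memory advantage of the Taylor scheme mentioned above comes from accumulating the series term by term, so only the current term and the running sum are stored regardless of the truncation order. Below is a minimal single-threaded NumPy version of that propagator, our own sketch rather than the paper's CUDA/OpenMP codes.

      import numpy as np

      def taylor_propagate(H, psi, dt, order=20):
          """Approximate exp(-1j * H * dt) @ psi by a truncated Taylor
          series, building term k from term k-1 so memory use is two
          state vectors, independent of the requested order."""
          term = psi.astype(complex)
          out = term.copy()
          for k in range(1, order + 1):
              term = (-1j * dt / k) * (H @ term)   # next Taylor term
              out += term
          return out

      # two-site hopping Hamiltonian as a toy check; the norm stays ~1
      H = np.array([[0.0, -1.0], [-1.0, 0.0]])
      psi = taylor_propagate(H, np.array([1.0, 0.0]), dt=0.3)
      print(np.linalg.norm(psi))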

  18. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple access interference, struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO), integrating the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.

  19. Multiscale simulations of anisotropic particles combining molecular dynamics and Green's function reaction dynamics

    NASA Astrophysics Data System (ADS)

    Vijaykumar, Adithya; Ouldridge, Thomas E.; ten Wolde, Pieter Rein; Bolhuis, Peter G.

    2017-03-01

    The modeling of complex reaction-diffusion processes in, for instance, cellular biochemical networks or self-assembling soft matter can be tremendously sped up by employing a multiscale algorithm which combines the mesoscopic Green's Function Reaction Dynamics (GFRD) method with explicit stochastic Brownian, Langevin, or deterministic molecular dynamics to treat reactants at the microscopic scale [A. Vijaykumar, P. G. Bolhuis, and P. R. ten Wolde, J. Chem. Phys. 143, 214102 (2015)]. Here we extend this multiscale MD-GFRD approach to include the orientational dynamics that is crucial to describe the anisotropic interactions often prevalent in biomolecular systems. We present the novel algorithm focusing on Brownian dynamics only, although the methodology is generic. We illustrate the novel algorithm using a simple patchy particle model. After validation of the algorithm, we discuss its performance. The rotational Brownian dynamics MD-GFRD multiscale method will open up the possibility for large scale simulations of protein signalling networks.

  20. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton

    2016-10-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g. the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms towards their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of aerosol physics. We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 µm in our simulations where the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15 % in the simulation studies. We target 50 % uncertainty as a reasonable threshold for our data products, though we attempt to obtain data products with less uncertainty in future work.

  1. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jianyuan; Liu, Jian; He, Yang

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: nonlinear Landau damping and the electron Bernstein wave.

  2. Efficient Green's Function Reaction Dynamics (GFRD) simulations for diffusion-limited, reversible reactions

    NASA Astrophysics Data System (ADS)

    Bashardanesh, Zahedeh; Lötstedt, Per

    2018-03-01

    In diffusion-controlled reversible bimolecular reactions in three dimensions, a dissociation step is typically followed by multiple rapid re-association steps, slowing down simulations of such systems. In order to improve efficiency, we first derive an exact Green's function describing the rate at which an isolated pair of particles undergoing reversible bimolecular reactions and unimolecular decay separates beyond an arbitrarily chosen distance. The Green's function is then used in an algorithm for particle-based stochastic reaction-diffusion simulations to predict the dynamics of biochemical networks. The accuracy and efficiency of the algorithm are evaluated using a reversible reaction and a push-pull chemical network. The computational work is independent of the rates of the re-associations.
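
    The quantity such a Green's function delivers analytically, the first time at which a pair's separation exceeds a chosen shell radius, can always be checked by brute-force Brownian simulation of the pair's relative coordinate. The sketch below, our own illustration with reactions and the contact boundary omitted for brevity, shows exactly the many small diffusive hops that the GFRD-style algorithm avoids.

      import numpy as np

      def escape_time(r0, R, D=1.0, dt=1e-4, seed=0, max_steps=10 ** 7):
          """Brute-force first-passage time for the relative coordinate of
          a particle pair (diffusivity D = D1 + D2) starting at separation
          r0 to first exceed the shell radius R."""
          rng = np.random.default_rng(seed)
          r = np.array([r0, 0.0, 0.0])
          for step in range(1, max_steps + 1):
              r += rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)
              if r @ r > R * R:
                  return step * dt
          return np.inf                    # did not escape within the budget

      samples = [escape_time(0.5, 2.0, seed=s) for s in range(10)]
      print(np.mean(samples))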

  3. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways of testing for convergence fail, leading to a plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.

  4. Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    PubMed Central

    Stover, Lori J.; Nair, Niketh S.; Faeder, James R.

    2014-01-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility. PMID:24699269

  5. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.

  6. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2015-09-01

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for 2-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models for understanding the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. As many reactions of this type occur in biological systems, this algorithm might also play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.

  7. Dispersion Analysis Using Particle Tracking Simulations Through Heterogeneity Based on Outcrop Lidar Imagery

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; Weissmann, G. S.; McKenna, S. A.; Tidwell, V. C.; Frechette, J. D.; Wawrzyniec, T. F.

    2007-12-01

    Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Parameter Estimation of Fractional-Order Chaotic Systems by Using Quantum Parallel Particle Swarm Optimization Algorithm

    PubMed Central

    Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng

    2015-01-01

    Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization, and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic exponentially increases the computational capacity of each generation. The behavior of particles in quantum space is governed by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations based on several typical fractional-order systems, and comparisons with some typical existing algorithms, show the effectiveness and efficiency of the proposed algorithm. PMID:25603158

  9. On Efficient Multigrid Methods for Materials Processing Flows with Small Particles

    NASA Technical Reports Server (NTRS)

    Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael

    2004-01-01

    Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by interactively employing different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.

  10. A versatile model for soft patchy particles with various patch arrangements.

    PubMed

    Li, Zhan-Wei; Zhu, You-Liang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2016-01-21

    We propose a simple and general mesoscale soft patchy particle model, which can felicitously describe the deformable and surface-anisotropic characteristics of soft patchy particles. This model can be used in dynamics simulations to investigate the aggregation behavior and mechanism of various types of soft patchy particles with tunable number, size, direction, and geometrical arrangement of the patches. To improve the computational efficiency of this mesoscale model in dynamics simulations, we give the simulation algorithm that fits the compute unified device architecture (CUDA) framework of NVIDIA graphics processing units (GPUs). The validation of the model and the performance of the simulations using GPUs are demonstrated by simulating several benchmark systems of soft patchy particles with 1 to 4 patches in a regular geometrical arrangement. Because of its simplicity and computational efficiency, the soft patchy particle model will provide a powerful tool to investigate the aggregation behavior of soft patchy particles, such as patchy micelles, patchy microgels, and patchy dendrimers, over larger spatial and temporal scales.

  11. PICsar: Particle in cell pulsar magnetosphere simulator

    NASA Astrophysics Data System (ADS)

    Belyaev, Mikhail A.

    2016-07-01

    PICsar simulates the magnetosphere of an aligned axisymmetric pulsar and can be used to simulate other arbitrary electromagnetics problems in axisymmetry. Written in Fortran, this special-relativistic, electromagnetic, charge-conserving particle-in-cell code features stretchable body-fitted coordinates that follow the surface of a sphere, simplifying the application of boundary conditions in the case of the aligned pulsar; a radiation-absorbing outer boundary, which allows a steady state to be set up dynamically and maintained indefinitely from transient initial conditions; and algorithms for injection of charged particles into the simulation domain. PICsar is parallelized using MPI and has been used on research problems with 1000 CPUs.

  12. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    PubMed

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial stochastic simulation algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which include dynamically and hierarchically nested cellular compartments and entities. Our approach ML-Space combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt the spatial resolution of models.

  13. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than a PSO approach previously reported in the literature.
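    For reference, the canonical PSO velocity and position update underlying approaches of this kind can be sketched as follows; the inertia and acceleration coefficients are common textbook values, not the authors' settings.

      import random

      def pso_step(xs, vs, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
          """One canonical PSO update over all particles and dimensions."""
          for i, (x, v) in enumerate(zip(xs, vs)):
              for d in range(len(x)):
                  r1, r2 = random.random(), random.random()
                  v[d] = (w * v[d]
                          + c1 * r1 * (pbest[i][d] - x[d])   # cognitive pull
                          + c2 * r2 * (gbest[d] - x[d]))     # social pull
                  x[d] += v[d]

    In the scheduling setting, each position vector would encode a candidate job-to-resource assignment, and the fitness evaluated after each step would combine makespan and flowtime as described above.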

  14. An efficient mixed-precision, hybrid CPU-GPU implementation of a nonlinearly implicit one-dimensional particle-in-cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Barnes, Daniel C

    2012-01-01

    Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility, and dramatically improves the solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations are employed with the aid of an analytical performance model, the roofline model. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with a double-precision CPU JFNK field solver, overall performance gains of about 100× vs. the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.

  15. An assessment of 'shuffle algorithm' collision mechanics for particle simulations

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Boyd, Iain D.

    1991-01-01

    Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, simulations were performed of flows in monoatomic gases, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculation is mean profiles of density and temperature.

  16. Quantum Simulation of Tunneling in Small Systems

    PubMed Central

    Sornborger, Andrew T.

    2012-01-01

    A number of quantum algorithms have been performed on small quantum computers; these include Shor's prime factorization algorithm, error correction, Grover's search algorithm and a number of analog and digital quantum simulations. Because of the number of gates and qubits necessary, however, digital quantum particle simulations remain untested. A contributing factor to the system size required is the number of ancillary qubits needed to implement matrix exponentials of the potential operator. Here, we show that a set of tunneling problems may be investigated with no ancillary qubits and a cost of one single-qubit operator per time step for the potential evolution, eliminating at least half of the quantum gates required for the algorithm and more than that in the general case. Such simulations are within reach of current quantum computer architectures. PMID:22916333

  17. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  18. Kassiopeia: a modern, extensible C++ particle tracking package

    NASA Astrophysics Data System (ADS)

    Furse, Daniel; Groh, Stefan; Trost, Nikolaus; Babutzka, Martin; Barrett, John P.; Behrens, Jan; Buzinsky, Nicholas; Corona, Thomas; Enomoto, Sanshiro; Erhard, Moritz; Formaggio, Joseph A.; Glück, Ferenc; Harms, Fabian; Heizmann, Florian; Hilk, Daniel; Käfer, Wolfgang; Kleesiek, Marco; Leiber, Benjamin; Mertens, Susanne; Oblath, Noah S.; Renschler, Pascal; Schwarz, Johannes; Slocum, Penny L.; Wandkowsky, Nancy; Wierman, Kevin; Zacher, Michael

    2017-05-01

    The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle’s state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.

  19. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  1. High-precision tracking of Brownian boomerang colloidal particles confined in quasi two dimensions.

    PubMed

    Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo

    2013-11-26

    In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
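    As an illustration of this kind of trajectory analysis, the sketch below estimates a diffusion coefficient from the time-averaged mean square displacement of a synthetic 2D trajectory (for free 2D diffusion, MSD(τ) = 4Dτ); the frame interval and the true D are hypothetical.

      import numpy as np

      def msd(traj, max_lag):
          """Time-averaged MSD of a 2D trajectory of shape (T, 2)."""
          return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                           for lag in range(1, max_lag + 1)])

      dt = 0.01                                 # frame interval in seconds (hypothetical)
      D_true = 0.5                              # um^2/s, used to synthesize the track
      steps = np.sqrt(2 * D_true * dt) * np.random.randn(5000, 2)
      traj = np.cumsum(steps, axis=0)
      lags = np.arange(1, 21) * dt
      D_est = np.polyfit(lags, msd(traj, 20), 1)[0] / 4.0   # slope = 4D in 2D
      print(f"estimated D = {D_est:.3f}, true D = {D_true}")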

  2. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated using the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
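    The two ingredients combined by the hybrid, Lévy-flight steps and a simulated annealing acceptance test, can be sketched as below; Mantegna's algorithm for the Lévy step and the exponent β = 1.5 are standard choices, not necessarily the authors' exact parameterization.

      import math
      import random

      def levy_step(beta=1.5):
          """One Levy-flight step via Mantegna's algorithm."""
          sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                     / (math.gamma((1 + beta) / 2) * beta
                        * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = random.gauss(0.0, sigma_u)
          v = random.gauss(0.0, 1.0)
          return u / abs(v) ** (1 / beta)

      def sa_accept(delta, temperature):
          """Metropolis acceptance: take improvements, sometimes take worse moves."""
          return delta < 0 or random.random() < math.exp(-delta / temperature)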

  3. Classification of nanoparticle diffusion processes in vital cells by a multifeature random forests approach: application to simulated data, darkfield, and confocal laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Wagner, Thorsten; Kroll, Alexandra; Wiemann, Martin; Lipinski, Hans-Gerd

    2016-04-01

    Darkfield and confocal laser scanning microscopy both allow for a simultaneous observation of live cells and single nanoparticles. Accordingly, a characterization of nanoparticle uptake and intracellular mobility appears possible within living cells. Single particle tracking makes it possible to characterize the particle and the surrounding cell. In the case of free diffusion, the mean squared displacement for each trajectory of a nanoparticle can be measured, which allows computing the corresponding diffusion coefficient and, if desired, converting it into the hydrodynamic diameter using the Stokes-Einstein equation and the viscosity of the fluid. However, within the more complex system of a cell's cytoplasm unrestrained diffusion is scarce and several other types of movement may occur. Thus, confined or anomalous diffusion (e.g. diffusion in porous media), active transport, and combinations thereof have been described by several authors. To distinguish between these types of particle movement we developed an appropriate classification method, and simulated three types of particle motion in a 2D plane using a Monte Carlo approach: (1) normal diffusion, using random direction and step length, (2) subdiffusion, using confinements like a reflective boundary with defined radius or reflective objects in the closer vicinity, and (3) superdiffusion, using a directed flow added to the normal diffusion. To simulate subdiffusion we devised a new method based on tracks of different length combined with equally probable obstacle interaction. Next we estimated the fractal dimension, the elongation, and the ratio of long-time to short-time diffusion coefficients. These features were used to train a random forests classification algorithm. The accuracy for simulated trajectories with 180 steps was 97% (95% CI: 0.9481-0.9884). The balanced accuracy was 94%, 99% and 98% for normal, sub- and superdiffusion, respectively. Nanoparticle tracking analysis was used with 100 nm polystyrene particles to obtain trajectories for normal diffusion. As a next step we identified diffusion types of nanoparticles in vital cells and incubated V79 fibroblasts with 50 nm gold nanoparticles, which appeared as intensely bright objects due to their surface plasmon resonance. The movement of particles in both the extracellular and intracellular space was observed by darkfield and confocal laser scanning microscopy. After reducing background noise from the video it became possible to identify individual particle spots by a maximum detection algorithm and trace them using the robust single-particle tracking algorithm proposed by Jaqaman, which is able to handle motion heterogeneity and particle disappearance. The particle trajectories inside cells indicated active transport (superdiffusion) as well as subdiffusion. Eventually, the random forest classification algorithm, after being trained on the above simulations, successfully classified the trajectories observed in live cells.
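    A minimal sketch of the classify-by-features idea follows, assuming scikit-learn is available. The two features used here (straightness and step-length dispersion) are simple stand-ins for the fractal dimension, elongation, and diffusion-coefficient ratio used in the study, and the synthetic tracks use crude stand-in motion models.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def make_track(kind, n=180, dt=1.0, D=0.1):
          """kind: 0 = normal, 1 = confined (sub-), 2 = drift (super-)."""
          steps = np.sqrt(2 * D * dt) * np.random.randn(n, 2)
          if kind == 2:
              steps += 0.05 * dt                       # constant drift
          track = np.cumsum(steps, axis=0)
          if kind == 1:                                # crude confinement to a unit disc
              r = np.linalg.norm(track, axis=1)
              track *= np.minimum(1.0, 1.0 / np.maximum(r, 1e-9))[:, None]
          return track

      def features(track):
          step = np.linalg.norm(np.diff(track, axis=0), axis=1)
          straightness = np.linalg.norm(track[-1] - track[0]) / step.sum()
          dispersion = step.var() / (step.mean() ** 2 + 1e-12)
          return [straightness, dispersion]

      X, y = [], []
      for kind in (0, 1, 2):
          for _ in range(200):
              X.append(features(make_track(kind)))
              y.append(kind)
      clf = RandomForestClassifier(n_estimators=100).fit(X, y)
      print("training accuracy:", clf.score(X, y))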

  4. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information from other nodes which is necessary for the interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some method to limit the calculation to neighbor particles is required. FDPS provides all of these functions, which are necessary for efficient parallel execution of particle-based simulations, as "templates" that are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and the communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
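    The O(N²) baseline that FDPS asks the user to supply can be as small as the following direct-sum gravity kernel (a generic sketch with G = 1 and an arbitrarily chosen softening, not FDPS code); domain decomposition, particle exchange, and tree-based acceleration would then be delegated to the framework.

      import numpy as np

      def gravity_direct(pos, mass, eps=1e-3):
          """Direct O(N^2) gravitational accelerations with Plummer softening."""
          acc = np.zeros_like(pos)
          for i in range(len(mass)):
              d = pos - pos[i]                          # vectors to every particle
              r2 = np.sum(d * d, axis=1) + eps ** 2     # softened squared distances
              r2[i] = np.inf                            # exclude self-interaction
              acc[i] = np.sum((mass / r2 ** 1.5)[:, None] * d, axis=0)
          return acc

      pos = np.random.randn(1000, 3)
      mass = np.full(1000, 1.0 / 1000)
      acc = gravity_direct(pos, mass)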

  5. A comparative study of AGN feedback algorithms

    NASA Astrophysics Data System (ADS)

    Wurster, J.; Thacker, R. J.

    2013-05-01

    Modelling active galactic nuclei (AGN) feedback in numerical simulations is both technically and theoretically challenging, with numerous approaches having been published in the literature. We present a study of five distinct approaches to modelling AGN feedback within gravitohydrodynamic simulations of major mergers of Milky Way-sized galaxies. To constrain differences to only be between AGN feedback models, all simulations start from the same initial conditions and use the same star formation algorithm. Most AGN feedback algorithms have five key aspects: the black hole accretion rate, energy feedback rate and method, particle accretion algorithm, black hole advection algorithm and black hole merger algorithm. All models follow different accretion histories, and in some cases, accretion rates differ by up to three orders of magnitude at any given time. We consider models with either thermal or kinetic feedback, with the associated energy deposited locally around the black hole. Each feedback algorithm modifies the region around the black hole to different extents, yielding gas densities and temperatures within r ~ 200 pc that differ by up to six orders of magnitude at any given time. The particle accretion algorithms usually maintain good agreement between the total mass accreted by ∫ Ṁ dt and the total mass of gas particles removed from the simulation, although not all algorithms guarantee this to be true. The black hole advection algorithms dampen inappropriate dragging of the black holes by two-body interactions. Advecting the black hole a limited distance based upon local mass distributions has many desirable properties, such as avoiding large artificial jumps and allowing the possibility of the black hole remaining in a gas void. Lastly, two black holes instantly merge when given criteria are met, and we find a range of merger times for different criteria. This is important since the AGN feedback rate changes across the merger in a way that is dependent on the specific accretion algorithm used. Using the M_BH-σ relation as a diagnostic of the remnants yields three models that lie within the one-sigma scatter of the observed relation and two that fall below the expected relation. The wide variation in accretion behaviours of the models reinforces the fact that there remains much to be learnt about the evolution of galactic nuclei.

  6. Advanced particle-in-cell simulation techniques for modeling the Lockheed Martin Compact Fusion Reactor

    NASA Astrophysics Data System (ADS)

    Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David

    2017-10-01

    We report on particle-in-cell developments for the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic, and charge exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of Poisson-corrected direct implicit, magnetic implicit, and second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic approach but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.

  7. Evaluation of stochastic differential equation approximation of ion channel gating models.

    PubMed

    Bruce, Ian C

    2009-04-01

    Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
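    The per-gate Langevin update at the heart of the Fox and Lu approximation can be sketched as follows, using standard Hodgkin-Huxley potassium rate functions for illustration; the uncorrelated, zero-mean Gaussian noise term in this update is precisely the assumption the study identifies as inadequate.

      import numpy as np

      def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
      def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

      def langevin_gate_step(n, V, N, dt, rng):
          """One Euler-Maruyama step for the fraction n of open gating particles;
          the noise amplitude scales as 1/sqrt(N) for N channels."""
          a, b = alpha_n(V), beta_n(V)
          drift = a * (1 - n) - b * n
          diff = np.sqrt(max(a * (1 - n) + b * n, 0.0) / N)
          n_new = n + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
          return min(max(n_new, 0.0), 1.0)   # clamp the fraction to [0, 1]

      rng = np.random.default_rng(0)
      n, V = 0.32, -65.0                     # near steady state at rest
      for _ in range(1000):
          n = langevin_gate_step(n, V, N=600, dt=0.01, rng=rng)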

  8. Rotational Brownian Dynamics simulations of clathrin cage formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede

    2014-08-14

    The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithm to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.

  9. Adaptively restrained molecular dynamics in LAMMPS

    NASA Astrophysics Data System (ADS)

    Kant Singh, Krishna; Redon, Stephane

    2017-07-01

    Adaptively restrained molecular dynamics (ARMD) is a recently introduced particle simulation method that switches positional degrees of freedom on and off during simulation in order to speed up calculations. In the NVE ensemble, ARMD allows users to trade between precision and speed while, in the NVT ensemble, it makes it possible to compute statistical averages faster. Despite the conceptual simplicity of the approach, however, integrating it into existing molecular dynamics packages is non-trivial, in particular since implemented potentials would a priori have to be rewritten to take advantage of frozen particles and achieve a speed-up. In this paper, we present novel algorithms for integrating ARMD into LAMMPS, a popular multi-purpose molecular simulation package. In particular, we demonstrate how to enable ARMD in LAMMPS without having to re-implement all available force fields. The proposed algorithms are assessed on four different benchmarks, and show how they allow us to speed up simulations by up to one order of magnitude.

  10. Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    PubMed Central

    Orio, Patricio; Soudry, Daniel

    2012-01-01

    Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320

  11. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of the multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms, chosen for an optimal trade-off between accuracy and speed, demonstrates that such simulations are feasible with currently available HPC resources. PMID:27570250

  12. Particle identification algorithms for the PANDA Endcap Disc DIRC

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.

    2017-12-01

    The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, which will be applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested on a Virtex-4 FPGA card and optimized with respect to the resulting constraints.

  13. High-Speed Particle-in-Cell Simulation Parallelized with Graphic Processing Units for Low Temperature Plasmas for Material Processing

    NASA Astrophysics Data System (ADS)

    Hur, Min Young; Verboncoeur, John; Lee, Hae June

    2014-10-01

    Particle-in-cell (PIC) simulations offer higher fidelity than fluid simulations for plasma devices that require transient kinetic modeling. They make fewer approximations to the plasma kinetics, but require many particles and grid cells to obtain meaningful results, so the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high performance computing. In this research, a graphic processing unit (GPU) is adopted for high performance computing of PIC simulation for low temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs with hundreds of cores were used for the test, showing cost-effective performance. The PIC algorithm is divided into two modules: a field solver and a particle mover. The particle mover module is divided into four routines: move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motions as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single CPU code. This work was supported by the Korea Institute of Science Technology Information.
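    The module split described above can be illustrated by a minimal serial 1D electrostatic PIC cycle with periodic boundaries; this is a generic sketch, not the GPU code, and a production code would stagger the velocity push (leapfrog) and include the collision routine.

      import numpy as np

      def pic_step(xp, vp, dt, ng, L, qm=-1.0):
          """One cycle: deposit -> field solve -> move -> boundary (MCC omitted)."""
          dx = L / ng
          # deposit: cloud-in-cell (linear) weighting of charge to the grid
          g = xp / dx
          i = np.floor(g).astype(int) % ng
          f = g - np.floor(g)
          rho = (np.bincount(i, 1 - f, minlength=ng)
                 + np.bincount((i + 1) % ng, f, minlength=ng)) / dx
          rho -= rho.mean()                         # neutralizing ion background
          # field solver: Poisson equation in Fourier space, then E = -dphi/dx
          k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
          k[0] = 1.0                                # avoid divide-by-zero (mean mode is 0)
          phi = np.real(np.fft.ifft(np.fft.fft(rho) / k ** 2))
          E = -np.gradient(phi, dx)
          # move: gather E to the particles, advance velocity and position
          Ep = (1 - f) * E[i] + f * E[(i + 1) % ng]
          vp += qm * Ep * dt
          xp += vp * dt
          # boundary: periodic wrap (a Monte Carlo collision routine would go here)
          return xp % L, vp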

  14. Fuzzy PID control algorithm based on PSO and application in BLDC motor

    NASA Astrophysics Data System (ADS)

    Lin, Sen; Wang, Guanglong

    2017-06-01

    A fuzzy PID control algorithm based on improved particle swarm optimization (PSO) is studied for Brushless DC (BLDC) motor control, offering high accuracy, good disturbance rejection, and good steady-state accuracy compared with traditional PID control. A mathematical and simulation model of the BLDC motor is established in Simulink, and the speed loop of the fuzzy PID controller is designed. The simulation results show that the fuzzy PID control algorithm based on PSO has higher stability, higher control precision, and faster dynamic response.

  15. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lèvy flight.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single-parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmark IIR systems, using both same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
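    The patch-based balancing idea can be sketched as follows: order the patches along a space-filling curve (Z-order/Morton here) and cut the curve into contiguous runs of roughly equal load. This is a toy illustration of the concept, not PSC's implementation.

      def morton(ix, iy, bits=16):
          """Interleave the bits of 2D patch indices into a Z-order key."""
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
          return key

      def balance(patches, loads, nranks):
          """Assign contiguous curve segments to ranks with roughly equal work."""
          order = sorted(range(len(patches)), key=lambda p: morton(*patches[p]))
          target = sum(loads) / nranks
          assign, rank, acc = {}, 0, 0.0
          for p in order:
              if acc >= target and rank < nranks - 1:
                  rank, acc = rank + 1, 0.0         # start filling the next rank
              assign[p] = rank
              acc += loads[p]
          return assign

      # 4x4 patch grid with a hypothetical hot corner that is 10x more expensive
      patches = [(ix, iy) for ix in range(4) for iy in range(4)]
      loads = [10.0 if ix < 2 and iy < 2 else 1.0 for ix, iy in patches]
      print(balance(patches, loads, nranks=4))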

  17. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes a method for finding optimal PID controller parameters using particle swarm optimization (PSO), genetic algorithm (GA) and simulated annealing (SA). The algorithms were applied, through simulations of a chemical process and an electrical system, to tune the PID controller. Two different fitness functions, integral time absolute error (ITAE) and time-domain specifications, were chosen and used with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done across the algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
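    As an example of the first fitness function, the ITAE cost of a candidate gain set can be evaluated by a simple Euler simulation of a step response; the second-order plant below is hypothetical, standing in for the benchmark systems, and the gains shown are just one candidate an optimizer would propose.

      def itae_cost(kp, ki, kd, dt=0.001, t_end=5.0):
          """Integral of time-weighted absolute error for a unit step;
          hypothetical plant: y'' + 2 y' + y = u."""
          y = dy = integ = 0.0
          prev_e = 1.0                     # error at t=0 for the step, avoids derivative kick
          cost = 0.0
          for k in range(int(t_end / dt)):
              e = 1.0 - y                  # unit step reference
              integ += e * dt
              deriv = (e - prev_e) / dt
              u = kp * e + ki * integ + kd * deriv   # PID control law
              prev_e = e
              ddy = u - 2.0 * dy - y       # plant dynamics
              dy += ddy * dt
              y += dy * dt
              cost += (k * dt) * abs(e) * dt         # ITAE accumulation
          return cost

      print(itae_cost(8.0, 4.0, 1.0))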

  18. Performance Review of Harmony Search, Differential Evolution and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Mohan Pandey, Hari

    2017-08-01

    Metaheuristic algorithms are effective in the design of intelligent systems. These algorithms are widely applied to solve complex optimization problems, including image processing, big data analytics, language processing, pattern recognition and others. This paper presents a performance comparison of three metaheuristic algorithms, namely Harmony Search, Differential Evolution, and Particle Swarm Optimization. These algorithms originate from different branches of metaheuristics yet share a common objective. Standard benchmark functions are used for the simulation. Statistical tests are conducted to derive conclusions on the performance. The key motivation for this research is to categorize the algorithms' computational capabilities, which may be useful to researchers.

  19. Galilean-invariant algorithm coupling immersed moving boundary conditions and Lees-Edwards boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhou, Guofeng; Wang, Limin; Wang, Xiaowei; Ge, Wei

    2011-12-01

    Many investigators have coupled the Lees-Edwards boundary conditions (LEBCs) and suspension methods in the framework of the lattice Boltzmann method to study the pure bulk properties of particle-fluid suspensions. However, these suspension methods are all link-based and are more or less exposed to the disadvantages of violating Galilean invariance. In this paper, we have coupled LEBCs with a node-based suspension method, which is demonstrated to be Galilean invariant in benchmark simulations. We use the coupled algorithm to predict the viscosity of a particle-fluid suspension at very low Reynolds number, and the simulation results are in good agreement with the semiempirical Krieger-Dougherty formula.

  20. Modeling Sediment Transport Using a Lagrangian Particle Tracking Algorithm Coupled with High-Resolution Large Eddy Simulations: a Critical Analysis of Model Limits and Sensitivity

    NASA Astrophysics Data System (ADS)

    Garcia, M. H.

    2016-12-01

    Since the seminal work of Niño and Garcia [1994], one-way coupled Lagrangian particle tracking has been used extensively for modeling sediment transport. Over time, the Lagrangian particle tracking method has been coupled with Eulerian flow simulations, ranging from Reynolds Averaged Navier-Stokes (RANS) based models to Detached Eddy Simulations (DES) [Escauriaza and Sotiropoulos, 2011]. The advent of high performance computing (HPC) platforms and faster algorithms has resulted in the work of Dutta et al. [2016], where Lagrangian particle tracking was coupled with high-resolution Large Eddy Simulations (LES) to model the complex and highly non-linear Bulle effect at diversions. Despite all the advancements in using Lagrangian particle tracking, there has not been a study that looks in detail at the limits of the model in the context of sediment transport and also analyzes the sensitivity of the various force formulations in the force balance equation of the particles. Niño and Garcia [1994] did a similar analysis, but the vertical flow velocity distribution was modeled with the log-law. The current study extends the analysis by modeling the flow using high-resolution LES at a Reynolds number comparable to the experiments of Niño et al. [1994]. References: Dutta et al. (2016), Large Eddy Simulation (LES) of flow and bedload transport at an idealized 90-degree diversion: insight into Bulle-Effect, River Flow 2016 - Constantinescu, Garcia & Hanes (Eds), Taylor & Francis Group, London, 101-109. Escauriaza and Sotiropoulos (2011), Lagrangian model of bed-load transport in turbulent junction flows, Journal of Fluid Mechanics, 666, 36-76. Niño and García (1994), Gravel saltation: 2. Modeling, Water Resources Research, 30(6), 1915-1924. Niño et al. (1994), Gravel saltation: 1. Experiments, Water Resources Research, 30(6), 1907-1914.

  1. Quantum Engineering of Dynamical Gauge Fields on Optical Lattices

    DTIC Science & Technology

    2016-07-08

    opens the door for exciting new research directions, such as quantum simulation of the Schwinger model and of non-Abelian models. ... exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained ... a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian

  2. A Morphological Approach to the Modeling of the Cold Spray Process

    NASA Astrophysics Data System (ADS)

    Delloro, F.; Jeandin, M.; Jeulin, D.; Proudhon, H.; Faessel, M.; Bianchi, L.; Meillot, E.; Helfen, L.

    2017-12-01

    A coating buildup model was developed, the aim of which was to simulate the microstructure of a tantalum coating cold-sprayed onto a copper substrate. The first step was a fine 3D characterization of the irregular tantalum powder, using x-ray microtomography and purpose-built image analysis algorithms. Particles were grouped by shape into seven classes. Afterward, 3D finite element simulations of the impact of the previously observed particles were carried out. Finally, a coating buildup model was developed, based on the results of the finite element simulations of particle impact. In its first version, this model is limited to 2D.

  3. Acceleration of the Particle Swarm Optimization for Peierls-Nabarro modeling of dislocations in conventional and high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Pei, Zongrui; Eisenbach, Markus

    2017-06-01

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), local minima can be effectively avoided, but at greater computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of cores employed.

  4. Fluid preconditioning for Newton–Krylov-based, fully implicit, electrostatic particle-in-cell simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G., E-mail: gchen@lanl.gov; Chacón, L.; Leibs, C.A.

    2014-02-01

    A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton-Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron-ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.

  5. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and stably in visual sequences in real time, a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can partially overcome the conventional degeneracy phenomenon and decrease the computational cost. The threshold is also a key factor that strongly affects the results, so we adopt an adaptive threshold-selection method to get the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use the contour cue to select the particles and the color cue to describe the targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
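    The similarity metric mentioned above can be sketched directly from the Bhattacharyya coefficient between the normalized target-model histogram and a candidate histogram; the 16-bin color histograms here are placeholders.

      import numpy as np

      def bhattacharyya_distance(p, q):
          """Distance between two histograms (target model vs. candidate)."""
          p = p / p.sum()
          q = q / q.sum()
          bc = np.sum(np.sqrt(p * q))      # Bhattacharyya coefficient
          return np.sqrt(max(1.0 - bc, 0.0))

      model = np.random.rand(16)           # placeholder target color histogram
      candidate = np.random.rand(16)       # histogram at one particle's location
      print(bhattacharyya_distance(model, candidate))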

  6. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent-model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. Simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range with the genetic algorithm, and the inversion results were steady and reliable, which minimizes the number of wavelengths required and increases the freedom in selecting a light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
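
    The following sketch mimics the inversion step with an evolutionary optimizer: a two-parameter lognormal size distribution is recovered from noisy three-wavelength extinction data. SciPy's differential evolution stands in for the paper's genetic algorithm, and the extinction kernel is a synthetic toy rather than the Mie kernel.

        import numpy as np
        from scipy.optimize import differential_evolution

        wavelengths = np.array([0.45, 0.55, 0.65])   # micrometres (assumed)
        radii = np.linspace(0.1, 5.0, 200)           # micrometres

        def extinction(params):
            """Spectral extinction of a lognormal size distribution (toy kernel)."""
            mu, sigma = params
            n = np.exp(-0.5 * ((np.log(radii) - mu) / sigma) ** 2)
            n /= n.sum()
            # Toy efficiency kernel standing in for Mie theory.
            q = 2.0 - 2.0 * np.sinc(radii[None, :] / wavelengths[:, None])
            return q @ (n * radii**2)

        # Synthetic "measurement" with 5% noise, then inversion by evolution.
        truth = extinction([0.0, 0.4])
        meas = truth * (1 + 0.05 * np.random.default_rng(1).normal(size=3))
        res = differential_evolution(
            lambda p: np.sum((extinction(p) - meas) ** 2),
            bounds=[(-1.0, 1.0), (0.1, 1.0)], seed=2)

    Classification as in the paper would repeat this fit for several candidate distribution families and keep the one with the smallest residual.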

  7. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
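
    A discrete version of the Lloyd iteration mentioned above is easy to state: alternately assign sample points to their nearest generator (a Voronoi partition) and move each generator to the centroid of its cluster, which monotonically decreases the CVT energy. The sketch below does this on a 2D point cloud; the particle-dynamics relaxation and tailored equation of state of the paper are not reproduced.

        import numpy as np

        def lloyd_cvt(samples, n_parts=8, n_iters=50, seed=0):
            """Discrete Lloyd iteration: Voronoi assignment + centroid update."""
            rng = np.random.default_rng(seed)
            gens = samples[rng.choice(len(samples), n_parts, replace=False)]
            for _ in range(n_iters):
                d = np.linalg.norm(samples[:, None, :] - gens[None, :, :], axis=2)
                label = d.argmin(axis=1)               # nearest-generator assignment
                for k in range(n_parts):
                    pts = samples[label == k]
                    if len(pts):
                        gens[k] = pts.mean(axis=0)     # move generator to centroid
            return gens, label

        # Partition a 2D cloud of computational elements into 8 subdomains.
        pts = np.random.default_rng(3).random((5000, 2))
        gens, label = lloyd_cvt(pts)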

  8. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-04-01

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  9. Biogeography-based particle swarm optimization with fuzzy elitism and its applications to constrained engineering problems

    NASA Astrophysics Data System (ADS)

    Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang

    2014-11-01

    In evolutionary algorithms, elites are crucial for maintaining good features in solutions. However, too many elites can make the evolutionary process stagnate and fail to enhance performance. This article combines particle swarm optimization (PSO) and biogeography-based optimization (BBO) into a hybrid algorithm, termed biogeography-based particle swarm optimization (BPSO), that can make a large number of elites effective in searching for optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup and PSO for the global search. Since not all of the population is used in PSO, this structure overcomes the premature convergence of the original PSO. Time complexity analysis shows that the novel algorithm does not increase time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.

  10. Application of randomly oriented spheroids for retrieval of dust particle parameters from multiwavelength lidar measurements

    NASA Astrophysics Data System (ADS)

    Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.

    2010-11-01

    Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two components: one composed only of spherical particles and the second of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with a size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust. The uncertainty of the retrieved particle surface area, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. The algorithm provided reasonable results consistent with the available independent information about the observed aerosol event.

  11. Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.

    PubMed

    Marquez-Lago, Tatiana T; Burrage, Kevin

    2007-09-14

    In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways, because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), reducing the overall computation time significantly. We present a new coarse-grained, modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of the presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
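
    The binomial idea at the core of such methods can be shown on a single first-order removal channel: sampling the number of firings from a binomial distribution bounds it by the available population, so the update can never go negative, unlike a Poisson leap. A minimal sketch (the decay rate, step size, and population are arbitrary choices):

        import numpy as np

        def binomial_tau_leap(n0, c, tau, n_steps, seed=0):
            """Binomial tau-leaping for first-order decay  A -> 0  with rate c.
            With p = 1 - exp(-c*tau), Binomial(n, p) firings can never exceed
            the current population, unlike a plain Poisson leap."""
            rng = np.random.default_rng(seed)
            n, traj = n0, [n0]
            p = 1.0 - np.exp(-c * tau)
            for _ in range(n_steps):
                n -= rng.binomial(n, p)     # number of removal events this leap
                traj.append(n)
            return np.array(traj)

        traj = binomial_tau_leap(n0=10_000, c=0.5, tau=0.1, n_steps=100)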

  12. Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.

    PubMed

    Garcia, Alejandro L; Wagner, Wolfgang

    2003-11-01

    In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution.

  13. A size-composition resolved aerosol model for simulating the dynamics of externally mixed particles: SCRAM (v 1.0)

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Sartelet, K. N.; Seigneur, C.

    2015-06-01

    The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.

  14. Cold dark matter. 1: The formation of dark halos

    NASA Technical Reports Server (NTRS)

    Gelb, James M.; Bertschinger, Edmund

    1994-01-01

    We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256^3 particles using the particle-mesh (PM) method and with up to 144^3 particles using the adaptive particle-particle/particle-mesh (P3M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all Ω = 1 CDM models with linear amplitude σ_8 ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P3M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.

  15. A Highly Parallelized Special-Purpose Computer for Many-Body Simulations with an Arbitrary Central Force: MD-GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Taiji, Makoto; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro

    1996-09-01

    We have developed a parallel, pipelined special-purpose computer for N-body simulations, MD-GRAPE (for "GRAvity PipE"). In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE is specialized hardware that calculates these interactions. It is used with a general-purpose front-end computer that performs all calculations other than the force calculation. MD-GRAPE is the first parallel GRAPE that can calculate an arbitrary central force. A force different from a pure 1/r potential is necessary for N-body simulations with periodic boundary conditions using the Ewald or particle-particle/particle-mesh (P^3M) method. MD-GRAPE accelerates the calculation of the particle-particle force for these algorithms. An MD-GRAPE board has four MD chips and a peak performance of 4.2 GFLOPS. On an MD-GRAPE board, a cosmological N-body simulation takes 600(N/10^6)^{3/2} s per step for the Ewald method, where N is the number of particles, and would take 240(N/10^6) s per step for the P^3M method, for a uniform distribution of particles.

  16. Quantum Engineering of Dynamical Gauge Fields on Optical Lattices

    DTIC Science & Technology

    2016-07-08

    exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian

  17. Stochastic Set-Based Particle Swarm Optimization Based on Local Exploration for Solving the Carpool Service Problem.

    PubMed

    Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia

    2016-08-01

    The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized through local exploration. In the simulations and experiments, two kinds of discrete PSOs, S-PSO and binary PSO (BPSO), and a genetic algorithm (GA) are compared and examined on test benchmarks that simulate a real-world metropolis. We observed that the S-PSO thoroughly outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results that meet the optimization objectives of the CSP.

  18. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics

  19. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained for symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved particle swarm optimization (PSO) algorithm is used for the optimal symbol search. Simulations are carried out to show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm takes full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm improves the efficiency of the symbol search by nearly one hundred times in reaching a global optimal solution.

  20. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods enable effective numerical applications to geophysical flows and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations, so parallel computing is essential for handling the computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures: Lagrangian methods suffer an inherent workload imbalance under a fixed spatial domain decomposition, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time among MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration. To perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for particles with different calculation costs (e.g. boundary particles) as well as for heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
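
    A one-dimensional caricature of the slice-grid rebalancing step: given per-particle costs (standing in here for the measured per-process execution times), slab boundaries along one axis are placed so that each MPI rank receives roughly equal total cost. The cost weighting of boundary particles below is an invented example.

        import numpy as np

        def slice_grid_bounds(x, cost, n_ranks):
            """Choose slab boundaries along x so each rank's summed particle
            cost is approximately equal."""
            order = np.argsort(x)
            csum = np.cumsum(cost[order])
            targets = csum[-1] * np.arange(1, n_ranks) / n_ranks
            cut_idx = np.searchsorted(csum, targets)
            return x[order][cut_idx]          # interior slab boundaries

        rng = np.random.default_rng(4)
        x = rng.random(100_000)               # particle positions along one axis
        w = np.where(x > 0.9, 5.0, 1.0)       # e.g. boundary particles cost more
        bounds = slice_grid_bounds(x, w, n_ranks=8)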

  1. A novel coupling of noise reduction algorithms for particle flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.

    2016-09-15

    Proper orthogonal decomposition (POD) and its extension based on time windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still needs to be provided. In order to achieve better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. This is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
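
    Schematically, filtering in the POD domain amounts to an SVD of the space-time snapshot matrix followed by wavelet thresholding of the temporal coefficients. The sketch below uses PyWavelets with a universal soft threshold; the wavelet family, threshold rule, and retained rank are illustrative choices, not those of the paper.

        import numpy as np
        import pywt

        def wav_in_pod(snapshots, rank, wavelet="db4"):
            """De-noise a (space x time) snapshot matrix: truncate to `rank`
            POD modes, then soft-threshold each temporal coefficient series
            in a wavelet basis (a sketch of the WAVinPOD idea)."""
            U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
            Vt = Vt[:rank].copy()
            for i in range(rank):
                coeffs = pywt.wavedec(Vt[i], wavelet, mode="periodization")
                sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
                thr = sigma * np.sqrt(2 * np.log(Vt.shape[1]))   # universal threshold
                coeffs = [pywt.threshold(c, thr, mode="soft") for c in coeffs]
                Vt[i] = pywt.waverec(coeffs, wavelet, mode="periodization")[:Vt.shape[1]]
            return (U[:, :rank] * s[:rank]) @ Vt

        snaps = np.random.default_rng(0).normal(size=(256, 400))  # toy noisy data
        clean = wav_in_pod(snaps, rank=8)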

  2. Short-time self-diffusion coefficient of a particle in a colloidal suspension bounded by a microchannel: Virial expansions and simulation

    NASA Astrophysics Data System (ADS)

    Kȩdzierski, Marcin; Wajnryb, Eligiusz

    2011-10-01

    Self-diffusion of colloidal particles confined to a cylindrical microchannel is considered theoretically and numerically. A virial expansion of the self-diffusion coefficient is performed. Two-body and three-body hydrodynamic interactions are evaluated with high precision using the multipole method. The multipole expansion algorithm is also used to perform numerical simulations of the self-diffusion coefficient, valid for all possible particle packing fractions. Comparison with earlier results shows that the widely used method of reflections is insufficient for calculations of hydrodynamic interactions even for small packing fractions and small particle radii, contrary to the prevalent opinion.

  3. Stochastic Simulation of Complex Fluid Flows

    DTIC Science & Technology

    The PI has developed novel numerical algorithms and computational codes to simulate the Brownian motion of rigid particles immersed in a viscous fluid...processes and to the design of novel nanofluid materials. The random Brownian motion of particles in fluid can be accounted for in fluid-structure

  4. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computation. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the upgrade increasing the bunch population by 5 times, but the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), motivated by the extremely small emittance and high peak currents anticipated in the machine. A tune shift is discovered from the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.

  5. Development of a fully implicit particle-in-cell scheme for gyrokinetic electromagnetic turbulence simulation in XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung-Hoe; Hager, R.; Chang, C. S.; Chacon, L.; Chen, G.; EPSI Team

    2016-10-01

    The cancellation problem has been a long-standing issue for long-wavelength modes in electromagnetic gyrokinetic PIC simulations in toroidal geometry. In an attempt to resolve this issue, we implemented a fully implicit time integration scheme in the full-f gyrokinetic PIC code XGC1. The new scheme, based on the implicit Vlasov-Darwin PIC algorithm by G. Chen and L. Chacon, can potentially resolve the cancellation problem. The time advance for the field and particle equations is space-time-centered, with particle sub-cycling. The resulting system of equations is solved by a Picard iteration solver with a fixed-point accelerator. The algorithm is implemented in the parallel-velocity formalism instead of the canonical parallel-momentum formalism. XGC1 specializes in simulating the tokamak edge plasma with magnetic separatrix geometry. A fully implicit scheme could be a route to accurate and efficient gyrokinetic simulations. We will test whether this numerical scheme overcomes the cancellation problem, and whether it reproduces the dispersion relation of Alfven waves and tearing modes in cylindrical geometry. Funded by US DOE FES and ASCR; computing resources provided by OLCF through ALCC.

  6. Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH

    NASA Astrophysics Data System (ADS)

    Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.

    2018-01-01

    Smoothed particle hydrodynamics (SPH) with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution, which can markedly improve local accuracy and overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. The splitting algorithm is as follows: when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum and energy is ensured. Based on an error analysis, the splitting technique is designed to allow optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with the particle splitting technique has high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
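
    A minimal version of such a splitting step, in 2D: the mother's mass is divided equally among four daughters placed symmetrically about her position, so mass, momentum, and (with inherited velocities) kinetic energy are conserved by construction. The daughter offset and smoothing-length scaling below are assumptions for illustration, not the paper's optimized values.

        import numpy as np

        def split_particle(m, pos, vel, h, eps=0.4):
            """Split one SPH mother particle into four daughters on a square
            stencil (2D). Equal mass quarters and inherited velocity conserve
            mass, momentum, and kinetic energy exactly."""
            offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
            m_d = np.full(4, m / 4.0)             # mass conservation
            pos_d = pos + offsets                 # symmetric -> centroid preserved
            vel_d = np.tile(vel, (4, 1))          # daughters inherit velocity
            h_d = np.full(4, h / np.sqrt(2.0))    # keep mass/h^2 roughly constant (2D)
            return m_d, pos_d, vel_d, h_d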

  7. Moving charged particles in lattice Boltzmann-based electrokinetics

    NASA Astrophysics Data System (ADS)

    Kuron, Michael; Rempfer, Georg; Schornbaum, Florian; Bauer, Martin; Godenschwager, Christian; Holm, Christian; de Graaf, Joost

    2016-12-01

    The motion of ionic solutes and charged particles under the influence of an electric field and the ensuing hydrodynamic flow of the underlying solvent is ubiquitous in aqueous colloidal suspensions. The physics of such systems is described by a coupled set of differential equations, along with boundary conditions, collectively referred to as the electrokinetic equations. Capuani et al. [J. Chem. Phys. 121, 973 (2004)] introduced a lattice-based method for solving this system of equations, which builds upon the lattice Boltzmann algorithm for the simulation of hydrodynamic flow and exploits computational locality. However, thus far, a description of how to incorporate moving boundary conditions into the Capuani scheme has been lacking. Moving boundary conditions are needed to simulate multiple arbitrarily moving colloids. In this paper, we detail how to introduce such a particle coupling scheme, based on an analogue to the moving boundary method for the pure lattice Boltzmann solver. The key ingredients in our method are mass and charge conservation for the solute species and a partial-volume smoothing of the solute fluxes to minimize discretization artifacts. We demonstrate our algorithm's effectiveness by simulating the electrophoresis of charged spheres in an external field; for a single sphere we compare to the equivalent electro-osmotic (co-moving) problem. Our method's efficiency and ease of implementation should prove beneficial to future simulations of the dynamics in a wide range of complex nanoscopic and colloidal systems that were previously inaccessible to lattice-based continuum algorithms.

  8. Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber

    NASA Astrophysics Data System (ADS)

    Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit

    2007-10-01

    This paper addresses the problem of track fitting of a charged particle in a multi-wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated by background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution induced by a single charged particle over the strips to classify the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights at each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
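
    The reweighting idea can be sketched independently of the detector specifics: fit a straight track by least squares, then shrink the weight of hits with large residuals (the likely "dirty" clusters) and refit. The Cauchy-type weight update below is purely an example; the paper derives its weights from the cluster classification.

        import numpy as np

        def irls_track_fit(z, y, n_iter=10, soft=1.0):
            """Fit y = a*z + b by iteratively reweighted least squares; hits
            with large residuals get their weights suppressed each iteration."""
            w = np.ones_like(y)
            a = b = 0.0
            for _ in range(n_iter):
                A = np.stack([z, np.ones_like(z)], axis=1)
                sw = np.sqrt(w / w.sum())
                coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
                a, b = coef
                r = y - (a * z + b)                  # residuals of current fit
                w = 1.0 / (1.0 + (r / soft) ** 2)    # Cauchy-type reweighting
            return a, b, w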

  9. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed with the PSO algorithm, owing to its strong global search ability and robustness to random initial values; however, the PSO algorithm converges slowly near the global optimum. When the change in the fitness function falls below a predefined value, the search is therefore switched to the LPM to accelerate convergence. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results from 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the PSO algorithm and the LPM alone in terms of global search capability and convergence rate. Moreover, the PSO-LPM algorithm is robust to random initial values.

  10. IViPP: A Tool for Visualization in Particle Physics

    NASA Astrophysics Data System (ADS)

    Tran, Hieu; Skiba, Elizabeth; Baldwin, Doug

    2011-10-01

    Experiments and simulations in physics generate a lot of data; visualization is helpful to prepare that data for analysis. IViPP (Interactive Visualizations in Particle Physics) is an interactive computer program that visualizes results of particle physics simulations or experiments. IViPP can handle data from different simulators, such as SRIM or MCNP. It can display relevant geometry and measured scalar data, and it can do simple selection from the visualized data. To be an effective visualization tool, IViPP must have a software architecture that can flexibly adapt to new data sources and display styles, and it must be able to display complicated geometry and measured data with a high dynamic range. We therefore organize it in a highly modular structure, develop libraries to describe geometry algorithmically, use rendering algorithms running on the GPU to display 3-D geometry at interactive rates, and represent scalar values in a visual form of scientific notation that shows both mantissa and exponent. This work was supported in part by the US Department of Energy through the Laboratory for Laser Energetics (LLE), with special thanks to Craig Sangster at LLE.

  11. Zonal methods for the parallel execution of range-limited N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.

    2007-01-20

    Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.

  12. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034

  13. Electromagnetic gyrokinetic simulation in GTS

    NASA Astrophysics Data System (ADS)

    Ma, Chenhao; Wang, Weixing; Startsev, Edward; Lee, W. W.; Ethier, Stephane

    2017-10-01

    We report recent developments in electromagnetic simulations for general toroidal geometry based on the particle-in-cell gyrokinetic code GTS. Because of the cancellation problem, EM gyrokinetic simulation has numerical difficulties in the MHD limit where k⊥ρi -> 0 and/or β > me/mi. Recently, several approaches have been developed to circumvent this problem: (1) a p∥ formulation with the analytical skin term iteratively approximated by simulation particles (Yang Chen); (2) a modified p∥ formulation with ∫ dt E∥ used in place of A∥ (Mishchenko); (3) a conservative scheme where the electron density perturbation for the Poisson equation is calculated from an electron continuity equation (Bao); (4) a double-split-weight scheme with two weights, one for the Poisson equation and one for the time derivative of Ampere's law, each with different splits designed to remove large terms from the Vlasov equation (Startsev). These algorithms are being implemented in the GTS framework for general toroidal geometry. The performance of these different algorithms will be compared for various EM modes.

  14. Simulation of deterministic energy-balance particle agglomeration in turbulent liquid-solid flows

    NASA Astrophysics Data System (ADS)

    Njobuenwu, Derrick O.; Fairweather, Michael

    2017-08-01

    An efficient technique to simulate turbulent particle-laden flow at high mass loadings within the four-way coupled simulation regime is presented. The technique implements large-eddy simulation, discrete particle simulation, a deterministic treatment of inter-particle collisions, and an energy-balanced particle agglomeration model. The algorithm to detect inter-particle collisions is such that the computational costs scale linearly with the number of particles present in the computational domain. On detection of a collision, particle agglomeration is tested based on the pre-collision kinetic energy, restitution coefficient, and van der Waals' interactions. The performance of the technique developed is tested by performing parametric studies on the influence of the restitution coefficient (en = 0.2, 0.4, 0.6, and 0.8), particle size (dp = 60, 120, 200, and 316 μm), Reynolds number (Reτ = 150, 300, and 590), and particle concentration (αp = 5.0 × 10-4, 1.0 × 10-3, and 5.0 × 10-3) on particle-particle interaction events (collision and agglomeration). The results demonstrate that the collision frequency shows a linear dependency on the restitution coefficient, while the agglomeration rate shows an inverse dependence. Collisions among smaller particles are more frequent and efficient in forming agglomerates than those of coarser particles. The particle-particle interaction events show a strong dependency on the shear Reynolds number Reτ, while increasing the particle concentration effectively enhances particle collision and agglomeration whilst having only a minor influence on the agglomeration rate. Overall, the sensitivity of the particle-particle interaction events to the selected simulation parameters is found to influence the population and distribution of the primary particles and agglomerates formed.
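
    Linear scaling of collision detection is typically obtained with a cell list: bin particles into cells one diameter wide and test only pairs in the same or adjacent cells. A compact sketch of that idea (monodisperse particles and no periodic boundaries, both simplifying assumptions relative to the paper):

        import numpy as np
        from collections import defaultdict

        def find_collisions(pos, diam):
            """Cell-list collision detection: cost scales roughly linearly with
            particle count because only same-cell and neighbor-cell pairs are
            distance-tested. pos is an (n, 3) array of centers."""
            cell = np.floor(pos / diam).astype(int)
            bins = defaultdict(list)
            for i, c in enumerate(map(tuple, cell)):
                bins[c].append(i)
            pairs = []
            for (cx, cy, cz), members in bins.items():
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            for j in bins.get((cx + dx, cy + dy, cz + dz), ()):
                                for i in members:
                                    # i < j ensures each overlapping pair is
                                    # reported exactly once
                                    if i < j and np.linalg.norm(pos[i] - pos[j]) < diam:
                                        pairs.append((i, j))
            return pairs

        rng = np.random.default_rng(5)
        pairs = find_collisions(rng.random((2000, 3)) * 20.0, diam=0.5)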

  15. Modeling the arrangement of particles in natural swelling-clay porous media using three-dimensional packing of elliptic disks

    NASA Astrophysics Data System (ADS)

    Ferrage, Eric; Hubert, Fabien; Tertre, Emmanuel; Delville, Alfred; Michot, Laurent J.; Levitz, Pierre

    2015-06-01

    Swelling clay minerals play a key role in the control of water and pollutant migration in natural media such as soils. Moreover, swelling clay particles' orientational properties in porous media have significant implications for the directional dependence of fluid transfer. Herein we investigate the ability to mimic the organization of particles in natural swelling-clay porous media using a three-dimensional sequential particle deposition procedure [D. Coelho, J.-F. Thovert, and P. M. Adler, Phys. Rev. E 55, 1959 (1997), 10.1103/PhysRevE.55.1959]. The algorithm considered is first used to simulate disk packings. Porosities of disk packings fall onto a single master curve when plotted against the orientational scalar order parameter value. This relation is used to validate the algorithm used in comparison with existing ones. The ellipticity degree of the particles is shown to have a negligible effect on the packing porosity for ratios ℓa/ℓb less than 1.5, whereas a significant increase in porosity is obtained for higher values. The effect of the distribution of the geometrical parameters (size, aspect ratio, and ellipticity degree) of particles on the final packing properties is also investigated. Finally, the algorithm is used to simulate particle packings for three size fractions of natural swelling-clay mineral powders. Calculated data regarding the distribution of the geometrical parameters and orientation of particles in porous media are successfully compared with experimental data obtained for the same samples. The results indicate that the obtained virtual porous media can be considered representative of natural samples and can be used to extract properties difficult to obtain experimentally, such as the anisotropic features of pore and solid phases in a system.

  16. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  17. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  18. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  19. Faster and More Accurate Transport Procedures for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.

    2010-01-01

    Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  20. Numerical simulation of disperse particle flows on a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sierakowski, Adam J.

    In both nature and technology, we commonly encounter solid particles being carried within fluid flows, from dust storms to sediment erosion and from food processing to energy generation. The motion of uncountably many particles in highly dynamic flow environments characterizes the tremendous complexity of such phenomena. While methods exist for the full-scale numerical simulation of such systems, current computational capabilities require the simplification of the numerical task with significant approximation using closure models widely recognized as insufficient. There is therefore a fundamental need for the investigation of the underlying physical processes governing these disperse particle flows. In the present work, we develop a new tool based on the Physalis method for the first-principles numerical simulation of thousands of particles (a small fraction of an entire disperse particle flow system) in order to assist in the search for new reduced-order closure models. We discuss numerous enhancements to the efficiency and stability of the Physalis method, which introduces the influence of spherical particles to a fixed-grid incompressible Navier-Stokes flow solver using a local analytic solution to the flow equations. Our first-principles investigation demands the modeling of unresolved length and time scales associated with particle collisions. We introduce a collision model alongside Physalis, incorporating lubrication effects and proposing a new nonlinearly damped Hertzian contact model. By reproducing experimental studies from the literature, we document extensive validation of the methods. We discuss the implementation of our methods for massively parallel computation using a graphics processing unit (GPU). We combine Eulerian grid-based algorithms with Lagrangian particle-based algorithms to achieve computational throughput up to 90 times faster than the legacy implementation of Physalis for a single central processing unit. By avoiding all data communication between the GPU and the host system during the simulation, we make highly effective use of the GPU hardware with which many high-performance computing systems are currently equipped. We conclude by looking forward to the future of Physalis with multi-GPU parallelization in order to perform resolved disperse flow simulations of more than 100,000 particles and further advance the development of reduced-order closure models.

  1. 3D Staggered-Grid Finite-Difference Simulation of Acoustic Waves in Turbulent Moving Media

    NASA Astrophysics Data System (ADS)

    Symons, N. P.; Aldridge, D. F.; Marlin, D.; Wilson, D. K.; Sullivan, P.; Ostashev, V.

    2003-12-01

    Acoustic wave propagation in a three-dimensional heterogeneous moving atmosphere is accurately simulated with a numerical algorithm recently developed under the DOD Common High Performance Computing Software Support Initiative (CHSSI). Sound waves within such a dynamic environment are mathematically described by a set of four, coupled, first-order partial differential equations governing small-amplitude fluctuations in pressure and particle velocity. The system is rigorously derived from fundamental principles of continuum mechanics, ideal-fluid constitutive relations, and reasonable assumptions that the ambient atmospheric motion is adiabatic and divergence-free. An explicit, time-domain, finite-difference (FD) numerical scheme is used to solve the system for both pressure and particle velocity wavefields. The atmosphere is characterized by 3D gridded models of sound speed, mass density, and the three components of the wind velocity vector. Dependent variables are stored on staggered spatial and temporal grids, and centered FD operators possess 2nd-order and 4th-order space/time accuracy. Accurate sound wave simulation is achieved provided grid intervals are chosen appropriately. The gridding must be fine enough to reduce numerical dispersion artifacts to an acceptable level and maintain stability. The algorithm is designed to execute on parallel computational platforms by utilizing a spatial domain-decomposition strategy. Currently, the algorithm has been validated on four different computational platforms, and parallel scalability of approximately 85% has been demonstrated. Comparisons with analytic solutions for uniform and vertically stratified wind models indicate that the FD algorithm generates accurate results with either a vanishing pressure or vanishing vertical-particle velocity boundary condition. Simulations are performed using a kinematic turbulence wind profile developed with the quasi-wavelet method. In addition, preliminary results are presented using high-resolution 3D dynamic turbulent flowfields generated by a large-eddy simulation model of a stably stratified planetary boundary layer. Sandia National Laboratories is operated by Sandia Corporation, a Lockheed Martin Company, for the USDOE under contract 94-AL85000.
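
    The structure of such a staggered-grid scheme is easiest to see in one dimension: pressure lives on integer grid points and particle velocity on half-integer points, and the two are advanced in a leapfrog fashion. The sketch below is second-order and omits the ambient wind terms that the full 3D algorithm includes; all parameter values are illustrative.

        import numpy as np

        def acoustic_1d(nx=400, nt=800, dx=1.0, c=340.0, rho=1.2, cfl=0.5):
            """Leapfrog update of pressure p (integer grid) and particle
            velocity v (half-integer grid) for the 1D linear acoustic system
            dv/dt = -(1/rho) dp/dx,  dp/dt = -rho c^2 dv/dx."""
            dt = cfl * dx / c
            p = np.zeros(nx)          # p[i]     at x = i*dx,        t = n*dt
            v = np.zeros(nx - 1)      # v[i+1/2] at x = (i+0.5)*dx,  t = (n+0.5)*dt
            p[nx // 2] = 1.0          # initial pressure pulse
            for _ in range(nt):
                v -= dt / (rho * dx) * (p[1:] - p[:-1])
                p[1:-1] -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
            return p                  # ends held at p = 0 (vanishing pressure)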

  2. A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows

    NASA Astrophysics Data System (ADS)

    Cardwell, Nicholas D.; Vlachos, Pavlos P.; Thole, Karen A.

    2011-10-01

    Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to the transport of particles within the human body. Experimental investigation of MPFs, however, is challenging and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to more effectively and accurately match particle pairs between successive images. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particle diameters. Metrics of measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, measurement efficiency of the carrier phase improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas-solid flow, MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and carrier flowfield velocities allowed for the calculation of instantaneous particle slip velocities, illustrating the algorithm's ability to robustly and accurately resolve polydisperse MPFs.
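
    The pair-matching idea can be sketched as a combined cost over displacement, size and intensity. The Python below is a hypothetical, simplified illustration: the weights w_d, w_s, w_i, the search radius and the greedy lowest-cost selection are stand-ins, not the published MP3-PTV procedure.

      import numpy as np

      def match_particles(frame_a, frame_b, u_pred, w_d=1.0, w_s=0.5, w_i=0.5,
                          r_max=10.0):
          """Greedy pairing of particles (x, y, size, intensity) across frames."""
          pairs = []
          for i, (x, y, d, s) in enumerate(frame_a):
              xp, yp = x + u_pred[0], y + u_pred[1]   # displacement preconditioning
              best, best_cost = None, float("inf")
              for j, (xb, yb, db, sb) in enumerate(frame_b):
                  dist = np.hypot(xb - xp, yb - yp)
                  if dist > r_max:
                      continue                        # outside the search window
                  cost = (w_d * dist / r_max          # residual displacement
                          + w_s * abs(db - d) / max(db, d)    # size mismatch
                          + w_i * abs(sb - s) / max(sb, s))   # intensity mismatch
                  if cost < best_cost:
                      best, best_cost = j, cost
              if best is not None:
                  pairs.append((i, best))
          return pairs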

  3. Navier-Stokes simulation with constraint forces: finite-difference method for particle-laden flows and complex geometries.

    PubMed

    Höfler, K; Schwarzer, S

    2000-06-01

    Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)] we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular workstations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results both in the viscous and inertial regime and we find very satisfactory agreement.

  4. All-Particle Multiscale Computation of Hypersonic Rarefied Flow

    NASA Astrophysics Data System (ADS)

    Jun, E.; Burt, J. M.; Boyd, I. D.

    2011-05-01

    This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh, which leads to a significant reduction in computation time.

  5. Discrete Spin Vector Approach for Monte Carlo-based Magnetic Nanoparticle Simulations

    NASA Astrophysics Data System (ADS)

    Senkov, Alexander; Peralta, Juan; Sahay, Rahul

    The study of magnetic nanoparticles has gained significant popularity due to their potential uses in many fields such as modern medicine, electronics, and engineering. To study the magnetic behavior of these particles in depth, it is important to be able to model and simulate their magnetic properties efficiently. Here we utilize the Metropolis-Hastings algorithm with a discrete spin vector model (in contrast to the standard continuous model) to model the magnetic hysteresis of a set of protected pure iron nanoparticles. We compare our simulations with the experimental hysteresis curves and discuss the efficiency of our algorithm.
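
    The essential ingredients - discrete orientations, Metropolis-Hastings acceptance, and a field sweep to trace a hysteresis loop - can be sketched as follows. This is a minimal illustration assuming six allowed orientations, non-interacting particles and a simple Zeeman-plus-anisotropy energy; the authors' Hamiltonian and discretization may differ.

      import numpy as np

      rng = np.random.default_rng(0)
      DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

      def energy(s, h, k_anis=2.0):
          return -np.dot(s, h) - k_anis * s[2]**2    # Zeeman + easy-axis anisotropy

      def metropolis_sweep(spins, h, beta=5.0, steps=2000):
          for _ in range(steps):
              i = rng.integers(len(spins))
              trial = DIRS[rng.integers(len(DIRS))]  # propose a discrete orientation
              d_e = energy(trial, h) - energy(spins[i], h)
              if d_e <= 0.0 or rng.random() < np.exp(-beta * d_e):
                  spins[i] = trial                   # Metropolis acceptance rule
          return spins

      spins = DIRS[rng.integers(6, size=500)].copy()
      loop = []                                      # (field, magnetization) pairs
      for hz in np.concatenate([np.linspace(3, -3, 25), np.linspace(-3, 3, 25)]):
          spins = metropolis_sweep(spins, np.array([0.0, 0.0, hz]))
          loop.append((hz, spins[:, 2].mean()))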

  6. A novel Monte Carlo algorithm for simulating crystals with McStas

    NASA Astrophysics Data System (ADS)

    Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.

    2004-07-01

    We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; therefore it can also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aimed at characterizing crystals, such as diffraction topographs.

  7. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.

  8. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.

  9. Improvements to the ShipIR/NTCS adaptive track gate algorithm and 3D flare particle model

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.; Gunter, Willem H.; February, Faith J.

    2017-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and gate the selected target to further improve tracker performance. Similarly, a key component in any soft-kill response to an incoming guided missile is the flare/chaff decoy used to distract or seduce the seeker homing system away from the naval platform. This paper describes recent improvements to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). Efforts to analyse and match the 3D flare particle model against actual IR measurements of the Chemring TALOS IR round resulted in further refinement of the 3D flare particle distribution. The changes in the flare model characteristics were significant enough to require an overhaul of the adaptive track gate (ATG) algorithm in the way it detects the presence of flare decoys and reacquires the target after flare separation. A series of test scenarios is used to demonstrate the impact of the new flare model and ATG algorithm on IR tactics simulation.

  10. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable to special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space-charge-dominated photoemission processes.

  11. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC, and particle swarm optimisation (PSO), to extract the parameters of the metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the PSP surface-potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
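
    For reference, the PSO variant typically used in such comparisons is the global-best scheme sketched below; this is a generic textbook version in Python, with a toy quadratic model standing in for the PSP MOSFET model and measured I-V data.

      import numpy as np

      rng = np.random.default_rng(1)

      def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n, dim))          # particle positions
          v = np.zeros((n, dim))                     # particle velocities
          pbest = x.copy()                           # personal bests
          pbest_f = np.array([cost(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()         # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([cost(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[pbest_f.argmin()].copy()
          return g

      # toy usage: recover two coefficients of a quadratic "device model"
      v_in = np.linspace(0.0, 1.0, 50)
      data = 0.3 * v_in**2 + 0.8 * v_in              # synthetic "measurements"
      err = lambda p: np.sum((p[0] * v_in**2 + p[1] * v_in - data)**2)
      print(pso(err, dim=2))                         # approaches [0.3, 0.8]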

  12. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized to the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 x 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.

  13. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, in order to win the initiative in the fierce competition of the market, a key step is to improve their R&D ability to meet the various demands of customers in a more timely and less costly manner. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a certain period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.

  14. Fast algorithms for chiral fermions in 2 dimensions

    NASA Astrophysics Data System (ADS)

    Hyka (Xhako), Dafina; Osmanaj (Zeqirllari), Rudina

    2018-03-01

    In lattice QCD simulations the formulation of the theory on the lattice should be chiral in order that symmetry breaking happens dynamically from interactions. In order to guarantee this symmetry on the lattice one uses overlap and domain wall fermions. On the other hand, the high computational cost of lattice QCD simulations with overlap or domain wall fermions remains a major obstacle of research in the field of elementary particles. We have developed the preconditioned GMRESR algorithm as a fast inverting algorithm for chiral fermions in U(1) lattice gauge theory. In this algorithm we used the geometric multigrid idea along the extra dimension. The main result of this work is that the preconditioned GMRESR accelerates convergence 2 to 12 times relative to the other optimal algorithm (SHUMR) for different coupling constants on a 32x32 lattice. We also tested it for a larger lattice size, 64x64. The results of these simulations show that our algorithm is faster than SHUMR, a promising indication that the algorithm can also be adapted to 4 dimensions.

  15. Acceleration of the Particle Swarm Optimization for Peierls–Nabarro modeling of dislocations in conventional and high-entropy alloys

    DOE PAGES

    Pei, Zongrui; Max-Planck-Inst. für Eisenforschung, Düsseldorf; Eisenbach, Markus

    2017-02-06

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose the Particle Swarm Optimization algorithm, an algorithm that simulates the social behaviors of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, but this requires more computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.

  16. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.

  17. A ridge tracking algorithm and error estimate for efficient computation of Lagrangian coherent structures.

    PubMed

    Lipinski, Doug; Mohseni, Kamran

    2010-03-01

    A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.

  18. Simulation of Powder Layer Deposition in Additive Manufacturing Processes Using the Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbold, E. B.; Walton, O.; Homel, M. A.

    2015-10-26

    This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size-segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screen geometries.

  19. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  20. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to provide, ideally, a solution improved over either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all the optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers, whose only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.

  1. Advanced Techniques for Simulating the Behavior of Sand

    NASA Astrophysics Data System (ADS)

    Clothier, M.; Bailey, M.

    2009-12-01

    Computer graphics and visualization techniques continue to provide untapped research opportunities, particularly when working with earth science disciplines. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs we are developing new techniques for simulating sand. In addition, through collaboration with the Oregon Space Grant, we have been communicating with the Jet Propulsion Laboratory (JPL) to exchange ideas and gain feedback on our work. More specifically, JPL’s DARTS Laboratory specializes in planetary vehicle simulation, such as the Mars rovers. This simulation utilizes a virtual "sand box" to test how planetary rovers respond to different terrains while traversing them. Unfortunately, this simulation is unable to fully mimic the harsh, sandy environments such as those found on Mars. Ideally, these simulations should allow a rover to interact with the sand beneath it, particularly for different sand granularities and densities. In particular, there may be situations where a rover becomes stuck in sand due to lack of friction between the sand and its wheels. In fact, in May 2009, the Spirit rover became stuck in the Martian sand, providing additional motivation for this research. In order to develop a new sand simulation model, high performance computing will play a very important role in this work. More specifically, graphics processing units (GPUs) are useful due to their ability to run general-purpose algorithms and perform massively parallel computations. In prior research, simulating vast quantities of sand in real time has been difficult due to the computational complexity of many colliding particles. With the use of GPUs, however, each particle collision can be parallelized, allowing for a dramatic performance increase. In addition, spatial partitioning will also provide a speed boost, as it helps limit the number of particle collision calculations. However, since the goal of this research is to simulate the look and behavior of sand, this work will go beyond simple particle collision. In particular, we can continue to use our parallel algorithms not only on single particles but on particle “clumps” that consist of multiple combined particles. Since sand is typically not spherical in nature, these particle “clumps” help to simulate the coarse nature of sand. In a simulation environment, multiple combined particles can be used to simulate the polygonal and granular nature of sand grains; thus, a diversity of sand particles can be generated. The interaction between these particles can then be parallelized using GPU hardware. As such, this research will investigate different graphics and physics techniques and determine the tradeoffs in performance and visual quality for sand simulation. An enhanced sand model through the use of high performance computing and GPUs has great potential to impact research for both earth and space scientists. Interaction with JPL has provided an opportunity for us to refine our simulation techniques so that they can ultimately be used for their vehicle simulator. As an added benefit of this work, advancements in simulating sand can also benefit scientists here on earth, especially in regard to understanding landslides and debris flows.

  2. A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control.

    PubMed

    Han, Min; Fan, Jianchao; Wang, Jun

    2011-09-01

    A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.

  3. A multi-dimensional nonlinearly implicit, electromagnetic Vlasov-Darwin particle-in-cell (PIC) algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacón, Luis; CoCoMans Team

    2014-10-01

    For decades, the Vlasov-Darwin model has been recognized to be attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations, which renders explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(mi/me) c/vTe) larger than the explicit CFL. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.

  4. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results demonstrate the effectiveness of the proposed algorithm.

  5. Different Scalable Implementations of Collision and Streaming for Optimal Computational Performance of Lattice Boltzmann Simulations

    NASA Astrophysics Data System (ADS)

    Geneva, Nicholas; Wang, Lian-Ping

    2015-11-01

    In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach to simulating incompressible flows, including turbulent flows. While LBM solves for more solution variables than the conventional CFD approach based on the macroscopic Navier-Stokes equations, it also offers opportunities for more efficient parallelization. In this talk we will describe several different algorithms that have been developed over the past 10-plus years, which can be used to implement the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel flows to particle-laden flows. We will cover the essential details of the implementation of each algorithm for simple 2D flows, and the challenges one faces when using a given algorithm for more complex simulations. The key is to explore the best use of data structures and cache memory. Two basic data structures will be discussed and the importance of effective data storage to maximize a CPU's cache will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared, along with important hardware-related issues.
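
    In their simplest "collide then stream" arrangement, the two core steps look as follows for a D2Q9 BGK lattice; the optimized algorithms discussed in the talk reorganize exactly these loops and data layouts for cache efficiency (a minimal Python sketch with periodic boundaries):

      import numpy as np

      nx, ny, tau = 64, 64, 0.8                      # lattice size, relaxation time
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 velocities
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights
      f = np.ones((9, nx, ny)) * w[:, None, None]          # fluid at rest

      def equilibrium(rho, ux, uy):
          cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
          return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2
                                           - 1.5*(ux**2 + uy**2))

      for step in range(100):
          rho = f.sum(axis=0)                              # macroscopic density
          ux = (c[:, 0, None, None] * f).sum(axis=0) / rho # macroscopic velocity
          uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
          f += (equilibrium(rho, ux, uy) - f) / tau        # BGK collision step
          for k in range(9):                               # streaming step
              f[k] = np.roll(f[k], shift=(c[k, 0], c[k, 1]), axis=(0, 1))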

  6. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing-down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.

  7. Uncertainty in simulated groundwater-quality trends in transient flow

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios; Robbins, Gary A.

    2013-01-01

    In numerical modeling of groundwater flow, the result of a given solution method is affected by the way in which transient flow conditions and geologic heterogeneity are simulated. An algorithm is demonstrated that simulates breakthrough curves at a pumping well by convolution-based particle tracking in a transient flow field for several synthetic basin-scale aquifers. In comparison to grid-based (Eulerian) methods, the particle (Lagrangian) method is better able to capture multimodal breakthrough caused by changes in pumping at the well, although the particle method may appear nonlinear because of the discrete nature of particle arrival times. Trial-and-error choice of the number of particles and release times can perhaps overcome this apparent nonlinearity. Heterogeneous aquifer properties tend to smooth the effects of transient pumping, making it difficult to separate their effects in parameter estimation. Porosity, a new parameter added for advective transport, can be accurately estimated using both grid-based and particle-based methods, but predictions can be highly uncertain, even in the simple, nonreactive case.

  8. Molecular Dynamics Simulations of Carbon Nanotubes in Water

    NASA Technical Reports Server (NTRS)

    Walther, J. H.; Jaffe, R.; Halicioglu, T.; Koumoutsakos, P.

    2000-01-01

    We study the hydrophobic/hydrophilic behavior of carbon nanotubes using molecular dynamics simulations. The energetics of the carbon-water interface are mainly dispersive but are augmented in the present study with a carbon quadrupole term acting on the charge sites of the water. The simulations indicate that this contribution is negligible in terms of modifying the structural properties of water at the interface. Simulations of two carbon nanotubes in water display a wetting and drying of the interface between the nanotubes depending on their initial spacing. Thus, initial tube spacings of 7 and 8 Å resulted in a drying of the interface, whereas spacings of > 9 Å remained wet during the course of the simulation. Finally, we present a novel particle-particle-particle-mesh algorithm for long-range potentials which allows for general (curvilinear) meshes and "black-box" fast solvers by adopting an influence matrix technique.

  9. Concentrating small particles in protoplanetary disks through the streaming instability

    NASA Astrophysics Data System (ADS)

    Yang, C.-C.; Johansen, A.; Carrera, D.

    2017-10-01

    Laboratory experiments indicate that direct growth of silicate grains via mutual collisions can only produce particles up to roughly millimeters in size. On the other hand, recent simulations of the streaming instability have shown that mm/cm-sized particles require an excessively high metallicity for dense filaments to emerge. Using a numerical algorithm for stiff mutual drag force, we perform simulations of small particles with significantly higher resolutions and longer simulation times than in previous investigations. We find that particles of dimensionless stopping time τs = 10^-2 and 10^-3 - representing cm- and mm-sized particles interior to the water ice line - concentrate themselves via the streaming instability at a solid abundance of a few percent. We thus revise a previously published critical solid abundance curve for the regime of τs ≪ 1. The solid density in the concentrated regions reaches values higher than the Roche density, indicating that direct collapse of particles down to mm sizes into planetesimals is possible. Our results hence bridge the gap in particle size between direct dust growth limited by bouncing and the streaming instability.
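
    The stiffness issue that such a drag algorithm addresses is visible already in the single-particle limit: for stopping time ts much smaller than the time step, an explicit drag update overshoots, while a backward-Euler update stays stable for any step size. A minimal Python sketch of that limit (the paper's algorithm treats the fully coupled, mutual particle-gas drag):

      # Backward-Euler update for dv/dt = (u_gas - v) / ts; unconditionally
      # stable even when dt >> ts, unlike the explicit forward-Euler update.
      def implicit_drag_update(v_particle, u_gas, dt, ts):
          return (v_particle + (dt / ts) * u_gas) / (1.0 + dt / ts)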

  10. Simulation of bipolar charge transport in nanocomposite polymer films

    NASA Astrophysics Data System (ADS)

    Lean, Meng H.; Chu, Wei-Ping L.

    2015-03-01

    This paper describes 3D particle-in-cell simulation of bipolar charge injection and transport through a nanocomposite film comprised of ferroelectric ceramic nanofillers in an amorphous polymer matrix. The classical electrical double layer (EDL) model for a monopolar core is extended (eEDL) to represent the nanofiller by replacing it with a dipolar core. Charge injection at the electrodes assumes metal-polymer Schottky emission at low to moderate fields and Fowler-Nordheim tunneling at high fields. Injected particles migrate via field-dependent Poole-Frenkel mobility and recombine with Monte Carlo selection. The simulation algorithm uses a boundary integral equation method for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit, assuring robust and rapid convergence. The model is capable of simulating a wide dynamic range spanning leakage current to pre-breakdown. Simulation results for BaTiO3 nanofillers in an amorphous polymer matrix indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and therefore the lowest level of charge trapping in the interaction zone. Charge recombination is also highest, at the cost of reduced leakage conduction charge. The eEDL model predicts the meandering pathways of charged particle trajectories.

  11. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation continually adjust their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation and thereby support studies of microcirculatory physiology. However, until now, this model has lacked appropriate methods for setting its parameters, which has limited further applications. This study proposed an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values in this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to the standard particle swarm optimization, the standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to a better agreement between mathematical simulation and animal experiment, rendering the model more reliable for future physiological studies.

  12. Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.

    PubMed

    Setia, Kanav; Whitfield, James D

    2018-04-28

    Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, it is first required to map the fermionic state to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF-transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of the hydrogen molecule we have compared the BKSF, Jordan-Wigner and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than for Jordan-Wigner but higher than for Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for the Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to further study of the BKSF algorithm for quantum simulation.

  13. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, Patrick; Dawson, Shawn; McKinley, Scott

    2016-04-20

    The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.

  14. Simulation of large particle transport near the surface under stable conditions: comparison with the Hanford tracer experiments

    NASA Astrophysics Data System (ADS)

    Kim, Eugene; Larson, Timothy

    A plume model is presented describing the downwind transport of large particles (1-100 μm) under stable conditions. The model includes both vertical variations in wind speed and turbulence intensity as well as an algorithm for particle deposition at the surface. Model predictions compare favorably with the Hanford single and dual tracer experiments of crosswind integrated concentration (for particles: relative bias=-0.02 and 0.16, normalized mean square error=0.61 and 0.14, for the single and dual tracer experiments, respectively), whereas the US EPA's fugitive dust model consistently overestimates the observed concentrations at downwind distances beyond several hundred meters (for particles: relative bias=0.31 and 2.26, mean square error=0.42 and 1.71, respectively). For either plume model, the measured ratio of particle to gas concentration is consistently overestimated when using the deposition velocity algorithm of Sehmel and Hodgson (1978. DOE Report PNL-SA-6721, Pacific Northwest Laboratories, Richland, WA). In contrast, these same ratios are predicted with relatively little bias when using the algorithm of Kim et al. (2000. Atmospheric Environment 34 (15), 2387-2397).

  15. FPGA Implementation of an Efficient Algorithm for the Calculation of Charged Particle Trajectories in Cosmic Ray Detectors

    NASA Astrophysics Data System (ADS)

    Villar, Xabier; Piso, Daniel; Bruguera, Javier D.

    2014-02-01

    This paper presents an FPGA implementation of an algorithm, previously published, for the reconstruction of cosmic rays' trajectories and the determination of the time of arrival and velocity of the particles. The accuracy and precision issues of the algorithm have been analyzed in order to propose a suitable implementation. Thus, a 32-bit fixed-point format has been used for the representation of the data values. Moreover, the dependencies among the different operations have been taken into account to obtain a highly parallel and efficient hardware implementation. The final hardware architecture requires 18 cycles to process every particle and has been exhaustively simulated to validate all the design decisions. The architecture has been mapped onto different commercial FPGAs, with a frequency of operation ranging from 300 MHz to 1.3 GHz, depending on the FPGA being used. Consequently, the number of particle trajectories processed per second is between 16 million and 72 million. The high number of particle trajectories calculated per second shows that the proposed FPGA implementation might also be used in high-rate environments such as those found in particle and nuclear physics experiments.

  16. A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, R; Wang, C; Dimarco, G

    2007-10-09

    If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and of Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
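
    The kinetic-kinetic step in such schemes follows the standard binary-collision prescription: randomly pair the particles in a cell, sample a scattering angle whose variance grows with the time step, and rotate each pair's relative velocity. The Python below sketches the Takizuka & Abe form for a single species with equal masses and equal weights (SI units); it is an illustration of the standard method, not the authors' hybrid code.

      import numpy as np

      EPS0 = 8.8541878128e-12                        # vacuum permittivity

      def collide_cell(v, q, m, density, coulomb_log, dt, rng):
          n_pair = len(v) // 2
          idx = rng.permutation(2 * n_pair).reshape(n_pair, 2)   # random pairing
          mr = 0.5 * m                               # reduced mass (equal masses)
          for i, j in idx:
              u = v[i] - v[j]
              umag = np.linalg.norm(u)
              if umag == 0.0:
                  continue
              # variance of delta = tan(theta/2) accumulated over the step
              var = (q**4 * density * coulomb_log * dt
                     / (8.0 * np.pi * EPS0**2 * mr**2 * umag**3))
              delta = rng.normal(0.0, np.sqrt(var))
              sin_t = 2.0 * delta / (1.0 + delta**2)           # sin(theta)
              omc = 2.0 * delta**2 / (1.0 + delta**2)          # 1 - cos(theta)
              phi = rng.uniform(0.0, 2.0 * np.pi)
              up = np.hypot(u[0], u[1]) or 1e-30               # perpendicular speed
              du = np.array([
                  (u[0]/up)*u[2]*sin_t*np.cos(phi)
                  - (u[1]/up)*umag*sin_t*np.sin(phi) - u[0]*omc,
                  (u[1]/up)*u[2]*sin_t*np.cos(phi)
                  + (u[0]/up)*umag*sin_t*np.sin(phi) - u[1]*omc,
                  -up*sin_t*np.cos(phi) - u[2]*omc,
              ])
              v[i] += 0.5 * du                       # conserves momentum and energy
              v[j] -= 0.5 * du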

  17. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
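
    The splitting step itself is simple; the subtlety lies in choosing the separation and smoothing-length ratios. A minimal Python sketch for one 2D particle (eps and alpha here are illustrative placeholders for the optimized values determined in the paper):

      import numpy as np

      def refine_particle(pos, mass, h, eps=0.4, alpha=0.6):
          """Replace one SPH particle by four daughters on a square pattern."""
          offsets = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) * eps * h / 2.0
          return [{"pos": pos + off,                 # square pattern around parent
                   "mass": mass / 4.0,               # total mass is conserved
                   "h": alpha * h}                   # reduced smoothing distance
                  for off in offsets]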

  18. The pseudo-compartment method for coupling partial differential equation and compartment-based models of diffusion.

    PubMed

    Yates, Christian A; Flegg, Mark B

    2015-05-06

    Spatial reaction-diffusion models have been employed to describe many emergent phenomena in biological systems. The modelling technique most commonly adopted in the literature implements systems of partial differential equations (PDEs), which assumes there are sufficient densities of particles that a continuum approximation is valid. However, owing to recent advances in computational power, the simulation, and therefore postulation, of computationally intensive individual-based models has become a popular way to investigate the effects of noise in reaction-diffusion systems in which regions of low copy numbers exist. The specific stochastic models with which we shall be concerned in this manuscript are referred to as 'compartment-based' or 'on-lattice'. These models are characterized by a discretization of the computational domain into a grid/lattice of 'compartments'. Within each compartment, particles are assumed to be well mixed and are permitted to react with other particles within their compartment or to transfer between neighbouring compartments. Stochastic models provide accuracy, but at the cost of significant computational resources. For models that have regions of both low and high concentrations, it is often desirable, for reasons of efficiency, to employ coupled multi-scale modelling paradigms. In this work, we develop two hybrid algorithms in which a PDE in one region of the domain is coupled to a compartment-based model in the other. Rather than attempting to balance average fluxes, our algorithms answer a more fundamental question: 'how are individual particles transported between the vastly different model descriptions?' First, we present an algorithm derived by carefully redefining the continuous PDE concentration as a probability distribution. While this first algorithm shows very strong convergence to analytical solutions of test problems, it can be cumbersome to simulate. Our second algorithm is a simplified and more efficient implementation of the first; it is derived in the continuum limit over the PDE region alone. We test our hybrid methods for functionality and accuracy in a variety of different scenarios by comparing the averaged simulations with analytical solutions of PDEs for mean concentrations.
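
    The compartment-based half of such a hybrid is itself easy to state: particles hop between neighbouring boxes of width h with per-particle rate D/h^2. A minimal fixed-step Python sketch of that building block (binomial tau-leaping and reflecting boundaries are simplifications; the paper couples such a lattice to a PDE across an interface):

      import numpy as np

      rng = np.random.default_rng(2)
      D, h, dt = 1.0, 0.1, 1e-4                      # diffusivity, box width, step
      d = D / h**2                                   # per-particle jump rate
      n = np.zeros(50, dtype=np.int64)
      n[0] = 1000                                    # all particles start at the left

      for _ in range(20000):
          left = rng.binomial(n, d * dt)             # jumps toward lower index
          right = rng.binomial(n - left, d * dt)     # jumps toward higher index
          left[0] = 0                                # reflecting boundaries
          right[-1] = 0
          n = n - left - right + np.roll(left, -1) + np.roll(right, 1)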

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  1. A treecode to simulate dust-plasma interactions

    NASA Astrophysics Data System (ADS)

    Thomas, D. M.; Holgate, J. T.

    2017-02-01

    The interaction of a small object with surrounding plasma is an area of plasma-physics research with a multitude of applications. This paper introduces the plasma octree code pot, a microscopic simulator of a spheroidal dust grain in a plasma. pot uses the Barnes-Hut treecode algorithm to perform N-body simulations of electrons and ions in the vicinity of a chargeable spheroid, also employing the Boris particle-motion integrator and Hutchinson’s reinjection algorithm from SCEPTIC; a description of the implementation of all three algorithms is provided. We present results from pot simulations of the charging of spheres in magnetised plasmas, and of spheroids in unmagnetised plasmas. The results call into question the validity of using the Boltzmann relation in hybrid PIC codes. Substantial portions of this paper are adapted from chapters 4 and 5 of the first author’s recent PhD dissertation.
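
    Of the three algorithms named, the Boris particle-motion integrator is the most self-contained and is sketched below in Python; this is the textbook scheme, not code from pot, and the electron parameters, fields, and time step are illustrative.

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """One Boris step: half electric kick, magnetic rotation, half kick, drift."""
        qmdt2 = q * dt / (2.0 * m)
        v_minus = v + qmdt2 * E                  # first half acceleration
        t = qmdt2 * B                            # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
        v_new = v_plus + qmdt2 * E               # second half acceleration
        return x + v_new * dt, v_new

    # Example: electron gyrating in a uniform magnetic field.
    x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
    E, B = np.zeros(3), np.array([0.0, 0.0, 1.0e-3])
    q, m = -1.602e-19, 9.109e-31
    for _ in range(1000):
        x, v = boris_push(x, v, E, B, q, m, 1.0e-11)
    ```

    The half-kick/rotation/half-kick structure is what gives the method its good long-term energy behaviour in gyromotion.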

  2. Simulation tools for particle-based reaction-diffusion dynamics in continuous space

    PubMed Central

    2014-01-01

    Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, would require different levels of modeling detail such as particle diffusion, geometrical confinement, particle volume exclusion or particle-particle interaction potentials. Higher levels of detail usually correspond to an increased number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating on continuous space. Four nested levels of modeling detail are identified that capture increasing amounts of detail. Their applicability to different biological questions is discussed, ranging from straightforward diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778

  3. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: [i] the proper formulation of particle methods at the molecular and continuum level for the discretization of the governing equations; [ii] the resolution of the wide range of time and length scales governing the phenomena under investigation; [iii] the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and [iv] the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.
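
    As a concrete example of the grid-particle interpolation mentioned above, here is a minimal 1D cloud-in-cell (CIC) scatter/gather pair; it is a generic textbook scheme on a periodic grid, not the authors' implementation.

    ```python
    import numpy as np

    def cic_deposit(positions, grid_n, h):
        """Scatter unit-weight particles onto a periodic 1D grid (cloud-in-cell)."""
        rho = np.zeros(grid_n)
        g = positions / h                           # positions in grid units
        i = np.floor(g).astype(int) % grid_n
        frac = g - np.floor(g)
        np.add.at(rho, i, 1.0 - frac)               # left node share
        np.add.at(rho, (i + 1) % grid_n, frac)      # right node share
        return rho / h

    def cic_gather(field, positions, h):
        """Interpolate a periodic grid field back to particle positions."""
        grid_n = field.size
        g = positions / h
        i = np.floor(g).astype(int) % grid_n
        frac = g - np.floor(g)
        return field[i] * (1.0 - frac) + field[(i + 1) % grid_n] * frac

    rho = cic_deposit(np.array([0.31, 0.77]), grid_n=8, h=0.125)
    ```

    The same pair of linear weights is used in both directions, which keeps the scatter and gather operations mutually consistent.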

  4. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  5. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands.

    PubMed

    Salem, Salem Ibrahim; Higa, Hiroto; Kim, Hyungjun; Kobayashi, Hiroshi; Oki, Kazuo; Oki, Taikan

    2017-07-31

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand the strengths and limitations of these algorithms. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporated linear regression provide the highest retrieval accuracy for the simulated dataset. Results from simulated datasets reveal that the 3-band (3b) algorithm that incorporate 665-nm and 680-nm bands and band tuning selection approach outperformed other algorithms with root mean square error (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particles (NAP) concentrations, show that the 3b_tuning_QP and 3b_680_QP outperform other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multi-algorithms should be included to reduce the error. Comparing the results of the measured and simulated datasets reveal that the algorithms that incorporate the 665-nm band outperform other algorithms for measured dataset (RMSE = 36.84 mg·m−3), while algorithms that incorporate the band tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3).
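
    The sketch below illustrates how a three-band index can be calibrated with quadratic-polynomial (QP) regression; the index form, band positions, and synthetic data are illustrative stand-ins rather than the paper's tuned bands or simulated IOP dataset.

    ```python
    import numpy as np

    # Illustrative three-band index built from reflectances at three wavelengths;
    # the paper tunes the band positions, which are fixed here for simplicity.
    def three_band_index(r665, r708, r753):
        return (1.0 / r665 - 1.0 / r708) * r753

    # Synthetic calibration data standing in for the simulated IOP dataset.
    rng = np.random.default_rng(0)
    chla_true = rng.uniform(1.0, 200.0, 500)
    x = 0.005 * chla_true + rng.normal(0.0, 0.02, 500)   # fake index values

    # Quadratic-polynomial (QP) regression of Chla on the index.
    a, b, c = np.polyfit(x, chla_true, deg=2)
    chla_hat = a * x**2 + b * x + c
    rmse = np.sqrt(np.mean((chla_hat - chla_true) ** 2))
    print(f"calibration RMSE: {rmse:.2f} mg m^-3")
    ```

    Swapping `deg=2` for `deg=1` or fitting in log space reproduces the LN and PW regression variants the paper compares.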

  6. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands

    PubMed Central

    Higa, Hiroto; Kobayashi, Hiroshi; Oki, Kazuo

    2017-01-01

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand the strengths and limitations of these algorithms. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporated linear regression provide the highest retrieval accuracy for the simulated dataset. Results from simulated datasets reveal that the 3-band (3b) algorithm that incorporate 665-nm and 680-nm bands and band tuning selection approach outperformed other algorithms with root mean square error (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particles (NAP) concentrations, show that the 3b_tuning_QP and 3b_680_QP outperform other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multi-algorithms should be included to reduce the error. Comparing the results of the measured and simulated datasets reveal that the algorithms that incorporate the 665-nm band outperform other algorithms for measured dataset (RMSE = 36.84 mg·m−3), while algorithms that incorporate the band tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3). PMID:28758984

  7. Stochastic Simulation of Lagrangian Particle Transport in Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Sun, Guangyuan

    This dissertation presents the development and validation of the One Dimensional Turbulence (ODT) multiphase model in the Lagrangian reference frame. ODT is a stochastic model that captures the full range of length and time scales and provides statistical information on fine-scale turbulent-particle mixing and transport at low computational cost. The flow evolution is governed by a deterministic solution of the viscous processes and a stochastic representation of advection through stochastic domain mapping processes. Three algorithms for Lagrangian particle transport are presented within the context of the ODT approach. The Type-I and -C models consider the particle-eddy interaction as instantaneous and continuous change of the particle position and velocity, respectively. The Type-IC model combines the features of the Type-I and -C models. The models are applied to multi-phase flows in homogeneous decaying turbulence and a turbulent round jet. Particle dispersion, dispersion coefficients, and velocity statistics are predicted and compared with experimental data. The models accurately reproduce the experimental data sets and capture particle inertial effects and the trajectory-crossing effect. A new adjustable particle parameter is introduced into the ODT model, and sensitivity analysis is performed to facilitate parameter estimation and selection. A novel algorithm for the two-way momentum coupling between the particle and carrier phases is developed in the ODT multiphase model. Momentum exchange between the phases is accounted for through particle source terms in the viscous diffusion. The source term is implemented in eddy events through a new kernel transformation, and an iterative procedure is required for eddy selection. This model is applied to a particle-laden turbulent jet flow, and simulation results are compared with experimental measurements. The effect of particle addition on the velocities of the gas phase is investigated. The development of the particle velocity and particle number distributions is illustrated. The simulation results indicate that the model qualitatively captures the turbulence modulation in the presence of different particle classes with different solid loadings. The model is then extended to simulate temperature evolution of the particles in a nonisothermal hot jet, in which heat transfer between the particles and gas is considered. The flow is bounded by a wall on one side of the domain. The simulations are performed over a range of particle inertia and thermal relaxation time scales and different initial particle locations. The present study investigates the post-blast-phase mixing between the particles, the environment that is intended to heat them up, and the ambient environment that dilutes the jet flow. The results indicate that the model can qualitatively predict the important particle statistics in the jet flame.

  8. TESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitriy Morozov, Tom Peterka

    2014-07-29

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets. As the scale of simulations and observations surpasses billions of particles, a distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this software is a distributed-memory parallel Delaunay and Voronoi tessellation algorithm, based on existing serial computational geometry libraries, that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include the addition of periodic and wall boundary conditions.
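
    On a single process, the core computation reduces to standard computational-geometry calls. The sketch below uses SciPy as a stand-in for the serial libraries such software wraps, and flags the Delaunay simplices that touch a halo of neighbour points, which is exactly the information a spatial decomposition must exchange; the point sets are synthetic.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay, Voronoi

    rng = np.random.default_rng(0)
    local = rng.random((1000, 3))                     # points owned by this block
    halo = rng.random((200, 3)) + [1.0, 0.0, 0.0]     # stand-in neighbour points

    pts = np.vstack([local, halo])
    tri = Delaunay(pts)
    vor = Voronoi(pts)

    # A local point's tessellation cell depends on neighbour points whenever a
    # Delaunay simplex containing it also touches the halo; those simplices mark
    # the points that a parallel algorithm must exchange across subdomains.
    touches_halo = (tri.simplices >= len(local)).any(axis=1)
    print("simplices needing exchanged points:", int(touches_halo.sum()))
    print("Voronoi regions computed:", len(vor.point_region))
    ```

    In the distributed algorithm this test is applied iteratively, growing the halo until every local cell is provably complete.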

  9. Lattice-Boltzmann simulations of microswimmer-tracer interactions

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; Stenhammar, Joakim

    2017-02-01

    Hydrodynamic interactions in systems composed of self-propelled particles, such as swimming microorganisms, and passive tracers have a significant impact on the tracer dynamics compared to the equivalent "dry" sample. However, such interactions are often difficult to take into account in simulations due to their computational cost. Here, we perform a systematic investigation of swimmer-tracer interaction using an efficient force-counterforce-based lattice-Boltzmann (LB) algorithm [De Graaf et al., J. Chem. Phys. 144, 134106 (2016), 10.1063/1.4944962] in order to validate its ability to capture the relevant low-Reynolds-number physics. We show that the LB algorithm reproduces far-field theoretical results well, both in a system with periodic boundary conditions and in a spherical cavity with no-slip walls, for which we derive expressions here. The force-lattice coupling of the LB algorithm leads to a "smearing out" of the flow field, which strongly perturbs the tracer trajectories at close swimmer-tracer separations, and we analyze how this effect can be accurately captured using a simple renormalized hydrodynamic theory. Finally, we show that care must be taken when using LB algorithms to simulate systems of self-propelled particles, since its finite momentum transport time can lead to significant deviations from theoretical predictions based on Stokes flow. These insights should prove relevant to the future study of large-scale microswimmer suspensions using these methods.

  10. A Comparison of 3D3C Velocity Measurement Techniques

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2013-11-01

    The velocity measurement fidelity of several 3D3C PIV measurement techniques including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV are compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for the defocusing PIV and 3D PTV to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one-another and to the true velocity field using several metrics.

  11. Subpixel displacement measurement method based on the combination of particle swarm optimization and gradient algorithm

    NASA Astrophysics Data System (ADS)

    Guang, Chen; Qibo, Feng; Keqin, Ding; Zhan, Gao

    2017-10-01

    A subpixel displacement measurement method based on the combination of particle swarm optimization (PSO) and the gradient algorithm (GA) was proposed to optimize the accuracy and speed of GA, making the approach better suited to engineering practice. An initial integer-pixel value was obtained using the global searching ability of PSO, and gradient operators were then adopted for the subpixel displacement search. A comparison was made between this method and GA using simulated speckle images and rigid-body displacement of metal specimens. The results showed that the computational accuracy of the combined PSO-GA method reached 0.1 pixel in the simulated speckle images, and even 0.01 pixel in the metal specimen. The computational efficiency and antinoise performance of the improved method were also markedly enhanced.
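
    The two-stage idea generalises readily: PSO locates the integer-pixel displacement on a similarity surface, then a gradient ascent refines it to subpixel precision. In the sketch below the correlation surface is a synthetic stand-in for speckle-image correlation, and the PSO coefficients are illustrative, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    true_shift = np.array([4.37, -2.81])

    def similarity(d):
        """Stand-in correlation surface peaking at the true subpixel shift."""
        return -np.sum((d - true_shift) ** 2)

    # Stage 1: PSO global search for the integer-pixel displacement.
    n, dim, iters = 20, 2, 60
    x = rng.uniform(-10, 10, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([similarity(p) for p in x])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = np.array([similarity(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmax()].copy()
    coarse = np.round(gbest)

    # Stage 2: gradient ascent from the integer estimate for subpixel accuracy.
    d, step, eps = coarse.astype(float), 0.05, 1e-4
    for _ in range(200):
        grad = np.array([(similarity(d + eps * e) - similarity(d - eps * e)) / (2 * eps)
                         for e in np.eye(dim)])
        d = d + step * grad
    print(coarse, d)   # d converges to the true subpixel shift
    ```

    In the real method the similarity function is a correlation between the reference and deformed speckle subsets, and the gradient stage uses image-gradient operators rather than finite differences.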

  12. Analytical solutions for coagulation and condensation kinetics of composite particles

    NASA Astrophysics Data System (ADS)

    Piskunov, Vladimir N.

    2013-04-01

    The processes of composite-particle formation, in which particles consist of a mixture of different materials, are essential to many practical problems: the analysis of the consequences of accidental releases into the atmosphere, the simulation of precipitation formation in clouds, and the description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for the numerical simulation of these processes require optimization of computational methods and verification of the numerical programs. Kinetic equations of composite-particle formation are given in this work in a concise (impurity-integrated) form. Coagulation, condensation and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases, and general laws for the redistribution of impurity fractions were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs that calculate the formation kinetics of composite particles in problems of practical importance.

  13. A Monte Carlo simulation study for the gamma-ray/neutron dual-particle imager using rotational modulation collimator (RMC).

    PubMed

    Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun

    2018-03-01

    The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications for radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for the dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method based on the analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
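
    The maximum-likelihood expectation-maximization update at the heart of such reconstructions is compact. Below is a generic MLEM iteration; the random system matrix is toy data, not a model of the actual RMC modulation patterns.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=100):
        """Maximum-likelihood expectation-maximization for y ~ Poisson(A @ lam).

        A: (n_measurements, n_pixels) system matrix (here: RMC modulation model).
        y: measured or simulated counts per rotation step.
        """
        lam = np.ones(A.shape[1])            # flat initial source image
        sens = A.sum(axis=0)                 # sensitivity (column sums)
        for _ in range(n_iter):
            proj = A @ lam
            proj[proj == 0] = 1e-12          # avoid division by zero
            lam *= (A.T @ (y / proj)) / sens
        return lam

    # Toy example with a random modulation matrix and a two-point source.
    rng = np.random.default_rng(0)
    A = rng.random((64, 16))
    truth = np.zeros(16); truth[3] = 50.0; truth[12] = 20.0
    y = rng.poisson(A @ truth)
    print(mlem(A, y).round(1))
    ```

    The multiplicative form guarantees non-negative source estimates, which is one reason MLEM is popular for count-limited imaging.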

  14. Direct Simulation of Friction Forces for Heavy Ions Interacting with a Warm Magnetized Electron Distribution

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Busby, R.; Fedotov, A. V.; Ben-Zvi, I.; Cary, J. R.; Stoltz, P.; Burov, A.; Litvinenko, V. N.; Messmer, P.; Abell, D.; Nieter, C.

    2005-06-01

    A proposed luminosity upgrade to RHIC includes a novel electron cooling section, which would use ~55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. High-current bunched electron beams are required for the RHIC cooler, resulting in very high transverse temperatures and relatively low values for the magnetized cooling logarithm. The accuracy of analytical formulae in this regime requires careful examination. Simulations of the friction coefficient, using the VORPAL code, for single gold ions passing once through the interaction region, are compared with theoretical calculations. Charged particles are advanced using a fourth-order Hermite predictor-corrector algorithm. The fields in the beam frame are obtained from direct calculation of Coulomb's law, which is more efficient than multipole-type algorithms for fewer than ~10^6 particles. Because the interaction time is so short, it is necessary to suppress the diffusive aspect of the ion dynamics through the careful use of positrons in the simulations.
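
    A direct evaluation of Coulomb's law is simple to write down. The following is a generic vectorised O(N^2) pairwise force sum; the softening length is a numerical-safety assumption, not part of the cited work, and the fully vectorised form also costs O(N^2) memory, so it suits modest particle counts.

    ```python
    import numpy as np

    K_E = 8.9875517923e9   # Coulomb constant, N m^2 C^-2

    def coulomb_forces(pos, q, soft=1e-12):
        """Direct pairwise Coulomb forces on all particles.

        pos: (N, 3) positions in m; q: (N,) charges in C.
        `soft` regularises the singularity at zero separation.
        """
        r = pos[:, None, :] - pos[None, :, :]            # pairwise separations
        dist2 = (r ** 2).sum(-1) + soft ** 2
        np.fill_diagonal(dist2, np.inf)                  # no self-force
        inv_r3 = dist2 ** -1.5
        return K_E * q[:, None] * ((q[None, :] * inv_r3)[..., None] * r).sum(axis=1)

    pos = np.random.default_rng(0).random((100, 3)) * 1e-3
    q = np.full(100, 1.602e-19)
    F = coulomb_forces(pos, q)                           # (100, 3) forces in N
    ```

    Production codes reduce the memory footprint by looping over particle pairs or tiles while keeping the same arithmetic.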

  15. An improved weakly compressible SPH method for simulating free surface flows of viscous and viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Deng, Xiao-Long

    2016-04-01

    In this paper, an improved weakly compressible smoothed particle hydrodynamics (SPH) method is proposed to simulate transient free surface flows of viscous and viscoelastic fluids. The improved SPH algorithm includes the implementation of (i) the mixed symmetric correction of kernel gradient to improve the accuracy and stability of traditional SPH method and (ii) the Rusanov flux in the continuity equation for improving the computation of pressure distributions in the dynamics of liquids. To assess the effectiveness of the improved SPH algorithm, a number of numerical examples including the stretching of an initially circular water drop, dam breaking flow against a vertical wall, the impact of viscous and viscoelastic fluid drop with a rigid wall, and the extrudate swell of viscoelastic fluid have been presented and compared with available numerical and experimental data in literature. The convergent behavior of the improved SPH algorithm has also been studied by using different number of particles. All numerical results demonstrate that the improved SPH algorithm proposed here is capable of modeling free surface flows of viscous and viscoelastic fluids accurately and stably, and even more important, also computing an accurate and little oscillatory pressure field.

  16. Optimal visual simulation of the self-tracking combustion of the infrared decoy based on the particle system

    NASA Astrophysics Data System (ADS)

    Hu, Qi; Duan, Jin; Wang, LiNing; Zhai, Di

    2016-09-01

    High-efficiency simulation testing of military weapons is important because live tests are costly and operational requirements are demanding. Among methods for simulating explosive smoke, those based on particle systems have attracted particular attention. To further improve on traditional simulations of the infrared decoy's motion over a real combustion cycle, this paper adopts a virtual simulation platform built on OpenGL and Vega Prime and, according to the decoy's radiation characteristics and aerodynamics, simulates the decoy's dynamic fuzzy characteristics over the real combustion cycle using a particle system based on the double depth peeling algorithm; it also resolves key issues such as the interface, coordinate conversion, and the saving and restoring of Vega Prime's state. The simulation experiment largely achieves the intended improvement, effectively increasing simulation fidelity and providing theoretical support for improving the performance of the infrared decoy.

  17. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  18. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  19. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe,; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.

  20. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in material science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach with C-language code in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousand particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing one's own code has the obvious advantage of teaching techniques applicable to new problems.
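
    The velocity-time integration described is typically realised as velocity Verlet, with one force evaluation per step and two half-step velocity kicks. The sketch below is in Python rather than the paper's C, and uses a Lennard-Jones potential as a stand-in for an embedded-atom potential.

    ```python
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Lennard-Jones pair forces (stand-in for an embedded-atom potential)."""
        r = pos[:, None, :] - pos[None, :, :]
        d2 = (r ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                 # no self-interaction
        s6 = (sigma ** 2 / d2) ** 3
        coef = 24.0 * eps * (2.0 * s6 ** 2 - s6) / d2
        return (coef[..., None] * r).sum(axis=1)

    def velocity_verlet(pos, vel, mass, dt, n_steps):
        """Velocity-time integration: one force call per step, two half kicks."""
        f = lj_forces(pos)
        for _ in range(n_steps):
            vel += 0.5 * dt * f / mass
            pos += dt * vel
            f = lj_forces(pos)
            vel += 0.5 * dt * f / mass
        return pos, vel

    # 27 atoms on a small cubic lattice, spacing 1.2 sigma.
    side = np.arange(3) * 1.2
    pos = np.array([[i, j, k] for i in side for j in side for k in side])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=1e-3, n_steps=1000)
    ```

    The symmetric kick-drift-kick structure is time-reversible, which is what gives Verlet-type schemes their good energy conservation over long runs.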

  1. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to shorten this duration. This work focused on the optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10^7 particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle-history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7. With the 5 MeV electron cut-off and 10 × 10^7 particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce the MC electron beam calculation duration while preserving its accuracy.
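
    Electron range rejection reduces to a simple test in the transport loop: if the residual range of a below-cutoff electron cannot carry it out of the current region, the history is terminated and the energy deposited locally. The sketch below illustrates only that logic; the record layout and the crude water-like range rule are assumptions, not EGSnrc interfaces.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Electron:
        energy_mev: float
        depth_cm: float        # depth inside the current slab-like region
        alive: bool = True

    def range_rejection(e, region_thickness_cm, e_cutoff_mev, csda_range_cm):
        """Terminate histories that cannot escape the current region.

        csda_range_cm maps kinetic energy (MeV) to residual CSDA range (cm)
        in the region's material; here it is supplied by the caller.
        """
        if e.energy_mev < e_cutoff_mev:
            nearest_face = min(e.depth_cm, region_thickness_cm - e.depth_cm)
            if csda_range_cm(e.energy_mev) < nearest_face:
                e.alive = False      # deposit remaining energy locally
        return e

    # Toy check with a rough water-like range rule (~0.5 cm per MeV).
    e = range_rejection(Electron(2.0, depth_cm=3.0), 6.0, e_cutoff_mev=5.0,
                        csda_range_cm=lambda E: 0.5 * E)
    print(e.alive)   # False: a ~1 cm residual range cannot cover 3 cm
    ```

    Raising the cut-off energy rejects more histories earlier, which is why the 5 MeV setting above gives the largest speedup at a small accuracy cost.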

  2. Determination of the spectral behaviour of atmospheric soot using different particle models

    NASA Astrophysics Data System (ADS)

    Skorupski, Krzysztof

    2017-08-01

    In the atmosphere, black carbon aggregates interact with both organic and inorganic matter. In many studies they are modeled using different, less complex geometries; however, some common simplifications can introduce significant inaccuracies into the subsequent light-scattering simulations. The goal of this study was to compare the spectral behavior of different, commonly used soot particle models. For light-scattering simulations in the visible spectrum, the ADDA algorithm was used. The results show that the relative extinction error δCext can, in some cases, be unexpectedly large. Therefore, before starting extensive simulations, it is important to know what error might occur.

  3. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100,000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.

  4. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.
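
    For orientation, the plain QPSO update (without the elitist-breeding transposon operators that distinguish EB-QPSO) is sketched below; the contraction-expansion coefficient, bounds, and test function are illustrative.

    ```python
    import numpy as np

    def qpso_minimize(f, dim, n=30, iters=200, beta=0.75, seed=0):
        """Plain QPSO: particles collapse toward a per-particle attractor p
        with a delta-well jitter scaled by the distance to the mean best."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n, dim))
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            mbest = pbest.mean(axis=0)                 # mean-best position
            phi = rng.random((n, dim))
            p = phi * pbest + (1.0 - phi) * g          # local attractors
            u = rng.random((n, dim))
            sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
            x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    best, val = qpso_minimize(lambda z: np.sum(z ** 2), dim=10)
    ```

    EB-QPSO would additionally recombine the personal and global bests through transposon operators when its breeding criteria are met, replacing them only when the offspring improve fitness.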

  5. Recent advances in nonlinear implicit, electrostatic particle-in-cell (PIC) algorithms

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacón, Luis; Barnes, Daniel

    2012-10-01

    An implicit 1D electrostatic PIC algorithm [Chen, Chacón, Barnes, J. Comput. Phys. 230 (2011)] has been developed that satisfies exact energy and charge conservation. The algorithm employs a kinetic-enslaved Jacobian-free Newton-Krylov method [ibid.] that ensures nonlinear convergence while taking timesteps comparable to the dynamical timescale of interest. Here we present two main improvements of the algorithm. The first is the formulation of a preconditioner based on linearized fluid equations, which are closed using available particle information. The computational benefit is that solving the fluid system is much cheaper than solving the kinetic one. The effectiveness of the preconditioner in accelerating nonlinear iterations on challenging problems will be demonstrated. A second improvement is the generalization of the original algorithm to curvilinear meshes [Chacón, Chen, Barnes, J. Comput. Phys., submitted (2012)], with a hybrid particle update of positions and velocities in logical and physical space, respectively [Swift, J. Comput. Phys. 126 (1996)]. The curvilinear algorithm remains exactly charge- and energy-conserving, and can be extended to multiple dimensions. We demonstrate the accuracy and efficiency of the algorithm with a 1D ion-acoustic shock wave simulation.

  6. A strategy to couple the material point method (MPM) and smoothed particle hydrodynamics (SPH) computational techniques

    NASA Astrophysics Data System (ADS)

    Raymond, Samuel J.; Jones, Bruce; Williams, John R.

    2018-01-01

    A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, and as such no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, SPH is a purely particle method, while MPM combines particles with a mesh; this coupling therefore also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in the future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transform information from one method to another.

  7. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparisons demonstrate that the method is energy-efficient and superior. PMID:26880874

  8. Parking simulation of three-dimensional multi-sized star-shaped particles

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Chen, Huisu; Xu, Wenxiang; Liu, Lin

    2014-04-01

    The shape and size of particles may have a great impact on the microstructure as well as the physico-properties of particulate composites. However, it is challenging to configure a parking system of particles to a geometrical shape that is close to realistic grains in particulate composites. In this work, with the assistance of x-ray tomography and a spherical harmonic series, we present a star-shaped particle that is close to realistic arbitrary-shaped grains. To realize such a hard particle parking structure, an inter-particle overlapping detection algorithm is introduced. A serial sectioning approach is employed to visualize the particle parking structure for the purpose of justifying the reliability of the overlapping detection algorithm. Furthermore, the validity of the area and perimeter of solids in any arbitrary section of a plane calculated using a numerical method is verified by comparison with those obtained using an image analysis approach. This contribution is helpful to further understand the dependence of the micro-structure and physico-properties of star-shaped particles on the realistic geometrical shape.

  9. State-of-charge estimation in lithium-ion batteries: A particle filter approach

    NASA Astrophysics Data System (ADS)

    Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.

    2016-11-01

    The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
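
    The backbone of such a filter is the sequential importance resampling (SIR) step. Below is a generic SIR update, not the paper's tether-particle algorithm; `propagate` and `likelihood` are caller-supplied stand-ins for the discretised battery model and the measurement model.

    ```python
    import numpy as np

    def sir_step(particles, weights, y, propagate, likelihood, rng):
        """One sequential-importance-resampling step of a particle filter.

        propagate(x, rng) samples the state transition; likelihood(y, x) scores
        a measurement. A battery SOC filter would use the discretised P2D model
        for the former and the voltage measurement equation for the latter.
        """
        particles = np.array([propagate(p, rng) for p in particles])
        weights = weights * np.array([likelihood(y, p) for p in particles])
        weights /= weights.sum()
        # Systematic resampling when the effective sample size degenerates.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
            edges = (rng.random() + np.arange(len(particles))) / len(particles)
            idx = np.searchsorted(np.cumsum(weights), edges)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    rng = np.random.default_rng(0)
    parts = rng.normal(0.5, 0.1, 200)              # initial SOC hypotheses
    w = np.full(200, 1.0 / 200)
    parts, w = sir_step(parts, w, y=0.62,
                        propagate=lambda x, r: x + r.normal(0.0, 0.01),
                        likelihood=lambda y, x: np.exp(-0.5 * ((y - x) / 0.05) ** 2),
                        rng=rng)
    ```

    The paper's contribution is to sweep this update independently in time and spatial coordinates, which keeps the effective dimension per sweep low and avoids the degeneracy that plain SIR suffers in high-dimensional states.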

  10. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    Controllers that use PID parameters require a good tuning method to improve control system performance. PID tuning methods are divided into two groups, namely classical methods and artificial-intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial-intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method for the hydraulic positioning system.

  11. Simulator for beam-based LHC collimator alignment

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Aßmann, Ralph; Redaelli, Stefano; Sammut, Nicholas

    2014-02-01

    In the CERN Large Hadron Collider, collimators need to be set up to form a multistage hierarchy to ensure efficient multiturn cleaning of halo particles. Automatic algorithms were introduced during the first run to reduce the beam time required for beam-based setup, improve the alignment accuracy, and reduce the risk of human errors. Simulating the alignment procedure would allow for off-line tests of alignment policies and algorithms. A simulator was developed based on a diffusion beam model to generate the characteristic beam loss signal spike and decay produced when a collimator jaw touches the beam, which is observed in a beam loss monitor (BLM). Empirical models derived from the available measurement data are used to simulate the steady-state beam loss and crosstalk between multiple BLMs. The simulator design is presented, together with simulation results and comparison to measurement data.

  12. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller.

  13. Dynamic simulations of the inhomogeneous sedimentation of rigid fibres

    NASA Astrophysics Data System (ADS)

    Butler, Jason E.; Shaqfeh, Eric S. G.

    2002-10-01

    We have simulated the dynamics of suspensions of fibres sedimenting in the limit of zero Reynolds number. In these simulations, the dominant inter-particle force arises from hydrodynamic interactions between the rigid, non-Brownian fibres. The simulation algorithm uses slender-body theory to model the linear and rotational velocities of each fibre. To include far-field interactions between the fibres, the line distribution of force on each fibre is approximated by making a Legendre polynomial expansion of the disturbance velocity on the fibre, where only the first two terms of the expansion are retained in the calculation. Thus, the resulting linear force distribution can be specified completely by a centre-of-mass force, a couple, and a stresslet. Short-range interactions between particles are included using a lubrication approximation, and an infinite suspension is simulated by using periodic boundary conditions. Our numerical results confirm that the sedimentation of these non-spherical, orientable particles differs qualitatively from the sedimentation of spherical particles. The simulations demonstrate that an initially homogeneous, settling suspension develops clusters, or streamers, which are particle rich surrounded by clarified fluid. The instability which causes the heterogeneous structure arises solely from hydrodynamic interactions which couple the particle orientation and the sedimentation rate in particle clusters. Depending upon the concentration and aspect ratio, the formation of clusters of particles can enhance the sedimentation rate of the suspension to a value in excess of the maximum settling speed of an isolated particle. The suspension of fibres tends to orient with gravity during the sedimentation process. The average velocities and orientations, as well as their distributions, compare favourably with previous experimental measurements.

  14. Biomass particle models with realistic morphology and resolved microstructure for simulations of intraparticle transport phenomena

    DOE PAGES

    Ciesielski, Peter N.; Crowley, Michael F.; Nimlos, Mark R.; ...

    2014-12-09

    Biomass exhibits a complex microstructure of directional pores that impact how heat and mass are transferred within biomass particles during conversion processes. However, models of biomass particles used in simulations of conversion processes typically employ oversimplified geometries such as spheres and cylinders and neglect intraparticle microstructure. In this study, we develop 3D models of biomass particles with size, morphology, and microstructure based on parameters obtained from quantitative image analysis. We obtain measurements of particle size and morphology by analyzing large ensembles of particles that result from typical size reduction methods, and we delineate several representative size classes. Microstructural parameters, including cell wall thickness and cell lumen dimensions, are measured directly from micrographs of sectioned biomass. A general constructive solid geometry algorithm is presented that produces models of biomass particles based on these measurements. Next, we employ the parameters obtained from image analysis to construct models of three different particle size classes from two different feedstocks representing a hardwood poplar species (Populus tremuloides, quaking aspen) and a softwood pine (Pinus taeda, loblolly pine). Finally, we demonstrate the utility of the models and the effects of explicit microstructure by performing finite-element simulations of intraparticle heat and mass transfer, and the results are compared to similar simulations using traditional simplified geometries. In conclusion, we show how the behavior of particle models with more realistic morphology and explicit microstructure departs from that of spherical models in simulations of transport phenomena and that species-dependent differences in microstructure impact simulation results in some cases.

  15. Interactions of non-spherical particles in simple flows

    NASA Astrophysics Data System (ADS)

    Niazi, Mehdi; Brandt, Luca; Costa, Pedro; Breugem, Wim-Paul

    2015-11-01

    The behavior of particles in a flow affects the global transport and rheological properties of the mixture. In recent years much effort has therefore been devoted to the development of efficient methods for the direct numerical simulation (DNS) of the motion of spherical rigid particles immersed in an incompressible fluid. However, the literature on non-spherical particle suspensions is quite scarce, despite the fact that such suspensions are more common. We develop a numerical algorithm to simulate finite-size spheroidal particles in shear flows to gain new understanding of the flow of particle suspensions. In particular, we wish to understand the role of inertia and its effect on the flow behavior. For this purpose, DNS simulations with a direct-forcing immersed boundary method are used, with collision and lubrication models for particle-particle and particle-wall interactions. We will discuss pair interactions, relative motion and rotation, of two sedimenting spheroids and show that the interaction time increases significantly for non-spherical particles. More interestingly, we show that the particles are attracted to each other from larger lateral displacements. This has important implications for collision kernels. This work was supported by the European Research Council Grant No. ERC-2013-CoG-616186, TRITOS, and by the Swedish Research Council (VR).

  16. Accelerating navigation in the VecGeom geometry modeller

    NASA Astrophysics Data System (ADS)

    Wenzel, Sandro; Zhang, Yang; for the VecGeom Developers

    2017-10-01

    The VecGeom geometry library is a relatively recent effort aiming to provide a modern and high performance geometry service for particle detector simulation in hierarchical detector geometries common to HEP experiments. One of its principal targets is the efficient use of vector SIMD hardware instructions to accelerate geometry calculations for single track as well as multi-track queries. Previously, excellent performance improvements compared to Geant4/ROOT could be reported for elementary geometry algorithms at the level of single shape queries. In this contribution, we will focus on the higher level navigation algorithms in VecGeom, which are the most important components as seen from the simulation engines. We will first report on our R&D effort and developments to implement SIMD enhanced data structures to speed up the well-known “voxelised” navigation algorithms, ubiquitously used for particle tracing in complex detector modules consisting of many daughter parts. Second, we will discuss complementary new approaches to improve navigation algorithms in HEP. These ideas are based on a systematic exploitation of static properties of the detector layout as well as automatic code generation and specialisation of the C++ navigator classes. Such specialisations reduce the overhead of generic- or virtual function based algorithms and enhance the effectiveness of the SIMD vector units. These novel approaches go well beyond the existing solutions available in Geant4 or TGeo/ROOT, achieve a significantly superior performance, and might be of interest for a wide range of simulation backends (GeantV, Geant4). We exemplify this with concrete benchmarks for the CMS and ALICE detectors.

  17. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  18. Rapid Optimal SPH Particle Distributions in Spherical Geometries For Creating Astrophysical Initial Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskin, Cody; Owen, J. Michael

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.
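
    The paper's two algorithms are not reproduced here, but the flavour of such constructions can be conveyed with a simpler stand-in: golden-spiral points on concentric shells, each shell populated so that equal-mass particles follow an arbitrary radial density function. All parameters below are illustrative.

    ```python
    import numpy as np

    def fibonacci_shell(n, radius):
        """n approximately equal-area points on a sphere via the golden spiral."""
        i = np.arange(n) + 0.5
        phi = np.pi * (1.0 + 5.0 ** 0.5) * i          # golden-angle longitudes
        cos_t = 1.0 - 2.0 * i / n                      # uniform in cos(theta)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        return radius * np.stack([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t], axis=1)

    def sphere_particles(m_per_particle, rho, r_max, dr=1e-3):
        """Concentric shells whose populations follow the radial density rho(r)."""
        pts, r, acc = [], dr, 0.0
        while r < r_max:
            acc += 4.0 * np.pi * r * r * rho(r) * dr   # mass since last shell
            n = int(acc / m_per_particle)
            if n > 0:
                pts.append(fibonacci_shell(n, r))
                acc -= n * m_per_particle
            r += dr
        return np.vstack(pts)

    cloud = sphere_particles(1e-3, lambda r: 1.0 / (1.0 + r * r), r_max=1.0)
    ```

    Because each particle carries the same mass, the particle number density automatically tracks rho(r), which is the property the paper's initial conditions are designed to guarantee with better surface regularity.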


  1. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and theoretical functions for 3D DEM scalability and memory usage are derived. Many performance-critical implementation details are handled carefully to achieve high performance and scalability: minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2,048 compute nodes simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided and demonstrate high speedup and excellent scalability. It is also found that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies of the micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
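
    As a toy illustration of the ghost/border-layer idea (the function name and the one-dimensional decomposition are our own simplification, not the authors' code), the following sketch selects the particles a block must send to each neighbor as ghosts, namely those within one interaction cutoff of a block face:

        import numpy as np

        def ghost_indices_1d(x, lo, hi, cutoff):
            """Indices of particles to be sent to the lower/upper
            neighboring block as ghosts: those within one interaction
            cutoff of the block faces at `lo` and `hi`."""
            to_lower = np.nonzero(x < lo + cutoff)[0]
            to_upper = np.nonzero(x > hi - cutoff)[0]
            return to_lower, to_upper

        # toy block [0, 10) with cutoff 0.5
        x = np.random.default_rng(0).uniform(0.0, 10.0, size=1000)
        send_lo, send_hi = ghost_indices_1d(x, 0.0, 10.0, 0.5)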

  2. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs

    NASA Astrophysics Data System (ADS)

    Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji

    2013-03-01

    This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as the numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.

  3. A simulator for discrete quantum walks on lattices

    NASA Astrophysics Data System (ADS)

    Rodrigues, J.; Paunković, N.; Mateus, P.

    In this paper, we present a simulator for two-particle quantum walks on the line and one-particle quantum walks on a two-dimensional square lattice. It can be used to investigate the equivalence between the two cases (one- and two-particle walks) for various boundary conditions (open, circular, reflecting, absorbing, and their combinations). For the case of a single walker on a two-dimensional lattice, the simulator can also implement walks on a Möbius strip. Other topologies for the walker are simulated as well, such as certain types of planar graphs with degree up to 4, by considering missing links over the lattice. The main purpose of the simulator is to study the genuinely quantum effects on the global properties of the two-particle joint probability distribution that arise from the entanglement between the walkers/axes. For that purpose, the simulator is designed to compute various quantities, such as the entanglement and classical correlations, (classical and quantum) mutual information, the average distance between the two walkers, different hitting times, and quantum discord. These quantities are of vital importance in designing possible algorithmic applications of quantum walks, namely in search, 3-SAT problems, etc. The simulator can also implement static partial measurements of the particle(s) positions and dynamic breaking of the links between certain nodes, both of which can be used to investigate the effects of decoherence on the walker(s). Finally, the simulator can be used to investigate dynamic Anderson-like particle localization by varying the coin operators of certain nodes on the line/lattice. We also present some illustrative and relevant examples of one- and two-particle quantum walks in various scenarios. The tool was implemented in C and is available on-line at http://qwsim.weebly.com/.
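
    For readers unfamiliar with the basic building block, the following minimal sketch (ours, not the C tool above) evolves a one-particle discrete Hadamard walk on a ring, i.e., with the circular boundary condition mentioned above:

        import numpy as np

        def hadamard_walk(n_sites, n_steps):
            """One-particle discrete quantum walk on a ring of n_sites.
            State psi[c, x]: coin c in {0, 1}, position x."""
            psi = np.zeros((2, n_sites), dtype=complex)
            psi[:, n_sites // 2] = np.array([1.0, 1j]) / np.sqrt(2)  # symmetric start
            H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)             # coin operator
            for _ in range(n_steps):
                psi = H @ psi                      # coin toss on every site
                psi[0] = np.roll(psi[0], -1)       # coin 0 moves the walker left
                psi[1] = np.roll(psi[1], +1)       # coin 1 moves the walker right
            return (np.abs(psi) ** 2).sum(axis=0)  # position distribution

        prob = hadamard_walk(n_sites=201, n_steps=80)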

  4. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics (BD) simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed-balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237
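
    The following toy sketch (ours) shows the overall shape of such a scheme: a Brownian displacement is proposed and potential collisions are detected before the move is committed. The published algorithm resolves detected collisions as reaction attempts with a detailed-balance acceptance rule, which is not reproduced here; the sketch simply rejects overlapping moves:

        import numpy as np

        def bd_step(pos, D, dt, radius, rng):
            """One Brownian-dynamics step for N spheres with a naive
            pre-collision check: moves that create hard overlaps are
            rejected. (The published scheme instead treats detected
            collisions as reaction attempts.)"""
            trial = pos + rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)
            for i in range(len(trial)):
                d = np.linalg.norm(trial - trial[i], axis=1)
                d[i] = np.inf
                if np.any(d < 2 * radius):   # potential collision detected
                    trial[i] = pos[i]        # reject this particle's move
            return trial

        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 50.0, size=(100, 3))
        pos = bd_step(pos, D=1.0, dt=0.01, radius=0.5, rng=rng)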

  5. Crystal-structure prediction via the Floppy-Box Monte Carlo algorithm: Method and application to hard (non)convex particles

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe how to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra: the method of separating axes and a triangular-tessellation-based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.

  6. Parametric Quantum Search Algorithm as Quantum Walk: A Quantum Simulation

    NASA Astrophysics Data System (ADS)

    Ellinas, Demosthenes; Konstandakis, Christos

    2016-02-01

    Parametric quantum search algorithm (PQSA) is a form of quantum search that results from relaxing the unitarity of the original algorithm. PQSA can naturally be cast in the form of a quantum walk (QW) by means of the formalism of oracle algebra. This is due to the fact that the completely positive trace-preserving search map used by PQSA admits a unitarization (unitary dilation) à la quantum walk, at the expense of introducing an auxiliary quantum coin-qubit space. The ensuing QW describes a process of spiral motion, chosen to be driven by two unitary Kraus generators generating planar rotations of the Bloch vector around an axis. The quadratic acceleration of quantum search translates into an equivalent quadratic saving in the number of coin qubits in the QW analogue. The Hamiltonian operator associated with the QW model is obtained and is shown to represent a multi-particle, long-range-interacting quantum system that simulates parametric search. Finally, the relation of the PQSA-QW simulator to the QW search algorithm is elucidated.

  7. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established, is based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  8. A novel algorithm for solving the true coincident counting issues in Monte Carlo simulations for radiation spectroscopy.

    PubMed

    Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A

    2015-06-01

    Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often determined experimentally. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity germanium detector for ⁶⁰Co gamma-ray spectroscopy was accurately modeled to validate the developed algorithm. The simulated pulse-height spectrum agreed well qualitatively with the spectrum measured using the high-purity germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
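
    A toy numeric illustration (ours, plain Monte Carlo rather than Geant4) of why launching all cascade photons of a decay within one event changes the spectrum: summing the per-event deposits produces the true-coincidence sum peak that single-primary sampling misses. The detection probability below is an arbitrary stand-in:

        import numpy as np

        rng = np.random.default_rng(7)
        n_decays = 200_000
        e_gamma = np.array([1.173, 1.332])   # 60Co cascade energies (MeV)
        eff = 0.08                           # toy full-absorption probability

        # Launch both cascade photons per decay and sum the deposits that
        # occur in the same event, so true-coincidence (sum) counts appear.
        absorbed = rng.random((n_decays, 2)) < eff
        pulse = (absorbed * e_gamma).sum(axis=1)
        pulse = pulse[pulse > 0]

        hist, edges = np.histogram(pulse, bins=300, range=(0.0, 3.0))
        # peaks at 1.173 and 1.332 MeV, plus the 2.505 MeV sum peak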

  9. Particle Based Simulations of Complex Systems with MP2C : Hydrodynamics and Electrostatics

    NASA Astrophysics Data System (ADS)

    Sutmann, Godehard; Westphal, Lidia; Bolten, Matthias

    2010-09-01

    Particle-based simulation methods are well-established approaches to exploring system behavior on microscopic to mesoscopic time and length scales. With the development of new computer architectures, it becomes more and more important to concentrate on local algorithms which do not need global data transfer or the reorganisation of large arrays of data across processors. This requirement applies above all to long-range interactions in particle systems, i.e., mainly hydrodynamic and electrostatic contributions. In this article, emphasis is given to the implementation and parallelization of the Multi-Particle Collision Dynamics method for hydrodynamic contributions and a splitting scheme based on multigrid for electrostatic contributions. Implementations are done for massively parallel architectures and are demonstrated for the IBM Blue Gene/P architecture Jugene in Jülich.
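
    As a minimal serial sketch of the Multi-Particle Collision Dynamics idea (standard SRD in 2D with a random grid shift; parameter values and names are ours, not the MP2C implementation), one streaming-plus-collision step can be written as:

        import numpy as np

        def srd_step(pos, vel, box, a, dt, alpha, rng):
            """One streaming + collision step of multi-particle collision
            dynamics (SRD), 2D variant: free streaming, then rotation of
            the velocity fluctuations about the per-cell mean by +/-alpha."""
            pos[:] = (pos + vel * dt) % box               # streaming
            shift = rng.uniform(0.0, a, size=2)           # random grid shift
            ix, iy = ((pos + shift) // a).astype(int).T   # cell index per particle
            keys = ix * 1_000_000 + iy                    # one key per cell
            for k in np.unique(keys):
                m = keys == k
                u = vel[m].mean(axis=0)                   # cell-mean velocity
                s = rng.choice((-1.0, 1.0))               # random rotation sense
                c, sn = np.cos(alpha), s * np.sin(alpha)
                R = np.array([[c, -sn], [sn, c]])
                vel[m] = u + (vel[m] - u) @ R.T           # momentum-conserving kick

        rng = np.random.default_rng(1)
        pos = rng.uniform(0.0, 32.0, (10_000, 2))
        vel = rng.normal(0.0, 1.0, (10_000, 2))
        srd_step(pos, vel, box=32.0, a=1.0, dt=0.1, alpha=2.27, rng=rng)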

  10. Supercomputer simulations of structure formation in the Universe

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tomoaki

    2017-06-01

    We describe the implementation and performance of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the domain sizes are automatically set so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a two-trillion-particle benchmark simulation, the average performance on the full system of the K computer (82,944 nodes, 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.

  11. Modeling and simulation of dust behaviors behind a moving vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jingfang

    Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive, and visually convincing model of dust behavior behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behavior behind moving vehicles. The system includes physically-based models, particle systems, rendering engines, and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle motion controlled by users from the GUI. I have also tested, through the GUI or off-line scripts, the factors that affect the physical behavior and graphical appearance of the dust particles. The simulations are done on a Silicon Graphics Octane workstation. The animation of dust behavior is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-looking dust behavior. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation; for example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts, and processes. The performance study shows that both the time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculation needed for the numerical integration of the fluid dynamics to converge, which usually takes about 4-5 minutes to reach steady state.
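
    The pressure-correction step mentioned above rests on a Poisson solve; a minimal two-dimensional SOR solver (ours, with Dirichlet boundaries and illustrative parameters) looks as follows:

        import numpy as np

        def sor_poisson(f, h, omega=1.7, tol=1e-6, max_iter=10_000):
            """Solve laplacian(p) = f on a square grid with spacing h and
            p = 0 on the boundary, using successive over-relaxation (SOR)."""
            p = np.zeros_like(f)
            n = f.shape[0]
            for _ in range(max_iter):
                err = 0.0
                for i in range(1, n - 1):
                    for j in range(1, n - 1):
                        new = (p[i+1, j] + p[i-1, j] + p[i, j+1] + p[i, j-1]
                               - h * h * f[i, j]) / 4.0
                        err = max(err, abs(new - p[i, j]))
                        p[i, j] = (1 - omega) * p[i, j] + omega * new
                if err < tol:
                    break
            return p

        n = 33
        h = 1.0 / (n - 1)
        f = np.zeros((n, n))
        f[n // 2, n // 2] = 1.0 / h**2     # point source in the center
        p = sor_poisson(f, h)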

  12. An implementation of a tree code on a SIMD, parallel computer

    NASA Technical Reports Server (NTRS)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y, and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor MasPar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree-search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single-instruction, multiple-data (SIMD) computers can be used for these simulations. The cost/performance ratio of SIMD machines like the MasPar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple-instruction, multiple-data (MIMD) parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
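
    A serial sketch (ours) of the balanced-tree construction described above: the particle list is recursively sorted and split at the median along cycling x, y, z axes, so the tree is completely balanced for power-of-two particle counts:

        import numpy as np

        def build_balanced_tree(idx, pos, axis=0, leaf_size=1):
            """Recursively split a particle index set at the median along
            cycling x, y, z axes, mirroring the sorted-list construction.
            Returns a nested (left, right) tuple; leaves hold index arrays."""
            if len(idx) <= leaf_size:
                return idx
            order = idx[np.argsort(pos[idx, axis])]   # sort along current axis
            mid = len(order) // 2
            nxt = (axis + 1) % 3
            return (build_balanced_tree(order[:mid], pos, nxt, leaf_size),
                    build_balanced_tree(order[mid:], pos, nxt, leaf_size))

        pos = np.random.default_rng(2).random((1024, 3))
        tree = build_balanced_tree(np.arange(len(pos)), pos)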

  13. Simulations of Coulomb systems with slab geometry using an efficient 3D Ewald summation method

    NASA Astrophysics Data System (ADS)

    dos Santos, Alexandre P.; Girotto, Matheus; Levin, Yan

    2016-04-01

    We present a new approach to efficiently simulate electrolytes confined between infinite charged walls using a 3D Ewald summation method. The optimal performance is achieved by separating the electrostatic potential produced by the charged walls from the electrostatic potential of the electrolyte. The electric field produced by the 3D periodic images of the walls is constant inside the simulation cell, with the fields produced by the transverse images of the charged plates canceling out. The non-neutral confined electrolyte in an external potential can be simulated using 3D Ewald summation with a suitable renormalization of the electrostatic energy, to remove a divergence, and a correction that accounts for the conditional convergence of the resulting lattice sum. The new algorithm is at least an order of magnitude faster than the usual simulation methods for the slab geometry and can be further sped up by adopting a particle-particle particle-mesh approach.

  14. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    NASA Astrophysics Data System (ADS)

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.

    2016-04-01

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.

  15. Integrating Geochemical Reactions with a Particle-Tracking Approach to Simulate Nitrogen Transport and Transformation in Aquifers

    NASA Astrophysics Data System (ADS)

    Cui, Z.; Welty, C.; Maxwell, R. M.

    2011-12-01

    Lagrangian particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advection-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code, SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we take is a hybrid one: the reactive multispecies transport process is operator-split into two steps: (1) the physical movement of the particles, including attachment to and detachment from solid surfaces, is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions, including biodegradation, are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1 the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain under variably saturated conditions. A potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details and verification results of the enhanced code against one-dimensional analytical solutions and other existing numerical models will be presented, in addition to a discussion of implementation challenges.
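
    A heavily simplified single-species sketch of the operator splitting described above (ours; the actual code handles multiple species, attachment/detachment, and variably saturated 3D flow): particles first take a random-walk step, are binned to a grid, and a Monod-type reaction is then integrated on the grid:

        import numpy as np
        from scipy.integrate import solve_ivp

        def transport_step(x, v, D, dt, rng):
            """Step 1: Lagrangian random-walk advection-dispersion."""
            return x + v * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=x.shape)

        def reaction_step(c, dt, mu_max=1.0, K=0.5):
            """Step 2: Monod-type degradation of a single species,
            integrated with an ODE solver over one transport step."""
            rhs = lambda t, c: -mu_max * c / (K + c)
            return solve_ivp(rhs, (0.0, dt), c).y[:, -1]

        rng = np.random.default_rng(3)
        x = rng.uniform(0.0, 10.0, 10_000)            # particle positions
        x = transport_step(x, v=0.3, D=0.05, dt=0.1, rng=rng)
        conc = np.histogram(x, bins=50, range=(0, 10))[0].astype(float)
        conc = reaction_step(conc, dt=0.1)            # reacted grid values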

  16. On the characterization of inhomogeneity of the density distribution in supercritical fluids via molecular dynamics simulation and data mining analysis.

    PubMed

    Idrissi, Abdenacer; Vyalov, Ivan; Georgi, Nikolaj; Kiselev, Michael

    2013-10-10

    We combined molecular dynamics simulation and the DBSCAN algorithm (Density-Based Spatial Clustering of Applications with Noise) in order to characterize the local density inhomogeneity distribution in supercritical fluids. DBSCAN is an algorithm that is capable of finding arbitrarily shaped density domains, where domains are defined as dense regions separated by low-density regions. The inhomogeneity of the density domain distributions of the Ar system in sub- and supercritical conditions along the 50 bar isobar is associated with the occurrence of a maximum in the fluctuation of the number of particles in the density domains. This maximum coincides with the temperature, Tα, at which the thermal expansion coefficient reaches its maximum. Furthermore, using Voronoi polyhedra analysis, we characterized the structure of the density domains. The results show that with increasing temperature below Tα, the increase of the inhomogeneity is mainly associated with the density fluctuation of the border particles of the density domains, while with increasing temperature above Tα, the decrease of the inhomogeneity is associated with the core particles.
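
    For illustration, the clustering step can be reproduced with scikit-learn's DBSCAN on the atom coordinates of a snapshot (a sketch with toy data and arbitrary eps/min_samples values; the authors' own implementation details may differ):

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Toy stand-in for an MD snapshot: positions of N atoms (nm).
        rng = np.random.default_rng(4)
        pos = rng.uniform(0.0, 10.0, size=(5000, 3))

        # eps ~ first coordination-shell distance; min_samples sets how
        # dense a neighborhood must be for a core "density domain" point.
        labels = DBSCAN(eps=0.35, min_samples=5).fit_predict(pos)

        n_domains = labels.max() + 1              # label -1 marks noise
        sizes = np.bincount(labels[labels >= 0])  # particles per domain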

  17. An efficient quasi-3D particle tracking-based approach for transport through fractures with application to dynamic dispersion calculation.

    PubMed

    Wang, Lichun; Cardenas, M Bayani

    2015-08-01

    The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Particle-in-cell simulations on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, C.; Zhou, X.; Li, J.; Huang, M. C.; Zhao, Y.

    2014-10-01

    We will show our recent progress in using GPUs to accelerate the PIC code OSIRIS [Fonseca et al., LNCS 2331, 342 (2002)]. The OSIRIS parallel structure is retained, and the computation-intensive kernels are shipped to GPUs. Algorithms for the kernels are adapted for the GPU, including high-order charge-conserving current deposition schemes with little branching and parallel particle sorting [Kong et al., JCP 230, 1676 (2011)]. These algorithms make efficient use of the GPU shared memory. This work was supported by the U.S. Department of Energy under Grant No. DE-FC02-04ER54789 and by NSF under Grant No. PHY-1314734.

  19. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    NASA Astrophysics Data System (ADS)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ², 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1) with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particle positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated considering cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle-bunching effects arising from coherent (resonance) Cerenkov electron-beam interactions within micro-machined slow-wave structures.
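
    The particle advance mentioned above uses the standard relativistic Boris push; a textbook sketch (ours, in Cartesian coordinates, not the authors' axisymmetric Whitney-form implementation) is:

        import numpy as np

        def boris_push(u, E, B, q_m, dt, c=299792458.0):
            """Relativistic Boris push for u = gamma*v: half electric
            kick, magnetic rotation, half electric kick."""
            u_minus = u + 0.5 * q_m * dt * E
            gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
            t = 0.5 * q_m * dt * B / gamma            # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            u_prime = u_minus + np.cross(u_minus, t)
            u_plus = u_minus + np.cross(u_prime, s)
            return u_plus + 0.5 * q_m * dt * E

        # one step for an electron (q/m in C/kg) in toy fields
        u = boris_push(np.array([1e7, 0.0, 0.0]),
                       E=np.array([0.0, 1e4, 0.0]),
                       B=np.array([0.0, 0.0, 0.1]),
                       q_m=-1.7588e11, dt=1e-12)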

  20. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As the random moves of an FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. Copyright © 2010 Elsevier Inc. All rights reserved.
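
    The core of a fast random walk is the geometry-adapted step length: each move jumps to the surface of the largest sphere that fits in the pore space around the walker. A minimal sketch (ours, without the gradient-encoding phase accumulation) with a user-supplied distance-to-boundary function:

        import numpy as np

        def frw_walk(x0, dist_to_boundary, n_steps, rng, eps=1e-6):
            """Fast random walk: each move jumps to a uniformly random
            point on the largest sphere around x inside the pore space,
            so steps adapt to the local geometric length scale."""
            x = np.array(x0, dtype=float)
            for _ in range(n_steps):
                r = dist_to_boundary(x)
                if r < eps:                      # reached the interface
                    break
                d = rng.normal(size=3)
                x += r * d / np.linalg.norm(d)   # uniform direction, radius r
            return x

        # toy pore: interior of a unit sphere
        x = frw_walk([0.0, 0.0, 0.0],
                     lambda x: 1.0 - np.linalg.norm(x),
                     n_steps=1000, rng=np.random.default_rng(5))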

  1. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher-order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculating the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in the FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003

  2. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    PubMed

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, we propose here a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest), so it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that the CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality-of-service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust, and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms.
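
    A compact sketch of the competitive swarm optimizer update (the standard pairwise-competition scheme; function names, bounds, and the phi social factor are illustrative, and the paper's adaptive opposition-based search is omitted):

        import numpy as np

        def cso_minimize(f, dim, n=40, iters=200, phi=0.1, rng=None):
            """Competitive swarm optimizer: particles compete in random
            pairs; only the loser moves, learning from the winner and
            from the swarm mean position."""
            rng = rng or np.random.default_rng()
            X = rng.uniform(-5.0, 5.0, (n, dim))
            V = np.zeros((n, dim))
            for _ in range(iters):
                pairs = rng.permutation(n).reshape(-1, 2)  # n must be even
                mean = X.mean(axis=0)
                for a, b in pairs:
                    w, l = (a, b) if f(X[a]) < f(X[b]) else (b, a)
                    r1, r2, r3 = rng.random((3, dim))
                    V[l] = (r1 * V[l] + r2 * (X[w] - X[l])
                            + phi * r3 * (mean - X[l]))
                    X[l] = X[l] + V[l]
            return X[np.argmin([f(x) for x in X])]

        best = cso_minimize(lambda x: np.sum(x**2), dim=10)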

  3. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    PubMed Central

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights that set the proportional, integral, and differential parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize online the initial values of the proportional, integral, and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. Experiments and simulations described in this paper show that the PNPID algorithm effectively stabilized the queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
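
    A minimal sketch (ours, with illustrative gains; the weight-update rule is a generic supervised Hebbian form, and the PSO seeding of weights and learning rates described above is omitted) of the single-neuron incremental PID idea for queue-length control:

        def pnpid_rate(q_target, q, state, K=0.05, lr=1e-3):
            """Single-neuron incremental PID controller for the queue
            length of a sensor node: the neuron weights play the role of
            the P, I, D gains and are adapted online. Returns the rate
            adjustment and the updated state (weights, past errors)."""
            w, e1, e2 = state                    # weights, e(k-1), e(k-2)
            e = q_target - q
            x = (e - e1, e, e - 2.0 * e1 + e2)   # P, I, D components
            s = sum(abs(wi) for wi in w)
            du = K * sum(wi / s * xi for wi, xi in zip(w, x))
            w = [wi + lr * e * du * xi for wi, xi in zip(w, x)]  # Hebbian-style update
            return du, (w, e, e1)

        state = ([0.3, 0.4, 0.3], 0.0, 0.0)      # initial weights and errors
        du, state = pnpid_rate(q_target=50.0, q=63.0, state=state)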


  4. Autonomous sensor manager agents (ASMA)

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.

    2004-04-01

    Autonomous sensor manager agents (ASMA) are presented as an algorithm to perform sensor management within a multisensor fusion network. The design of the hybrid ant-system/particle-swarm agents is described in detail, with some insight into their performance. Although the algorithm is designed for the general sensor management problem, a simulation example involving two radar systems is presented. Algorithmic parameters are determined by the size of the region covered by the sensor network, the number of sensors, and the number of parameters to be selected. With straightforward modifications, this algorithm can be adapted to most sensor management problems.

  5. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables, including the runaway probability, the slowing-down and runaway mean times, and the energy-limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which significantly reduces the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.

  6. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimal design of the additional drilling pattern, i.e., defining the optimum number and locations of additional boreholes. Quite a lot of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization is defined as the objective function for uncertainty assessment and the problem is solved through optimization methods. Although kriging variance has many advantages for defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the uncertainty assessment of domain boundaries, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function make the algorithm output sensitive to variations in grade, domain boundaries, and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.

  7. Effect of finite particle number sampling on baryon number fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinheimer, Jan; Koch, Volker

    2017-09-28

    The effects of finite particle number sampling on the net baryon number cumulants, extracted from fluid dynamical simulations, are studied. The commonly used finite particle number sampling procedure introduces an additional Poissonian (or multinomial, if global baryon number conservation is enforced) contribution which increases the extracted moments of the baryon number distribution. If this procedure is applied to a fluctuating fluid dynamics framework, one severely overestimates the actual cumulants. We show that the sampling of so-called test particles suppresses the additional contribution to the moments by at least one power of the number of test particles. We demonstrate this method in a numerical fluid dynamics simulation that includes the effects of spinodal decomposition due to a first-order phase transition. Furthermore, in the limit where antibaryons can be ignored, we derive analytic formulas which capture exactly the effect of particle sampling on the baryon number cumulants. These formulas may be used to test the various numerical particle sampling algorithms.
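
    The variance inflation and its suppression by test particles can be checked numerically in a one-cell toy model (ours; no antibaryons, Gaussian stand-in for the fluid fluctuations): Var(N) ≈ Var(B) + <B> for ordinary sampling, but Var ≈ Var(B) + <B>/N_test with N_test test particles per baryon:

        import numpy as np

        rng = np.random.default_rng(6)
        b_fluid = rng.normal(100.0, 2.0, size=100_000)  # fluid-cell baryon number
        n_test = 10                                     # test particles per baryon

        sampled = rng.poisson(b_fluid)                  # standard sampling
        test = rng.poisson(b_fluid * n_test) / n_test   # test-particle sampling

        # Var(sampled) ~ Var(B) + <B>;  Var(test) ~ Var(B) + <B>/n_test
        print(b_fluid.var(), sampled.var(), test.var())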


  8. Brownian dynamics simulation of rigid particles of arbitrary shape in external fields.

    PubMed

    Fernandes, Miguel X; de la Torre, José García

    2002-12-01

    We have developed a Brownian dynamics simulation algorithm to generate Brownian trajectories of an isolated, rigid particle of arbitrary shape in the presence of electric fields or any other external agents. Starting from the generalized diffusion tensor, which can be calculated with the existing HYDRO software, the new program BROWNRIG (including a case-specific subprogram for the external agent) carries out a simulation that is analyzed later to extract the observable dynamic properties. We provide a variety of examples of utilization of this method, which serve as tests of its performance, and also illustrate its applicability. Examples include free diffusion, transport in an electric field, and diffusion in a restricting environment.
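
    The core update can be sketched as follows (ours, not the BROWNRIG code): correlated translational/rotational noise is generated from the Cholesky factor of the 6x6 generalized diffusion tensor, and the drift follows from the external forces and torques:

        import numpy as np

        def brownian_step(q, D, force, kT, dt, rng):
            """One Brownian-dynamics step for a rigid body in generalized
            coordinates q (3 position + 3 orientation, small-rotation
            approximation) with a 6x6 diffusion tensor D: drift from the
            external force/torque plus correlated noise with B B^T = D."""
            B = np.linalg.cholesky(D)
            drift = D @ force(q) / kT * dt
            noise = np.sqrt(2.0 * dt) * B @ rng.normal(size=6)
            return q + drift + noise

        D = np.diag([1.0, 1.0, 0.8, 0.5, 0.5, 0.3])   # toy diffusion tensor
        q = np.zeros(6)
        q = brownian_step(q, D, lambda q: np.array([0, 0, 1.0, 0, 0, 0]),
                          kT=1.0, dt=1e-3, rng=np.random.default_rng(8))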

  9. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
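
    At small scale, the serial building block the paper parallelizes is readily available; e.g., with SciPy (a usage sketch on toy data, unrelated to the authors' distributed implementation):

        import numpy as np
        from scipy.spatial import Delaunay, Voronoi

        pts = np.random.default_rng(9).random((1000, 3))  # toy particle set
        tri = Delaunay(pts)       # tetrahedra indexing into pts
        vor = Voronoi(pts)        # Voronoi vertices and regions

        neighbors = tri.vertex_neighbor_vertices  # natural-neighbor adjacency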

  10. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

    Leap-frog-based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g., laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation and exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g., a Weibel instability run for 10⁷ timesteps, and LPI applications).

  11. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for particle simulation methods (PSM) involving short-range interactions, such as smoothed particle hydrodynamics (SPH), the moving particle semi-implicit (MPS) method, and the discrete element method (DEM). These are needed to handle the billions of particles modeled on large distributed-memory computer systems. Our method utilizes a flexible orthogonal domain decomposition, allowing the sub-domain boundaries in a column to be different for each row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual, and load balance is achieved by minimizing this residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently, by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architectures, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.

  12. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.

  13. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities, especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions, including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65,536 compute cores of the supercomputer Shaheen II located at KAUST.

    Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
    Licensing provisions: BSD 3-clause
    Programming language: C, C++
    External routines/libraries: For compiling: SCons, MPI (optional)
    Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
    Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016
    Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
    Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver, whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
    Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available).
    Reasons for the new version: We have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows running molecular-continuum Couette flow simulations out of the box. No external tools or simulation codes are required anymore; however, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests (remark on MaMiCo V1.1).
    Summary of revisions: Open boundary forcing; multi-instance MD sampling; support for transient molecular-continuum systems
    Restrictions: Currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de.
    Additional comments: Please see the file license_mamico.txt for further details regarding distribution and advertising of this software.

  14. The 2nd Symposium on the Frontiers of Massively Parallel Computations

    NASA Technical Reports Server (NTRS)

    Mills, Ronnie (Editor)

    1988-01-01

    Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.

  15. Observation and Control of Hamiltonian Chaos in Wave-particle Interaction

    NASA Astrophysics Data System (ADS)

    Doveil, F.; Elskens, Y.; Ruzzon, A.

    2010-11-01

    Wave-particle interactions are central in plasma physics. The paradigm beam-plasma system can be advantageously replaced by a traveling wave tube (TWT) to allow their study in a much less noisy environment. This led to detailed analysis of the self-consistent interaction between unstable waves and an either cold or warm electron beam. More recently a test cold beam has been used to observe its interaction with externally excited wave(s). This allowed observing the main features of Hamiltonian chaos and testing a new method to efficiently channel chaotic transport in phase space. To simulate accurately and efficiently the particle dynamics in the TWT and other 1D particle-wave systems, a new symplectic, symmetric, second-order numerical algorithm is developed, using particle position as the independent variable, with a fixed spatial step. This contribution reviews: presentation of the TWT and its connection to plasma physics, resonant interaction of a charged particle in electrostatic waves, observation of particle trapping and transition to chaos, test of control of chaos, and description of the simulation algorithm. The velocity distribution function of the electron beam is recorded with a trochoidal energy analyzer at the output of the TWT. An arbitrary waveform generator is used to launch a prescribed spectrum of waves along the 4 m long helix of the TWT. The nonlinear synchronization of particles by a single wave, responsible for Landau damping, is observed. We explore the resonant velocity domain associated with a single wave as well as the transition to large-scale chaos when the resonant domains of two waves and their secondary resonances overlap. This transition exhibits a devil's staircase behavior when the excitation level is increased, in agreement with numerical simulation. A new strategy for control of chaos, building barriers to transport in phase space, is successfully tested, along with its robustness. The underlying concepts extend far beyond the field of electron devices and plasma physics.
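
    The position-as-independent-variable idea can be illustrated with a generic symmetric drift-kick-drift step for a particle crossing a single electrostatic wave E(x, t) = E0 cos(kx - wt); this is a sketch under invented parameters, not the authors' exact integrator:

      import numpy as np

      E0, k, w = 0.05, 2*np.pi, 2*np.pi    # invented wave amplitude and wavenumbers
      dx = 1e-3                            # fixed spatial step along the tube

      def step(x, t, v):
          # Symmetric step in x: half drift in t, kick of v^2 (d(v^2/2)/dx = E),
          # then a second half drift in t. Assumes v stays positive (beam particle).
          t += 0.5*dx/v
          v = np.sqrt(v*v + 2.0*dx*E0*np.cos(k*(x + 0.5*dx) - w*t))
          t += 0.5*dx/v
          return x + dx, t, v

      x, t, v = 0.0, 0.0, 1.2              # launch one test particle
      for _ in range(4000):
          x, t, v = step(x, t, v)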

  18. Analysis of wind-blown sand movement over transverse dunes.

    PubMed

    Jiang, Hong; Huang, Ning; Zhu, Yuanjian

    2014-12-01

    Wind-blown sand movement often occurs in a very complicated desert environment where sand dunes and ripples are the basic forms. However, most current studies on theoretical and numerical models of wind-blown sand movement only consider ideal conditions such as steady wind velocity, a flat sand surface, etc. In fact, the windward slope gradient plays a great role in the lift-off and saltation of sand particles. In this paper, we propose a numerical model for the coupling effect between wind flow and saltating sand particles to simulate wind-blown sand movement over a slope surface; we use the SIMPLE algorithm to calculate the wind flow and simulate sand transport by tracking sand particle trajectories. We furthermore compare the results of the numerical simulation with wind tunnel experiments. The results show that sand particles have an obvious effect on wind flow, especially over the leeward slope. This study is a preliminary study of wind-blown sand movement over complex terrain, and is of significance for the control of dust storms and land desertification.
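
    A minimal sketch of the trajectory-tracking half of such a model, integrating one saltating grain under gravity and a simplified Stokes drag from a prescribed wind; the grain parameters are invented, the drag law is a simplification, and the back-coupling to the wind field is omitted:

      import numpy as np

      rho_s, d, mu = 2650.0, 2.0e-4, 1.8e-5   # invented grain density/diameter; air viscosity
      m = rho_s*np.pi*d**3/6.0                # grain mass
      g = np.array([0.0, -9.81])

      def step(x, v, u_air, dt=1e-4):
          # One explicit step of a grain trajectory: Stokes drag plus gravity.
          a = 3.0*np.pi*mu*d*(u_air - v)/m + g
          return x + dt*v, v + dt*a

      x, v = np.zeros(2), np.array([0.0, 1.0])     # lift-off with vertical velocity
      while x[1] >= 0.0:                           # integrate until the grain lands
          x, v = step(x, v, u_air=np.array([5.0, 0.0]))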

  19. Analysis of Wind-blown Sand Movement over Transverse Dunes

    PubMed Central

    Jiang, Hong; Huang, Ning; Zhu, Yuanjian

    2014-01-01

    Wind-blown sand movement often occurs in a very complicated desert environment where sand dunes and ripples are the basic forms. However, most current studies on theoretical and numerical models of wind-blown sand movement only consider ideal conditions such as steady wind velocity, a flat sand surface, etc. In fact, the windward slope gradient plays a great role in the lift-off and saltation of sand particles. In this paper, we propose a numerical model for the coupling effect between wind flow and saltating sand particles to simulate wind-blown sand movement over a slope surface; we use the SIMPLE algorithm to calculate the wind flow and simulate sand transport by tracking sand particle trajectories. We furthermore compare the results of the numerical simulation with wind tunnel experiments. The results show that sand particles have an obvious effect on wind flow, especially over the leeward slope. This study is a preliminary study of wind-blown sand movement over complex terrain, and is of significance for the control of dust storms and land desertification. PMID:25434372

  20. A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2008-01-01

    An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a non-optimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.

  1. Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images

    PubMed Central

    Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali

    2015-01-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077

  2. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-05-13

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely fashion. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, multi-peak optimization problem for which it is difficult to obtain satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared with previous methods, detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for guiding the design of GRNs with superior robust adaptation.
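
    A sketch of a plain global-best PSO initialized with Latin hypercube sampling, the ingredient highlighted above (the paper's best-neighbor update, crossover, and elitism are not reproduced; the 12-dimensional Rastrigin objective is an invented stand-in for the GRN indices):

      import numpy as np

      rng = np.random.default_rng(0)

      def latin_hypercube(n, lo, hi):
          # One stratified sample per interval in each dimension, shuffled per dimension.
          d = len(lo)
          u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
               + rng.random((n, d))) / n
          return lo + u*(hi - lo)

      def pso(f, lo, hi, n=30, iters=300, w=0.7, c1=1.5, c2=1.5):
          x = latin_hypercube(n, lo, hi)
          v = np.zeros_like(x)
          p, pf = x.copy(), np.apply_along_axis(f, 1, x)     # personal bests
          g = p[pf.argmin()].copy()                          # global best
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w*v + c1*r1*(p - x) + c2*r2*(g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.apply_along_axis(f, 1, x)
              better = fx < pf
              p[better], pf[better] = x[better], fx[better]
              g = p[pf.argmin()].copy()
          return g, pf.min()

      rastrigin = lambda z: 10*len(z) + np.sum(z**2 - 10*np.cos(2*np.pi*z))
      best, fbest = pso(rastrigin, lo=np.full(12, -5.12), hi=np.full(12, 5.12))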

  3. Systematic dimensionality reduction for continuous-time quantum walks of interacting fermions

    NASA Astrophysics Data System (ADS)

    Izaac, J. A.; Wang, J. B.

    2017-09-01

    To extend the continuous-time quantum walk (CTQW) to simulate P distinguishable particles on a graph G composed of N vertices, the Hamiltonian of the system is expanded to act on an NP-dimensional Hilbert space, in effect, simulating the multiparticle CTQW on graph G via a single-particle CTQW propagating on the Cartesian graph product G□P. The properties of the Cartesian graph product have been well studied, and classical simulations of multiparticle CTQWs are common in the literature. However, the above approach is generally applied as is when simulating indistinguishable particles, with the particle statistics then applied to the propagated NP state vector to determine walker probabilities. We address the following question: How can we modify the underlying graph structure G□P in order to simulate multiple interacting fermionic CTQWs with a reduction in the size of the state space? In this paper, we present an algorithm for systematically removing "redundant" and forbidden quantum states from consideration, which provides a significant reduction in the effective dimension of the Hilbert space of the fermionic CTQW. As a result, as the number of interacting fermions in the system increases, the classical computational resources required no longer increases exponentially for fixed N.
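
    The state-space reduction can be illustrated in a few lines: for indistinguishable fermions, only antisymmetric combinations of distinct vertex occupations are physical, so the basis shrinks from N^P tuples to C(N, P) combinations. A minimal sketch with invented sizes:

      from itertools import combinations
      from math import comb

      N, P = 6, 3                  # invented example: 6 vertices, 3 fermions
      full_dim = N**P              # distinguishable-particle Hilbert space
      fermi_basis = list(combinations(range(N), P))   # antisymmetric occupations
      assert len(fermi_basis) == comb(N, P)
      print(full_dim, len(fermi_basis))               # 216 versus 20 basis states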

  4. Computer Simulation of Fracture in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2006-01-01

    Aerogels are of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While the gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. In this work, we investigate the strength and fracture behavior of silica aerogels using a molecular statics-based computer simulation technique. The gels' structure is simulated via a Diffusion Limited Cluster Aggregation (DLCA) algorithm, which produces fractal structures representing experimentally observed aggregates of so-called secondary particles, themselves composed of amorphous silica primary particles an order of magnitude smaller. We have performed multi-length-scale simulations of fracture in silica aerogels, in which the interaction between two secondary particles is assumed to be described by a Morse pair potential parameterized such that the potential range is much smaller than the secondary particle size. These Morse parameters are obtained by atomistic simulation of models of the experimentally-observed amorphous silica "bridges," with the fracture behavior of these bridges modeled via molecular statics using a Morse/Coulomb potential for silica. We consider the energetics of the fracture, and compare qualitative features of low- and high-density gel fracture.
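
    For reference, the inter-particle interaction named above has a simple closed form. This is a generic Morse pair potential with placeholder parameters, not the values fitted to the silica bridge simulations:

      import numpy as np

      def morse(r, De=1.0, a=1.0, r0=1.0):
          # Morse pair potential: well depth De, stiffness a, equilibrium distance r0.
          return De*(1.0 - np.exp(-a*(r - r0)))**2 - De

      def morse_force(r, De=1.0, a=1.0, r0=1.0):
          # Radial force, -dU/dr (attractive for r > r0, repulsive for r < r0).
          e = np.exp(-a*(r - r0))
          return -2.0*De*a*e*(1.0 - e)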

  5. Parallel Fokker–Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-01-01

    A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state of the art computer cluster technologies.

  6. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2017-03-14

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  7. Radial particle distributions in PARMILA simulation beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boicourt, G.P.

    1984-03-01

    The estimation of beam spill in particle accelerators is becoming of greater importance as higher current designs are being funded. To the present, no numerical method for predicting beam-spill has been available. In this paper, we present an approach to the loss-estimation problem that uses probability distributions fitted to particle-simulation beams. The properties of the PARMILA code's radial particle distribution are discussed, and a broad class of probability distributions are examined to check their ability to fit it. The possibility that the PARMILA distribution is a mixture is discussed, and a fitting distribution consisting of a mixture of two generalized gamma distributions is found. An efficient algorithm to accomplish the fit is presented. Examples of the relative prediction of beam spill are given. 26 references, 18 figures, 1 table.
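
    A sketch of how such a two-component generalized gamma mixture fit can be set up with off-the-shelf tools (this is not the paper's 1984 algorithm; the data below are synthetic and all starting values are invented):

      import numpy as np
      from scipy import optimize, stats

      def nll(params, r):
          # Negative log-likelihood of a two-component generalized gamma mixture.
          wgt, a1, c1, s1, a2, c2, s2 = params
          if not 0.0 < wgt < 1.0 or min(a1, c1, s1, a2, c2, s2) <= 0.0:
              return np.inf
          pdf = (wgt*stats.gengamma.pdf(r, a1, c1, scale=s1)
                 + (1.0 - wgt)*stats.gengamma.pdf(r, a2, c2, scale=s2))
          return -np.sum(np.log(pdf + 1e-300))

      # Synthetic stand-in for radial particle coordinates (invented parameters).
      rng = np.random.default_rng(1)
      r = np.concatenate([stats.gengamma.rvs(2.0, 1.5, scale=1.0, size=500, random_state=rng),
                          stats.gengamma.rvs(3.0, 1.0, scale=2.0, size=500, random_state=rng)])
      x0 = [0.5, 2.0, 1.5, 1.0, 3.0, 1.0, 2.0]
      fit = optimize.minimize(nll, x0, args=(r,), method="Nelder-Mead",
                              options={"maxiter": 20000, "maxfev": 20000})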

  8. Application of importance sampling to the computation of large deviations in nonequilibrium processes.

    PubMed

    Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek

    2011-03-01

    We present an algorithm for finding the probabilities of rare events in nonequilibrium processes. The algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently. By keeping track of the relative weight of phase-space trajectories generated by the modified and the original dynamics, one can obtain the required probabilities. The algorithm is tested on two model systems of steady-state particle and heat transport, where we find a huge improvement over direct simulation methods.
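
    In the simplest i.i.d. setting, the scheme of evolving modified dynamics and tracking relative trajectory weights reduces to classic exponential tilting. A minimal sketch with invented numbers, estimating P(S_n >= b) for a sum of standard normal steps:

      import numpy as np

      rng = np.random.default_rng(0)
      n, b = 100, 30.0                 # steps and rare-event threshold (invented)
      theta = b/n                      # tilt chosen so the event becomes typical

      # Modified dynamics: steps drawn from N(theta, 1) instead of N(0, 1).
      s = rng.normal(theta, 1.0, size=(50000, n)).sum(axis=1)
      weights = np.exp(-theta*s + 0.5*n*theta**2)     # original/modified density ratio
      p_rare = np.mean(weights*(s >= b))              # approx 1.35e-3 in this example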

  9. Free-Propagator Reweighting Integrator for Single-Particle Dynamics in Reaction-Diffusion Models of Heterogeneous Protein-Protein Interaction Systems

    PubMed Central

    Hummer, Gerhard

    2015-01-01

    We present a new algorithm for simulating reaction-diffusion equations at single-particle resolution. Our algorithm is designed to be both accurate and simple to implement, and to be applicable to large and heterogeneous systems, including those arising in systems biology applications. We combine the use of the exact Green's function for a pair of reacting particles with the approximate free-diffusion propagator for position updates to particles. Trajectory reweighting in our free-propagator reweighting (FPR) method recovers the exact association rates for a pair of interacting particles at all times. FPR simulations of many-body systems accurately reproduce the theoretically known dynamic behavior for a variety of different reaction types. FPR does not suffer from the loss of efficiency common to other path-reweighting schemes, first, because corrections apply only in the immediate vicinity of reacting particles and, second, because by construction the average weight factor equals one upon leaving this reaction zone. FPR applications include the modeling of pathways and networks of protein-driven processes where reaction rates can vary widely and thousands of proteins may participate in the formation of large assemblies. With a limited amount of bookkeeping necessary to ensure proper association rates for each reactant pair, FPR can account for changes to reaction rates or diffusion constants as a result of reaction events. Importantly, FPR can also be extended to physical descriptions of protein interactions with long-range forces, as we demonstrate here for Coulombic interactions. PMID:26005592

  10. The artificial retina for track reconstruction at the LHC crossing rate

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Citterio, M.; Caponio, F.; Cusimano, A.; Geraci, A.; Marino, P.; Morello, M. J.; Neri, N.; Punzi, G.; Piucci, A.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.

    2016-04-01

    We present the results of an R&D study for a specialized processor capable of precisely reconstructing events with hundreds of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus suitable for processing LHC events at the full crossing frequency. For this purpose we design and test a massively parallel pattern-recognition algorithm, inspired by the current understanding of the mechanisms adopted by the primary visual cortex of mammals in the early stages of visual-information processing. The detailed geometry and charged-particle activity of a large tracking detector are simulated and used to assess the performance of the artificial retina algorithm. We find that high-quality tracking in large detectors is possible with sub-microsecond latencies when the algorithm is implemented in modern, high-speed, high-bandwidth FPGA devices.

  11. Research of converter transformer fault diagnosis based on improved PSO-BP algorithm

    NASA Astrophysics Data System (ADS)

    Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping

    2017-09-01

    BP (Back Propagation) neural networks and conventional Particle Swarm Optimization (PSO) converge repeatedly toward the global best particle in the early stage, are easily trapped in local optima, and yield low diagnosis accuracy when applied to converter transformer fault diagnosis. To overcome these disadvantages, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm improves the inertia weight equation by using an attenuation strategy based on a concave function to avoid premature convergence of the PSO algorithm, and a Time-Varying Acceleration Coefficient (TVAC) strategy is adopted to balance the local and global search abilities. The simulation results show that the proposed approach optimizes the BP neural network better in terms of network output error, global search performance, and diagnosis accuracy.
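
    A minimal sketch of the two scheduling ingredients named above; the concave attenuation form and the TVAC endpoints are illustrative assumptions, not the paper's exact equations:

      import numpy as np

      def inertia(t, T, w_max=0.9, w_min=0.4):
          # Concave attenuation of the inertia weight: decays slowly at first,
          # then quickly, keeping exploration alive in the early iterations.
          return w_min + (w_max - w_min)*(1.0 - (t/T)**2)

      def tvac(t, T, c_start=2.5, c_end=0.5):
          # Time-varying acceleration coefficients: the cognitive term decays
          # while the social term grows, shifting exploration to exploitation.
          c1 = c_start + (c_end - c_start)*t/T
          c2 = c_end + (c_start - c_end)*t/T
          return c1, c2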

  12. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Mccullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-01-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.

  13. Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri

    2018-07-01

    This paper deals with the implementation of high order semi-Lagrangian particle methods to handle high dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high dimensional kinetic equations by particle methods.

  14. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

    Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.

  15. Study of Solid Particle Behavior in High Temperature Gas Flows

    NASA Astrophysics Data System (ADS)

    Majid, A.; Bauder, U.; Stindl, T.; Fertig, M.; Herdrich, G.; Röser, H.-P.

    2009-01-01

    The Eulerian-Lagrangian approach is used for the simulation of solid particles in hypersonic entry flows. For flow field simulation, the program SINA (Sequential Iterative Non-equilibrium Algorithm), developed at the Institut für Raumfahrtsysteme, is used. The model for the effect of the carrier gas on a particle includes drag force and particle heating only. Other effects, such as the lift (Magnus) force or damping torque, are not taken into account so far. The reverse effect of the particle phase on the gaseous phase is currently neglected. A parametric analysis is done regarding the impact of variations in the physical input conditions, such as the position, velocity, size and material of the particle. Convective heat fluxes onto the surface of the particle and its radiative cooling are discussed. The variation of particle temperature under different conditions is presented. The influence of various input conditions on the trajectory is explained. A semi-empirical model for the particle-wall interaction is also discussed, and the influence of the wall on the particle trajectory under different particle conditions is presented. The heat fluxes onto the wall due to the impingement of particles are also computed and compared with the heat fluxes from the gas.

  16. Towards guided data assimilation for operational hydrologic forecasting in the US Tennessee River basin

    NASA Astrophysics Data System (ADS)

    Weerts, A.; Wood, A. W.; Clark, M. P.; Carney, S.; Day, G. N.; Lemans, M.; Sumihar, J.; Newman, A. J.

    2014-12-01

    In the US, the forecasting approach used by the NWS River Forecast Centers and other regional organizations such as the Bonneville Power Administration (BPA) or Tennessee Valley Authority (TVA) has traditionally involved manual model input and state modifications made by forecasters in real time. This process is time consuming and requires expert knowledge and experience. The benefits of automated data assimilation (DA) as a strategy for avoiding manual modification approaches have been demonstrated in research studies (e.g., Seo et al., 2009). This study explores the use of various ensemble DA algorithms within the operational platform used by TVA. The final goal is to identify a DA algorithm that will guide the manual modification process used by TVA forecasters and realize considerable time gains within the forecast process, without loss of quality or even with enhanced quality. We evaluate the usability of various popular DA algorithms that have been applied on a limited basis in operational hydrology. To this end, Delft-FEWS was wrapped (via piwebservice) in OpenDA to enable execution of FEWS workflows (and the chained models within these workflows, including SACSMA, UNITHG and LAGK) in a DA framework. Within OpenDA, several filter methods are available. We considered four algorithms: the particle filter (RRF), the Ensemble Kalman Filter (EnKF), and the asynchronous variants of both (AEnKF and asynchronous particle filter). Retrospective simulation results for one location and algorithm (AEnKF) are illustrated in Figure 1. The initial results are promising. We will present verification results for these methods (and possibly more) for a variety of sub-basins in the Tennessee River basin. Finally, we will offer recommendations for guided DA based on our results. References: Seo, D.-J., L. Cajina, R. Corby and T. Howieson, 2009: Automatic State Updating for Operational Streamflow Forecasting via Variational Data Assimilation, Journal of Hydrology, 367, 255-275. Figure 1. Retrospectively simulated streamflow for the headwater basin above Powell River at Jonesville (red: observed flow; blue: simulated flow without DA; black: simulated flow with DA).
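
    For orientation, the EnKF analysis step used in such experiments has a compact textbook form. The sketch below is a generic stochastic EnKF update with a linear observation operator, not the OpenDA implementation:

      import numpy as np

      def enkf_update(X, y, H, R, rng):
          # Stochastic EnKF analysis: X is (n_state, n_ens), y the observation
          # vector, H the linear observation operator, R the obs-error covariance.
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)          # state anomalies
          HX = H @ X
          HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
          K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1)*R)
          # Perturbed observations, one per ensemble member.
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
          return X + K @ (Y - HX)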

  17. Simulation of enhanced deposition due to magnetic field alignment of ellipsoidal particles in a lung bifurcation.

    PubMed

    Martinez, R C; Roshchenko, A; Minev, P; Finlay, W H

    2013-02-01

    Aerosolized chemotherapy has been recognized as a potential treatment for lung cancer. The challenge of providing sufficient therapeutic effects without reaching dose-limiting toxicity levels hinders the development of aerosolized chemotherapy. This could be mitigated by increasing drug-delivery efficiency with a noninvasive drug-targeting delivery method. The purpose of this study is to use direct numerical simulations to study the resulting local enhancement of deposition due to magnetic field alignment of high aspect ratio particles. High aspect ratio particles were approximated by a rigid ellipsoid with a minor diameter of 0.5 μm and fluid particle density ratio of 1,000. Particle trajectories were calculated by solving the coupled fluid particle equations using an in-house micro-macro grid finite element algorithm based on a previously developed fictitious domain approach. Particle trajectories were simulated in a morphologically realistic geometry modeling a symmetrical terminal bronchiole bifurcation. Flow conditions were steady inspiratory air flow due to typical breathing at 18 L/min. Deposition efficiency was estimated for two different cases: [1] particles aligned with the streamlines and [2] particles with fixed angular orientation simulating the magnetic field alignment of our previous in vitro study. The local enhancement factor defined as the ratio between deposition efficiency of Case [1] and Case [2] was found to be 1.43 and 3.46 for particles with an aspect ratio of 6 and 20, respectively. Results indicate that externally forcing local alignment of high aspect ratio particles can increase local deposition considerably.

  18. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches out-perform it in terms of both solution quality and execution time.

  19. A cellular automaton method to simulate the microstructure and evolution of low-enriched uranium (LEU) U-Mo/Al dispersion type fuel plates

    NASA Astrophysics Data System (ADS)

    Drera, Saleem S.; Hofman, Gerard L.; Kee, Robert J.; King, Jeffrey C.

    2014-10-01

    Low-enriched uranium (LEU) fuel plates for high power materials test reactors (MTR) are composed of nominally spherical uranium-molybdenum (U-Mo) particles within an aluminum matrix. Fresh U-Mo particles typically range between 10 and 100 μm in diameter, with particle volume fractions up to 50%. As the fuel ages, reaction-diffusion processes cause the formation and growth of interaction layers that surround the fuel particles. The growth rate depends upon the temperature and radiation environment. The cellular automaton algorithm described in this paper can synthesize realistic random fuel-particle structures and simulate the growth of the intermetallic interaction layers. Examples in the present paper pack approximately 1000 particles into three-dimensional rectangular fuel structures that are approximately 1 mm on each side. The computational approach is designed to yield synthetic microstructures consistent with images from actual fuel plates and is validated by comparison with empirical data on actual fuel plates.

  20. Fully kinetic particle simulations of high pressure streamer propagation

    NASA Astrophysics Data System (ADS)

    Rose, David; Welch, Dale; Thoma, Carsten; Clark, Robert

    2012-10-01

    Streamer and leader formation in high pressure devices is a dynamic process involving a hierarchy of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. We have performed 2D and 3D fully electromagnetic implicit particle-in-cell simulations of gas breakdown leading to streamer formation under DC and RF fields. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm [D. R. Welch, et al., J. Comp. Phys. 227, 143 (2007)] that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge. These models are being applied to the analysis of high-pressure gas switches [D. V. Rose, et al., Phys. Plasmas 18, 093501 (2011)] and gas-filled RF accelerator cavities [D. V. Rose, et al., Proc. IPAC12, to appear].

  1. Exponentially more precise quantum simulation of fermions in the configuration interaction representation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; Berry, Dominic W.; Sanders, Yuval R.; Kivlichan, Ian D.; Scherer, Artur; Wei, Annie Y.; Love, Peter J.; Aspuru-Guzik, Alán

    2018-01-01

    We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require Õ(N) qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires Õ(η) qubits, where η ≪ N is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as Õ(η²N³t).

  2. Research on illumination uniformity of high-power LED array light source

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei

    2018-06-01

    Uniform illumination is one of the most important problems that must be solved in applications of high-power LED arrays. A numerical optimization algorithm is applied to obtain the best LED array layout so that the light intensity on the target surface is evenly distributed. An evaluation function is set up from the standard deviation of the illuminance function, and the particle swarm optimization algorithm is then used to optimize different arrays. Furthermore, the light intensity distribution is obtained by an optical ray tracing method. Finally, a hybrid array is designed and the optical ray tracing method is applied to simulate the array. The simulation results, which are consistent with traditional theoretical calculations, show that the algorithm introduced in this paper is reasonable and effective.
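
    A sketch of such an evaluation function, assuming Lambertian LEDs of order m mounted on a plane parallel to the target surface at height z (all parameters invented); a PSO run would then minimize this value over the flattened LED coordinates:

      import numpy as np

      def nonuniformity(leds, points, z, I0=1.0, m=1.0):
          # Normalized standard deviation of illuminance on the target plane.
          # leds and points are (k, 2) arrays of x-y positions.
          E = np.zeros(len(points))
          for s in leds:
              r2 = ((points - s)**2).sum(axis=1) + z*z
              cos = z/np.sqrt(r2)
              E += I0*cos**m * cos/r2      # cos^m emission, cos incidence, 1/r^2
          return E.std()/E.mean()          # evaluation function to minimize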

  3. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    The advances in computational speed make it now possible to do full 3D PIC simulations of laser-plasma and beam-plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches like object-oriented programming to the development of simulation codes. We report here on our progress in developing an object-oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in Cartesian coordinates and for 2D cylindrically-symmetric geometry. For all of these algorithms the code allows for a moving simulation window and arbitrary domain decomposition for any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.

  4. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    PubMed Central

    Huang, Shuqiang; Tao, Ming

    2017-01-01

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer (CSO) algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the addition of an adaptive opposition-based search and dynamic parameter adjustment, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality-of-service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms. PMID:28117735
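
    A minimal sketch of the pairwise competition update at the heart of CSO (following the Cheng-Jin formulation; the social factor phi and the in-place updates are illustrative choices):

      import numpy as np

      rng = np.random.default_rng(0)

      def cso_step(x, v, f, phi=0.1):
          # One competitive swarm iteration: random pairing; the loser of each
          # pair learns from the winner and from the swarm mean, so no single
          # gbest attractor exists and premature convergence is discouraged.
          n, d = x.shape
          order = rng.permutation(n)
          xbar = x.mean(axis=0)
          for i, j in zip(order[0::2], order[1::2]):
              win, lose = (i, j) if f(x[i]) <= f(x[j]) else (j, i)
              r1, r2, r3 = rng.random((3, d))
              v[lose] = r1*v[lose] + r2*(x[win] - x[lose]) + r3*phi*(xbar - x[lose])
              x[lose] += v[lose]
          return x, v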

  5. Fast Multipole Methods for Three-Dimensional N-body Problems

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.

    1995-01-01

    We are developing computational tools for the simulations of three-dimensional flows past bodies undergoing arbitrary motions. High resolution viscous vortex methods have been developed that allow for extended simulations of two-dimensional configurations such as vortex generators. Our objective is to extend this methodology to three dimensions and develop a robust computational scheme for the simulation of such flows. A fundamental issue in the use of vortex methods is the ability to employ efficiently large numbers of computational elements to resolve the large range of scales that exist in complex flows. The traditional cost of the method scales as O(N²), as the N computational elements/particles induce velocities on each other, making the method unacceptable for simulations involving more than a few tens of thousands of particles. In the last decade fast methods have been developed that have operation counts of O(N log N) or O(N) (referred to as BH and GR respectively), depending on the details of the algorithm. These methods are based on the observation that the effect of a cluster of particles at a certain distance may be approximated by a finite series expansion. In order to exploit this observation we need to decompose the element population spatially into clusters of particles and build a hierarchy of clusters (a tree data structure) - smaller neighboring clusters combine to form a cluster of the next size up in the hierarchy and so on. This hierarchy of clusters allows one to determine efficiently when the approximation is valid. This algorithm is an N-body solver that appears in many fields of engineering and science. Some examples of its diverse use are in astrophysics, molecular dynamics, micro-magnetics, boundary element simulations of electromagnetic problems, and computer animation. More recently these N-body solvers have been implemented and applied in simulations involving vortex methods. Koumoutsakos and Leonard (1995) implemented the GR scheme in two dimensions for vector computer architectures, allowing for simulations of bluff body flows using millions of particles. Winckelmans presented three-dimensional, viscous simulations of interacting vortex rings, using vortons and an implementation of a BH scheme for parallel computer architectures. Bhatt presented a vortex filament method to perform inviscid vortex ring interactions, with an alternative implementation of a BH scheme for a Connection Machine parallel computer architecture.
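
    The accuracy of such cluster approximations is governed by the opening angle (cluster size over distance). The sketch below demonstrates the effect with the crudest approximation, a monopole (center-of-mass) substitute for an inverse-square sum; it illustrates the acceptance criterion, not a full tree code:

      import numpy as np

      rng = np.random.default_rng(0)

      def direct(p, pts):
          # Exact O(N) sum of inverse-square contributions from a cluster.
          d = pts - p
          r = np.linalg.norm(d, axis=1)
          return (d/r[:, None]**3).sum(axis=0)

      def monopole(p, pts):
          # O(1) approximation: collapse the cluster to its center of mass.
          d = pts.mean(axis=0) - p
          return len(pts)*d/np.linalg.norm(d)**3

      pts = rng.random((1000, 2))          # unit-size cluster of unit "charges"
      for dist in (2.0, 5.0, 20.0):        # opening angle roughly size/dist
          p = np.array([-dist, 0.5])
          err = np.linalg.norm(direct(p, pts) - monopole(p, pts))
          print(f"size/dist = {1.0/dist:.2f}: abs error {err:.2e}")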

  6. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
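
    For context, the uniform-grid PS update that the AMR work aims to generalize can be written in a few lines. A sketch of one Strang-split step of the modified diffusion equation dq/ds = lap(q) - w(r) q, with invented grid parameters:

      import numpy as np

      def ps_step(q, w_field, ds, k2):
          # Half step of the field term, full diffusion step in k-space,
          # second half step of the field term (standard operator splitting).
          q = np.exp(-0.5*ds*w_field)*q
          q = np.fft.ifftn(np.exp(-ds*k2)*np.fft.fftn(q)).real
          return np.exp(-0.5*ds*w_field)*q

      n, L, ds = 64, 10.0, 0.01                        # invented grid and step
      k = 2.0*np.pi*np.fft.fftfreq(n, d=L/n)
      kx, ky = np.meshgrid(k, k, indexing="ij")
      k2 = kx**2 + ky**2                               # Laplacian symbol on the grid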

  7. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. The PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.

  8. OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George

    We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented in silico experiment - simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton modeled by 4 million mesoscopic particles - on a single shared memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.

  9. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loops as far as possible. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which executes the massively threaded data-parallel tasks to obtain a remarkable speedup (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach can attain a very favorable improvement in cost without sacrificing computational accuracy.
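
    The single-loop acceptance-rejection idea can be sketched compactly: pairs are proposed uniformly and accepted with probability K/K_maj, where K_maj bounds the kernel from above so no double loop over all pairs is needed. The kernel form and volume range below are invented stand-ins:

      import numpy as np

      rng = np.random.default_rng(0)

      # Free-molecule-type coagulation kernel (illustrative form only).
      K = lambda a, b: (a**(1/3) + b**(1/3))**2*np.sqrt(1.0/a + 1.0/b)

      v = rng.uniform(0.5, 1.5, size=1000)                    # invented particle volumes
      K_maj = (2.0*v.max()**(1/3))**2*np.sqrt(2.0/v.min())    # majorant: K(a, b) <= K_maj

      def pick_pair():
          # Acceptance-rejection against the majorant rate.
          while True:
              i, j = rng.integers(len(v), size=2)
              if i != j and rng.random() < K(v[i], v[j])/K_maj:
                  return i, j       # coagulate: e.g. v[i] += v[j], then remove j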

  10. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    PubMed Central

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.

    2016-01-01

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design. PMID:27109208

  11. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. Furthermore, we demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.

  12. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    DOE PAGES

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; ...

    2016-04-25

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. Furthermore, we demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.

  13. Solar wind interaction with Venus and Mars in a parallel hybrid code

    NASA Astrophysics Data System (ADS)

    Jarvinen, Riku; Sandroos, Arto

    2013-04-01

    We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. Especially, the model has been used to interpret in situ particle and magnetic field observations from the plasma environments of Mars, Venus and Titan. Further, Corsair is an open source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed at simulations of diffusive shock acceleration in the solar corona and interplanetary space, but which is now also being extended for global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.

  14. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints.

    PubMed

    Fiore, Andrew M; Swan, James W

    2018-01-28

    Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material.

  15. Siting and sizing of distributed generators based on improved simulated annealing particle swarm optimization.

    PubMed

    Su, Hongsheng

    2017-12-18

    Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in the site selection and capacity determination of DGs, such as slow convergence and a tendency to fall into local optima. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, so that the algorithm combines strong global search with effective local exploration. In addition, the diverse types of DGs are made equivalent to four types of nodes in the power-flow calculation by the backward/forward sweep method, and reactive power sharing principles and allocation theory are applied to determine the initial reactive power value and execute subsequent corrections, giving the algorithm a better starting point and speeding up convergence. Finally, a mathematical model of minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through applications in an IEEE 33-node distribution system, it is found that the proposed method achieves better economic efficiency and a safer voltage level than traditional PSO and SA-PSO algorithms, and is a more effective planning method for the siting and sizing of DGs in distributed power grids.
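    A compact sketch of one such hybrid update (illustrative only: the paper's exact crossover/mutation operators, cooling schedule, and parameter values are assumptions here). It layers a GA-style mutation and a Metropolis acceptance test at temperature T on top of the standard PSO velocity update:

        import numpy as np

        def isa_pso_step(x, v, pbest, gbest, f_x, f, T, rng,
                         w=0.7, c1=1.5, c2=1.5, p_mut=0.1, sigma=0.1):
            # x, v: (n, d) positions and velocities; f maps (n, d) -> (n,).
            n, d = x.shape
            r1, r2 = rng.random((n, d)), rng.random((n, d))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x_new = x + v
            # GA-style Gaussian mutation on a random subset of particles.
            mut = rng.random(n) < p_mut
            x_new[mut] += sigma * rng.standard_normal((mut.sum(), d))
            # Simulated-annealing acceptance: improvements always pass
            # (delta = 0 gives exp(0) = 1); uphill moves pass with
            # probability exp(-delta / T).
            f_new = f(x_new)
            delta = np.maximum(f_new - f_x, 0.0)
            accept = rng.random(n) < np.exp(-delta / T)
            x = np.where(accept[:, None], x_new, x)
            f_x = np.where(accept, f_new, f_x)
            return x, v, f_x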

  16. A stochastic framework for spot-scanning particle therapy.

    PubMed

    Robini, Marc; Zhu, Yuemin; Liu, Wanyu; Magnin, Isabelle

    2016-08-01

    In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons to spot-scanning particle therapy.
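    A minimal simulated-annealing skeleton illustrating the trade-off the authors address (a generic sketch, not their exploration mechanism): tying the proposal width to the temperature keeps mixing fast when T is high and the acceptance rate reasonable when T is low.

        import numpy as np

        def anneal(f, x0, steps=10_000, T0=1.0, alpha=0.999, rng=None):
            # Metropolis acceptance with geometric cooling; the proposal
            # width sqrt(T) shrinks together with the temperature.
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x0, dtype=float)
            fx, T = f(x), T0
            for _ in range(steps):
                cand = x + np.sqrt(T) * rng.standard_normal(x.shape)
                fc = f(cand)
                if fc <= fx or rng.random() < np.exp(-(fc - fx) / T):
                    x, fx = cand, fc
                T *= alpha
            return x, fx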

  17. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    PubMed Central

    2017-01-01

    The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of “premature convergence,” that is, the possibility of trapping in a local optimum when dealing with nonlinear optimization problems with a large number of local extreme values. To overcome the shortcomings of CSO, the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. First, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO; however, since it still tends to fall into local optima in later stages, CQCSO further introduces a tent map for jumping out of local optima. Second, CQCSO was applied to five different test functions, showing higher accuracy and lower time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and an experimental platform were established, and a global maximum power point tracking control strategy was implemented with the CQCSO algorithm; its effectiveness and efficiency have been verified by both simulation and experiment. PMID:29181020

  18. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.

    PubMed

    Nie, Xiaohua; Wang, Wei; Nie, Haoyao

    2017-01-01

    The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence," that is, the possibility of trapping in a local optimum when dealing with nonlinear optimization problems with a large number of local extreme values. To overcome the shortcomings of CSO, the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. First, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO; however, since it still tends to fall into local optima in later stages, CQCSO further introduces a tent map for jumping out of local optima. Second, CQCSO was applied to five different test functions, showing higher accuracy and lower time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and an experimental platform were established, and a global maximum power point tracking control strategy was implemented with the CQCSO algorithm; its effectiveness and efficiency have been verified by both simulation and experiment.
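    The tent map at the heart of the chaotic perturbation is a one-line recurrence; below is a hedged sketch of how it can drive candidate solutions around the current best (the paper's exact perturbation rule is not reproduced here, so the radius and seed are illustrative):

        import numpy as np

        def tent_map(x):
            # Chaotic map on [0, 1]; its dense, non-repeating orbits let the
            # search probe the neighborhood of a stagnated best solution.
            return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

        def chaotic_candidates(best, radius, x0=0.37, n=20):
            # best: current best position (scalar or ndarray); each chaotic
            # value is mapped to [-radius, +radius] and added to it.
            # (Floating-point tent maps degenerate for long orbits; keep n
            # modest or inject tiny noise in production use.)
            out, x = [], x0
            for _ in range(n):
                x = tent_map(x)
                out.append(best + radius * (2.0 * x - 1.0))
            return np.array(out)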

  19. Partnership for Edge Physics (EPSI), University of Texas Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moser, Robert; Carey, Varis; Michoski, Craig

    Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions against experimental observations, and when using the predictions for the design and operation of devices. However, high-fidelity simulations of tokamak plasmas, particularly those aimed at characterizing the edge plasma physics, are computationally expensive, so lower-cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the cost of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles in a simulation, and therefore its cost, can be reduced while maintaining the same accuracy.
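    One standard building block for such resampling is shown below (a sketch; the XGC scheme additionally preserves velocity-space moments, which this bare version does not):

        import numpy as np

        def systematic_resample(weights, m, rng):
            # Draw m new marker indices from n weighted markers with a
            # single uniform offset (low-variance "systematic" scheme);
            # weights must be normalized to sum to 1.
            c = np.cumsum(weights)
            c[-1] = 1.0                      # guard against rounding
            positions = (rng.random() + np.arange(m)) / m
            return np.searchsorted(c, positions)

    Choosing m < n shrinks the marker population where the weighted distribution is already well resolved, which is the kind of cost saving the report describes.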

  20. Extended Magnetohydrodynamics with Embedded Particle-in-Cell Simulation of Ganymede's Magnetosphere

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; Jia, Xianzhe; Markidis, Stefano; Peng, Ivy Bo; Chen, Yuxi; Daldorff, Lars K. S.; Tenishev, Valeriy M.; Borovikov, Dmitry; Haiducek, John D.; Gombosi, Tamas I.

    2016-01-01

    We have recently developed a new modeling capability to embed the implicit particle-in-cell (PIC) model iPIC3D into the Block-Adaptive-Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) magnetohydrodynamic (MHD) model. The MHD with embedded PIC domains (MHD-EPIC) algorithm is a two-way coupled kinetic-fluid model. As one of the very first applications of the MHD-EPIC algorithm, we simulate the interaction between Jupiter's magnetospheric plasma and Ganymede's magnetosphere. We compare the MHD-EPIC simulations with pure Hall MHD simulations and compare both model results with Galileo observations to assess the importance of kinetic effects in controlling the configuration and dynamics of Ganymede's magnetosphere. We find that the Hall MHD and MHD-EPIC solutions are qualitatively similar, but there are significant quantitative differences. In particular, the density and pressure inside the magnetosphere show different distributions. For our baseline grid resolution the PIC solution is more dynamic than the Hall MHD simulation, and it compares significantly better with the Galileo magnetic measurements than the Hall MHD solution. The power spectra of the observed and simulated magnetic field fluctuations agree extremely well for the MHD-EPIC model. The MHD-EPIC simulation also produced a few flux transfer events (FTEs) that have magnetic signatures very similar to an observed event. The simulation shows that the FTEs often exhibit complex 3-D structures with their orientations changing substantially between the equatorial plane and the Galileo trajectory, which explains the magnetic signatures observed during the magnetopause crossings. The computational cost of the MHD-EPIC simulation was only about 4 times that of the Hall MHD simulation.

  1. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.

    2004-02-01

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a tripled Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of <1 μm the real part of the complex refractive index was retrieved to an accuracy of ±0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.
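    The regularized inversion step itself is compact; below is a minimal dense sketch with an identity smoothing operator (the paper's discretization, kernel, and choice of regularization parameter are more elaborate):

        import numpy as np

        def tikhonov_invert(K, g, lam):
            # Solve min_f ||K f - g||^2 + lam * ||f||^2, i.e.
            # (K^T K + lam * I) f = K^T g, where K maps the discretized
            # size distribution f to the optical data g (three backscatter
            # and two extinction coefficients).
            n = K.shape[1]
            return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

    In practice the regularization parameter lam is selected by a discrepancy or minimization criterion, and physical constraints such as non-negativity of f are imposed on the solution.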

  2. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution.

    PubMed

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N

    2004-02-10

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a tripled Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of <1 μm the real part of the complex refractive index was retrieved to an accuracy of ±0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.

  3. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    PubMed Central

    Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing

    2017-01-01

    Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed for the global optimization of complex functions in this paper. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to validate DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is markedly superior to the other algorithms. The proposed algorithm is also applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers achieve higher accuracy and better stability to some extent. PMID:29085425

  4. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    NASA Astrophysics Data System (ADS)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
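    The core of MLMC is the telescoping identity E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], with most samples drawn on the cheap coarse levels. A minimal sketch follows (assuming a user-supplied sampler that returns coupled fine/coarse estimates driven by the same random increments; the coupling is what makes the level differences small):

        import numpy as np

        def mlmc_estimate(sampler, n_per_level):
            # sampler(level, n) -> (fine, coarse): arrays of n coupled
            # payoff samples on levels l and l-1 (coarse == 0 at level 0).
            est = 0.0
            for level, n in enumerate(n_per_level):
                fine, coarse = sampler(level, n)
                est += np.mean(fine - coarse)
            return est

    The variance of each correction term E[P_l − P_{l−1}] decays with level, so few samples are needed on the expensive fine grids; this is the source of the asymptotic cost advantage over standard Monte Carlo.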

  5. A measurement of material in the ATLAS tracker using secondary hadronic interactions in 7 TeV pp collisions

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2016-11-30

    Knowledge of the material in the ATLAS inner tracking detector is crucial to understanding the reconstruction of charged-particle tracks and the performance of algorithms that identify jets containing b-hadrons, and is also essential for reducing background in searches for exotic particles that can decay within the inner detector volume. Interactions of primary hadrons produced in pp collisions with the material in the inner detector are used to map the location and amount of this material. The hadronic interactions of primary particles may result in secondary vertices, which in this analysis are reconstructed by an inclusive vertex-finding algorithm. Data were collected using minimum-bias triggers by the ATLAS detector operating at the LHC during 2010 at centre-of-mass energy √s = 7 TeV, and correspond to an integrated luminosity of 19 nb⁻¹. Kinematic properties of these secondary vertices are used to study the validity of the modelling of hadronic interactions in simulation. Finally, secondary-vertex yields are compared between data and simulation over a volume of about 0.7 m³ around the interaction point, and agreement is found within overall uncertainties.

  6. A measurement of material in the ATLAS tracker using secondary hadronic interactions in 7 TeV pp collisions

    NASA Astrophysics Data System (ADS)

    Aaboud, M.; Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Abeloos, B.; Aben, R.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Ali, B.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alstaty, M.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antel, C.; Antonelli, M.; Antonov, A.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisits, M.-S.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barranco Navarro, L.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez, J.; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethani, A.; Bethke, S.; Bevan, A. J.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Bielski, R.; Biesuz, N. V.; Biglietti, M.; Bilbao De Mendizabal, J.; Billoud, T. R. 
V.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bisanz, T.; Bjergaard, D. M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bokan, P.; Bold, T.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Bossio Sola, J. D.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Bruni, L. S.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Burr, J. T. P.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabrera Urbán, S.; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Callea, G.; Caloba, L. P.; Calvente Lopez, S.; Calvet, D.; Calvet, S.; Calvet, T. P.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castaneda-Miranda, E.; Castelijn, R.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerda Alberich, L.; Cerio, B. C.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chatterjee, A.; Chau, C. C.; Chavez Barajas, C. A.; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. 
K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocca, C.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Colasurdo, L.; Cole, B.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cormier, K. J. R.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cueto, A.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cúth, J.; Czirr, H.; Czodrowski, P.; D'amen, G.; D'Auria, S.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dado, T.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, M.; Davison, P.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Maria, A.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Dehghanian, N.; Deigaard, I.; Del Gaudio, M.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dolejsi, J.; Dolezal, Z.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudder, A. Chr.; Duffield, E. M.; Duflot, L.; Dührssen, M.; Dumancic, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Edwards, N. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Ennis, J. S.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, F.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. 
J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farina, E. M.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Fernandez Perez, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gascon Bravo, A.; Gasnikova, K.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gauthier, L.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisen, M.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; Gentsos, C.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, G.; Gonella, L.; Gongadze, A.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gravila, P. M.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Grohs, J. P.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, R.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Gutierrez Ortiz, N. 
G.; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Hadef, A.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanisch, S.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hartmann, N. M.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayakawa, D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Hellman, S.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Herget, V.; Hernández Jiménez, Y.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, Q.; Hu, S.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Huo, P.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Ishijima, N.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ito, F.; Ponce, J. M. Iturbe; Iuppa, R.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, P.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Jivan, H.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Juste Rozas, A.; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kaji, T.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kanjir, L.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. 
D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. J.; Kawade, K.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khader, M.; Khalil-zada, F.; Khanov, A.; Kharlamov, A. G.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kilby, C. R.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koehler, N. M.; Koffas, T.; Koffeman, E.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozakai, C.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kravchenko, A.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lanfermann, M. C.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Le, B.; Le Dortz, O.; Le Guirriec, E.; Le Quilleuc, E. P.; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, B.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Lerner, G.; Leroy, C.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, D.; Leyko, A. M.; Leyton, M.; Li, B.; Li, C.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limosani, A.; Lin, S. C.; Lin, T. H.; Lindquist, B. E.; Lionti, A. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lister, A.; Litke, A. 
M.; Liu, B.; Liu, D.; Liu, H.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lopez Solis, A.; Lorenz, J.; Martinez, N. Lorenzo; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Luzi, P. M.; Lynn, D.; Lysak, R.; Lytken, E.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Machado Miguens, J.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Maneira, J.; Filho, L. Manhaes de Andrade; Manjarres Ramos, J.; Mann, A.; Manousos, A.; Mansoulie, B.; Mansour, J. D.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchiori, G.; Marcisovsky, M.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; dit Latour, B. Martin; Martinez, M.; Martinez Outschoorn, V. I.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; McFadden, N. C.; McGoldrick, G.; McKee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McDonald, E. F.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melini, D.; Mellado Garcia, B. R.; Melo, M.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Theenhausen, H. Meyer Zu; Miano, F.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. 
W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murillo Quijada, J. A.; Murray, W. J.; Musheghyan, H.; Muškinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nguyen Manh, T.; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'grady, F.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Oleiro Seabra, L. F.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Garzon, G. Otero y.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Pacheco Rodriguez, L.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palazzo, S.; Palestini, S.; Palka, M.; Pallin, D.; Panagiotopoulou, E. St.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasqualucci, E.; Passaggio, S.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Perez Codina, E.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. 
J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pozo Astigarraga, M. E.; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Price, L. E.; Primavera, M.; Prince, S.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puddu, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Ratti, M. G.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Ravinovich, I.; Raymond, M.; Read, A. L.; Readioff, N. P.; Reale, M.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rimoldi, M.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodina, Y.; Rodriguez Perez, A.; Rodriguez Rodriguez, D.; Roe, S.; Rogan, C. S.; RØhne, O.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosenthal, O.; Rosien, N.-A.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Salazar Loyola, J. E.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Savic, N.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schachtner, B. M.; Schaefer, D.; Schaefer, L.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schier, S.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt-Sommerfeld, K. R.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schneider, B.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schott, M.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schulte, A.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwarz, T. 
A.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sickles, A. M.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sivoklokov, S. Yu.; Sjölin, J.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smiesko, J.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Solans Sanchez, C. A.; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Song, H. Y.; Sood, A.; Sopczak, A.; Sopko, V.; Sorin, V.; Sosa, D.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; St. Denis, R. D.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tan, K. G.; Tanaka, J.; Tanaka, M.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Tibbetts, M. 
J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tu, Y.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turgeman, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tyndel, M.; Ucchielli, G.; Ueda, I.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Valdes Santurio, E.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veeraraghavan, V.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, W.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, M. D.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A.; White, M. J.; White, R.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wolf, T. M. H.; Wolter, M. W.; Wolters, H.; Worm, S. D.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. 
L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zwalinski, L.

    2016-11-01

    Knowledge of the material in the ATLAS inner tracking detector is crucial to understanding the reconstruction of charged-particle tracks and the performance of algorithms that identify jets containing b-hadrons, and is also essential for reducing background in searches for exotic particles that can decay within the inner detector volume. Interactions of primary hadrons produced in pp collisions with the material in the inner detector are used to map the location and amount of this material. The hadronic interactions of primary particles may result in secondary vertices, which in this analysis are reconstructed by an inclusive vertex-finding algorithm. Data were collected using minimum-bias triggers by the ATLAS detector operating at the LHC during 2010 at a centre-of-mass energy of √s = 7 TeV, and correspond to an integrated luminosity of 19 nb⁻¹. Kinematic properties of these secondary vertices are used to study the validity of the modelling of hadronic interactions in simulation. Secondary-vertex yields are compared between data and simulation over a volume of about 0.7 m³ around the interaction point, and agreement is found within overall uncertainties.

  7. PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads

    PubMed Central

    Cheng, Wen-Chang

    2012-01-01

    In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further search for the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the way fish schools or insect swarms find food through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively than existing options. PMID:23235453
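
    A minimal sketch in Python of the PSO refinement step described above, assuming the particle filter's posterior samples seed the swarm; the lane model, fitness function, and all constants here are illustrative assumptions, not the authors' implementation:

        import numpy as np

        def pso_refine(states, fitness, iters=50, w=0.7, c1=1.5, c2=1.5):
            """Refine particle-filter samples toward the global optimum.

            states: (n, dim) array of lane-model parameter samples.
            fitness: callable mapping one state to a score to maximize.
            """
            rng = np.random.default_rng(0)
            x = states.copy()                    # swarm positions
            v = np.zeros_like(x)                 # swarm velocities
            pbest = x.copy()                     # personal bests
            pbest_f = np.array([fitness(s) for s in x])
            gbest = pbest[pbest_f.argmax()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                f = np.array([fitness(s) for s in x])
                better = f > pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmax()].copy()
            return gbest                         # global optimal system status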

  8. [Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin

    2008-09-01

    In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. Computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and to use this selection method of the optimal wavelengths in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after the analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimization algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.

  9. Brownian Dynamics simulations of model colloids in channel geometries and external fields

    NASA Astrophysics Data System (ADS)

    Siems, Ullrich; Nielaba, Peter

    2018-04-01

    We review the results of Brownian Dynamics simulations of colloidal particles in external fields confined in channels. Super-paramagnetic Brownian particles are well-suited two-dimensional model systems for a variety of problems on different length scales, ranging from pedestrians walking through a bottleneck to ions passing through ion channels in living cells. In such systems, confinement into channels can have a great influence on the diffusion and transport properties. In particular, we discuss the crossover from single-file diffusion in a narrow channel to diffusion in the extended two-dimensional system. To this end, a new algorithm for computing the mean square displacement (MSD) on logarithmic time scales is presented. In a different study, interacting colloidal particles were dragged over a washboard potential while additionally confined in a two-dimensional micro-channel. In this system, kink and anti-kink solitons determine the depinning process of the particles from the periodic potential.
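
    A minimal sketch, in Python, of computing the mean square displacement on logarithmically spaced lag times, the quantity needed to resolve the crossover between single-file and two-dimensional diffusion; this is a generic implementation, not the specific algorithm presented in the paper:

        import numpy as np

        def msd_log(traj, n_lags=30):
            """traj: (n_steps, n_particles, 2) array of particle positions."""
            n_steps = traj.shape[0]
            lags = np.unique(np.logspace(0, np.log10(n_steps - 1),
                                         n_lags).astype(int))
            msd = np.empty(len(lags))
            for i, lag in enumerate(lags):
                disp = traj[lag:] - traj[:-lag]   # displacements at this lag
                msd[i] = np.mean(np.sum(disp ** 2, axis=-1))
            return lags, msd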

  10. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

    Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU PIC, including a charge-conserving current deposition scheme with little branching, and parallel particle sorting. These algorithms have made efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.

  11. Perfectly matched layers in a divergence preserving ADI scheme for electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, C.; ETH Zurich, Chair of Computational Science, 8092 Zuerich; Adelmann, A., E-mail: andreas.adelmann@psi.ch

    For numerical simulations of highly relativistic and transversely accelerated charged particles including radiation, fast algorithms are needed. While the radiation in particle accelerators has wavelengths on the order of 100 μm, the computational domain has dimensions roughly five orders of magnitude larger, resulting in very large mesh sizes. The particles are confined to a small area of this domain only. To resolve the smallest scales close to the particles, subgrids are envisioned. For reasons of stability, the alternating direction implicit (ADI) scheme by Smithe et al. [D.N. Smithe, J.R. Cary, J.A. Carlsson, Divergence preservation in the ADI algorithms for electromagnetics, J. Comput. Phys. 228 (2009) 7289-7299] for the Maxwell equations has been adopted. At the boundary of the domain, absorbing boundary conditions have to be employed to prevent reflection of the radiation. In this paper we show how the divergence preserving ADI scheme has to be formulated in perfectly matched layers (PML) and compare the performance in several scenarios.

  12. On the use of reverse Brownian motion to accelerate hybrid simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu

    Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.

  13. Research on bulbous bow optimization based on the improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang

    2017-08-01

    In order to reduce the total resistance of a hull, an optimization framework for bulbous bow optimization was presented. The total resistance in calm water was selected as the objective function, and the overset mesh technique was used for mesh generation. The RANS method was used to calculate the total resistance of the hull. In order to improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique was introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm was proposed to set up the optimization model. After a series of optimization analyses, the optimal hull form was found. It can be concluded that the simulation-based design framework built in this paper is a promising method for bulbous bow optimization.

  14. RBF neural network based PI pitch controller for a class of 5-MW wind turbines using particle swarm optimization algorithm.

    PubMed

    Poultangari, Iman; Shahnazi, Reza; Sheikhan, Mansour

    2012-09-01

    In order to control the pitch angle of blades in wind turbines, the proportional-integral (PI) controller is commonly employed due to its simplicity and industrial usability. Neural networks and evolutionary algorithms are tools that provide a suitable ground for determining the optimal PI gains. In this paper, a radial basis function (RBF) neural network based PI controller is proposed for collective pitch control (CPC) of a 5-MW wind turbine. In order to provide an optimal dataset to train the RBF neural network, the particle swarm optimization (PSO) evolutionary algorithm is used. The proposed method does not require knowledge of the complexities, nonlinearities, and uncertainties of the system under control. The simulation results show that the proposed controller has satisfactory performance.

  15. Particle swarm optimization algorithm based parameters estimation and control of epileptiform spikes in a neural mass model

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan

    2016-07-01

    This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm shows capabilities of accurate estimation for the time evolution of key model parameters and practical detection for all the epileptic spikes. The estimation effects of unmeasurable parameters are improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting a proportional-integral controller. Furthermore, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.

  16. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.

    2006-09-01

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.

  17. Partial entropic stabilization of lattice Boltzmann magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-01-01

    The entropic lattice Boltzmann algorithm of Karlin et al. [Phys. Rev. E 90, 031302 (2014), 10.1103/PhysRevE.90.031302] is partially extended to magnetohydrodynamics, based on the Dellar model of introducing a vector distribution for the magnetic field. The entropic ansatz is applied only to the scalar particle distribution function, so as to permit treatment of the many problems entailing magnetic field reversal. A 9-bit lattice is employed for both the particle and magnetic distributions in our two-dimensional simulations. The entropic ansatz is benchmarked against our earlier multiple-relaxation lattice Boltzmann model for the Kelvin-Helmholtz instability in a magnetized jet. Other two-dimensional simulations are performed and compared to results determined by more standard direct algorithms: in particular the switch-over between the Kelvin-Helmholtz and tearing-mode instabilities of Chen et al. [J. Geophys. Res.: Space Phys. 102, 151 (1997), 10.1029/96JA03144], and the generalized Orszag-Tang vortex model of Biskamp-Welter [Phys. Fluids B 1, 1964 (1989), 10.1063/1.859060]. Very good results are achieved.

  18. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node positions, to move from their randomly set initial positions toward the original positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale.
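
    A minimal sketch, in Python, of one relaxation step of a spring-model localization in the spirit of LASM: each node moves along the net virtual-spring force from its neighbors, with measured inter-node distances acting as rest lengths. The data layout, step size, and force law are illustrative assumptions:

        import numpy as np

        def spring_step(pos, neighbors, measured_dist, step=0.1):
            """pos: {node: np.array([x, y])}; neighbors: {node: [nodes]};
            measured_dist: {(a, b): ranged distance between nodes a and b}."""
            new_pos = {}
            for node, p in pos.items():
                force = np.zeros(2)
                for nb in neighbors[node]:
                    d_vec = pos[nb] - p
                    d = np.linalg.norm(d_vec) + 1e-12
                    rest = measured_dist[(node, nb)]
                    force += (d - rest) * d_vec / d   # Hooke-like restoring force
                new_pos[node] = p + step * force      # O(1) work per node
            return new_pos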

  19. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

    The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas; explicit vs. implicit PIC; 1D electrostatic implicit PIC (charge and energy conservation, moment-based acceleration); and the generalization to multi-dimensional electromagnetic PIC with the Vlasov-Darwin model (review of and motivation for the Darwin model; conservation properties for energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing the number of function evaluations N_FE, leading to an optimal algorithm.

  20. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    PubMed Central

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the nest locations. In order to select a reasonable repeat-cycle disturbance number, a further study on the choice of the number of disturbance iterations is made. Finally, six typical test functions are adopted in simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
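
    A minimal sketch, in Python, of one generation of a basic cuckoo search with Lévy flights plus a simple disturbance near the nest locations, gesturing at the RC-SSCS idea; the Lévy exponent, step scale, abandonment probability, and disturbance radius are illustrative assumptions:

        import numpy as np
        from math import gamma, pi, sin

        rng = np.random.default_rng(1)

        def levy(dim, beta=1.5):
            # Mantegna's algorithm for Levy-stable step lengths
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta *
                      2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def cs_step(nests, f, pa=0.25, alpha=0.01, disturb=0.05):
            """One generation over a list of candidate solution vectors."""
            best = min(nests, key=f)
            for i, nest in enumerate(nests):
                trial = nest + alpha * levy(nest.size) * (nest - best)
                if f(trial) < f(nest):            # keep improving Levy moves
                    nests[i] = trial
                if rng.random() < pa:             # disturb abandoned nests
                    nests[i] = nests[i] + disturb * rng.normal(size=nest.size)
            return nests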

  1. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the oscillation problem of the intelligent vehicle navigation path, because the time-variant target behavior and the obstacle avoidance behavior occur simultaneously. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle driving characteristics, the distance between the intelligent vehicle and the obstacle, and the distance between the intelligent vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function can reduce the perturbations of the vehicle planning path and improve its real-time performance and reliability.

  2. Study on loading path optimization of internal high pressure forming process

    NASA Astrophysics Data System (ADS)

    Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng

    2017-09-01

    In the process of internal high pressure forming, there is no analytical formula relating the process parameters to the forming results. This article uses numerical simulation to obtain several input parameters and the corresponding output results, uses a BP neural network to fit their mapping relationship, and combines the evaluation parameters by a weighted summing method to set up a formula that evaluates forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality-evaluation formula serving as the fitness function, and the optimization is finally carried out over the range of each parameter. The results show that the parameters obtained by the BP neural network and the particle swarm optimization algorithm can meet the practical requirements. The method can solve the optimization of the process parameters in the internal high pressure forming process.

  3. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the oscillation problem of the intelligent vehicle navigation path, because the time-variant target behavior and the obstacle avoidance behavior occur simultaneously. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle driving characteristics, the distance between the intelligent vehicle and the obstacle, and the distance between the intelligent vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function can reduce the perturbations of the vehicle planning path and improve its real-time performance and reliability. PMID:26880881

  4. Directly reconstructing principal components of heterogeneous particles from cryo-EM images.

    PubMed

    Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali

    2015-08-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions, showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA-dependent RNA polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP.

  5. A Numerical Method for the Simulation of Skew Brownian Motion and its Application to Diffusive Shock Acceleration of Charged Particles

    NASA Astrophysics Data System (ADS)

    McEvoy, Erica L.

    Stochastic differential equations are becoming a popular tool for modeling the transport and acceleration of cosmic rays in the heliosphere. In diffusive shock acceleration, cosmic rays diffuse across a region of discontinuity where the upstream diffusion coefficient abruptly changes to the downstream value. Because the method of stochastic integration has not yet been developed to handle these types of discontinuities, I utilize methods and ideas from probability theory to develop a conceptual framework for the treatment of such discontinuities. Using this framework, I then produce some simple numerical algorithms that allow one to incorporate and simulate a variety of discontinuities (or boundary conditions) using stochastic integration. These algorithms were then modified to create a new algorithm which incorporates the discontinuous change in diffusion coefficient found in shock acceleration (known as skew Brownian motion). The originality of this algorithm lies in the fact that it is the first of its kind to be statistically exact, so that one obtains accuracy without the use of approximations (other than machine precision error). I then apply this algorithm to model the problem of diffusive shock acceleration, modifying it to incorporate the additional effect of the discontinuous flow speed profile found at the shock. A steady-state solution is obtained that accurately simulates this phenomenon. This result represents a significant improvement over previous approximation algorithms, and will be useful for the simulation of discontinuous diffusion processes in other fields, such as biology and finance.
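
    A minimal sketch, in Python, of a lattice-style approximation to skew Brownian motion with a diffusion coefficient that jumps at x = 0; the crossing probability p below reproduces the skewness parameter in the diffusive limit. This is an illustrative scheme only, not the statistically exact algorithm developed in the thesis:

        import numpy as np

        def skew_bm(x0, d_up, d_down, dt, n_steps, seed=2):
            """d_up, d_down: diffusion coefficients on either side of x = 0."""
            rng = np.random.default_rng(seed)
            p = np.sqrt(d_up) / (np.sqrt(d_up) + np.sqrt(d_down))
            x = x0
            for _ in range(n_steps):
                d = d_up if x > 0 else d_down
                step = np.sqrt(2.0 * d * dt)
                if abs(x) < step:        # within one step of the interface
                    x = 0.5 * step if rng.random() < p else -0.5 * step
                else:                    # ordinary unbiased random walk
                    x += step if rng.random() < 0.5 else -step
            return x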

  6. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

    PubMed

    Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

    2016-12-01

    Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.

  7. Dusty gas with one fluid in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Laibe, Guillaume; Price, Daniel J.

    2014-05-01

    In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be re-formulated to describe the system as a single-fluid mixture. Here, we present a numerical implementation of the one-fluid dusty gas algorithm using smoothed particle hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalized to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: DUSTYBOX, DUSTYWAVE, DUSTYSHOCK and DUSTYOSCILL, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to solve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s, required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time-stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.

  8. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.
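
    A minimal sketch, in Python, of the finite-state-space matrix approach mentioned above: the transition probabilities of a BDP are obtained by exponentiating its tridiagonal rate matrix, which the paper's Laplace-transform continued-fraction technique replaces for infinite state spaces. The rate functions and truncation level are illustrative assumptions:

        import numpy as np
        from scipy.linalg import expm

        def bdp_transition_matrix(lam, mu, n_max, t):
            """lam(k), mu(k): per-state birth/death rates; n_max: truncation."""
            Q = np.zeros((n_max + 1, n_max + 1))
            for k in range(n_max + 1):
                if k < n_max:
                    Q[k, k + 1] = lam(k)     # birth out of state k
                if k > 0:
                    Q[k, k - 1] = mu(k)      # death out of state k
                Q[k, k] = -Q[k].sum()        # conserve probability
            return expm(Q * t)               # P[i, j] = Pr(X_t = j | X_0 = i)

        # e.g. linear birth-death rates, truncated at 200 particles:
        # P = bdp_transition_matrix(lambda k: 0.5 * k + 0.1,
        #                           lambda k: 0.4 * k, 200, 1.0)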

  9. Constraining Particle Variation in Lunar Regolith for Simulant Design

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Hoelzer, Hans

    2008-01-01

    Simulants are used by the lunar engineering community to develop and test technologies for In Situ Resource Utilization (ISRU), excavation and drilling, and for mitigation of hazards to machinery and human health. Working with the United States Geological Survey (USGS), other NASA centers, private industry and academia, Marshall Space Flight Center (MSFC) is leading NASA's lunar regolith simulant program. There are two main efforts: simulant production and simulant evaluation. This work requires a highly detailed understanding of regolith particle type, size, and shape distribution, and of bulk density. The project has developed Figure of Merit (FoM) algorithms to quantitatively compare these characteristics between two materials. The FoM can be used to compare two lunar regolith samples, regolith to simulant, or two parcels of simulant. In work presented here, we use the FoM algorithm to examine the variance of particle type in Apollo 16 highlands regolith core and surface samples. For this analysis we have used internally consistent particle type data for the 90-150 μm fraction of Apollo core 64001/64002 from station 4, core 60009/60010 from station 10, and surface samples from various Apollo 16 stations. We calculate mean modal compositions for each core and for the group of surface samples and quantitatively compare samples of each group to its mean as a measurement of within-group variance; we also calculate an FoM for every sample against the mean composition of 64001/64002. This gives variation with depth at two locations and between Apollo 16 stations. Of the tested groups, core 60009/60010 has the highest internal variance with an average FoM score of 0.76, and core 64001/64002 has the lowest with an average FoM of 0.92. The surface samples have an intermediate internal variance with an average FoM of 0.79. FoMs calculated against the 64001/64002 mean reference composition range from 0.79-0.97 for 64001/64002, from 0.41-0.91 for 60009/60010, and from 0.54-0.93 for the surface samples. Six samples fall below 0.70, and they are also the least mature (i.e., have the lowest I_s/FeO). Because agglutinates are the dominant particle type and the agglutinate population increases with sample maturity (I_s/FeO), the maturity of the sample relative to the reference is a prime determinant of the particle type FoM score within these highland samples.
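
    A minimal sketch, in Python, of comparing two particle-type modal compositions with a simple overlap score on [0, 1]; the project's actual FoM algorithms are more elaborate, so this metric and the example fractions are assumptions for illustration only:

        import numpy as np

        def composition_fom(sample, reference):
            """sample, reference: particle-type fractions; returns 1 for
            identical compositions, 0 for completely disjoint ones."""
            s = np.asarray(sample, float) / np.sum(sample)
            r = np.asarray(reference, float) / np.sum(reference)
            return 1.0 - 0.5 * np.abs(s - r).sum()

        # e.g. hypothetical agglutinate/plagioclase/glass/other fractions:
        # composition_fom([0.55, 0.25, 0.10, 0.10],
        #                 [0.60, 0.22, 0.08, 0.10])  # -> 0.95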

  10. A sharp interface Cartesian grid method for viscous simulation of shocked particle-laden flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-09-01

    A Cartesian grid-based sharp interface method is presented for viscous simulations of shocked particle-laden flows. The moving solid-fluid interfaces are represented using level sets. A moving least-squares reconstruction is developed to apply the no-slip boundary condition at solid-fluid interfaces and to supply viscous stresses to the fluid. The algorithms developed in this paper are benchmarked against similarity solutions for the boundary layer over a fixed flat plate and against numerical solutions for moving interface problems such as shock-induced lift-off of a cylinder in a channel. The framework is extended to 3D and applied to calculate low Reynolds number steady supersonic flow over a sphere. Viscous simulation of the interaction of a particle cloud with an incident planar shock is demonstrated; the average drag on the particles and the vorticity field in the cloud are compared to the inviscid case to elucidate the effects of viscosity on momentum transfer between the particle and fluid phases. The methods developed will be useful for obtaining accurate momentum and heat transfer closure models for macro-scale shocked particulate flow applications such as blast waves and dust explosions.

  11. Retrieval of Polar Stratospheric Cloud Microphysical Properties from Lidar Measurements: Dependence on Particle Shape Assumptions

    NASA Technical Reports Server (NTRS)

    Reichardt, J.; Reichardt, S.; Yang, P.; McGee, T. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    A retrieval algorithm has been developed for the microphysical analysis of polar stratospheric cloud (PSC) optical data obtained using lidar instrumentation. The parameterization scheme of the PSC microphysical properties allows for coexistence of up to three different particle types with size-dependent shapes. The finite difference time domain (FDTD) method has been used to calculate optical properties of particles with maximum dimensions equal to or less than 2 μm and with shapes that can be considered more representative of PSCs on the scale of individual crystals than the commonly assumed spheroids. Specifically, these are irregular and hexagonal crystals. Selection of the optical parameters that are input to the inversion algorithm is based on a potential data set such as that gathered by two of the lidars on board the NASA DC-8 during the Stratospheric Aerosol and Gas Experiment III (SAGE III) Ozone Loss and Validation Experiment (SOLVE) campaign in winter 1999/2000: the Airborne Raman Ozone and Temperature Lidar (AROTEL) and the NASA Langley Differential Absorption Lidar (DIAL). The microphysical retrieval algorithm has been applied to study how particle shape assumptions affect the inversion of lidar data measured in lee-wave PSCs. The model simulations show that under the assumption of spheroidal particle shapes, PSC surface and volume density are systematically smaller than the FDTD-based values by, respectively, approximately 10-30% and approximately 5-23%.

  12. An Energy- and Charge-conserving, Implicit, Electrostatic Particle-in-Cell Algorithm in curvilinear geometry

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2012-03-01

    A recent proof-of-principle study proposes an energy- and charge-conserving, fully implicit particle-in-cell algorithm in one dimension [1], which is able to use timesteps comparable to the dynamical timescale of interest. Here, we generalize the method to employ non-uniform meshes via a curvilinear map. The key enabling technology is a hybrid particle pusher [2], with particle positions updated in logical space and particle velocities updated in physical space. The self-adaptive, charge-conserving particle mover of Ref. [1] is extended to the non-uniform mesh case. The fully implicit implementation, using a Jacobian-free Newton-Krylov iterative solver, remains exactly charge- and energy-conserving. The extension of the formulation to multiple dimensions will be discussed. We present numerical experiments of 1D electrostatic, long-timescale ion-acoustic wave and ion-acoustic shock wave simulations, demonstrating that charge and energy are conserved to round-off for arbitrary mesh non-uniformity, and that the total momentum remains well conserved. [1] Chen, Chacón, Barnes, J. Comput. Phys. 230 (2011). [2] Camporeale and Delzanno, Bull. Am. Phys. Soc. 56(6) (2011); Wang, et al., J. Plasma Physics, 61 (1999).

  13. Pentium Pro inside: I. A treecode at 430 Gigaflops on ASCI Red

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.

    1997-01-01

    As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N^2) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10^5 times more efficient than the O(N^2) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16-processor Pentium Pro Beowulf-class computers (Loki and Hyglac), constructed entirely from commodity personal computer technology at a cost of roughly $50k each in September 1996. The price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period, and produced roughly 10 Gbytes of raw data.
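
    A minimal sketch, in Python, of the O(N^2) direct-sum gravitational acceleration that a treecode replaces; a Barnes-Hut-style treecode approximates the contribution of distant particles with cell multipoles, cutting the cost to O(N log N). Units with G = 1 and the softening length are illustrative assumptions:

        import numpy as np

        def direct_accel(pos, mass, eps=1e-3):
            """pos: (N, 3) positions; mass: (N,); returns (N, 3) accelerations."""
            acc = np.zeros_like(pos)
            for i in range(len(pos)):
                d = pos - pos[i]                        # vectors to all particles
                r2 = np.sum(d * d, axis=1) + eps ** 2   # softened distance squared
                r2[i] = np.inf                          # exclude self-interaction
                acc[i] = np.sum((mass / r2 ** 1.5)[:, None] * d, axis=0)
            return acc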

  14. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm^2 in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm^2 of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  15. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.gov; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.gov

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm^2 in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm^2 of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  16. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time-scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.

  17. Coupled particle-in-cell and Monte Carlo transport modeling of intense radiographic sources

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Welch, D. R.; Oliver, B. V.; Clark, R. E.; Johnson, D. L.; Maenchen, J. E.; Menge, P. R.; Olson, C. L.; Rovang, D. C.

    2002-03-01

    Dose-rate calculations for intense electron-beam diodes using particle-in-cell (PIC) simulations along with Monte Carlo electron/photon transport calculations are presented. The electromagnetic PIC simulations are used to model the dynamic operation of the rod-pinch and immersed-B diodes. These simulations include algorithms for tracking electron scattering and energy loss in dense materials. The positions and momenta of photons created in these materials are recorded and separate Monte Carlo calculations are used to transport the photons to determine the dose in far-field detectors. These combined calculations are used to determine radiographer equations (dose scaling as a function of diode current and voltage) that are compared directly with measured dose rates obtained on the SABRE generator at Sandia National Laboratories.

  18. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  19. Wavelet investigation of preferential concentration in particle-laden turbulence

    NASA Astrophysics Data System (ADS)

    Bassenne, Maxime; Urzay, Javier; Schneider, Kai; Moin, Parviz

    2017-11-01

    Direct numerical simulations of particle-laden homogeneous-isotropic turbulence are employed in conjunction with wavelet multi-resolution analyses to study preferential concentration in both physical and spectral spaces. Spatially-localized energy spectra for velocity, vorticity and particle-number density are computed, along with their spatial fluctuations that enable the quantification of scale-dependent probability density functions, intermittency and inter-phase conditional statistics. The main result is that particles are found in regions of lower turbulence spectral energy than the corresponding mean. This suggests that modeling the subgrid-scale turbulence intermittency is required for capturing the small-scale statistics of preferential concentration in large-eddy simulations. Additionally, a method is defined that decomposes a particle number-density field into the sum of a coherent and an incoherent components. The coherent component representing the clusters can be sparsely described by at most 1.6% of the total number of wavelet coefficients. An application of the method, motivated by radiative-heat-transfer simulations, is illustrated in the form of a grid-adaptation algorithm that results in non-uniform meshes refined around particle clusters. It leads to a reduction of the number of control volumes by one to two orders of magnitude. PSAAP-II Center at Stanford (Grant DE-NA0002373).
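
    A minimal sketch, in Python with PyWavelets, of splitting a signal into coherent (few large wavelet coefficients) and incoherent parts by hard thresholding, shown in one dimension for brevity; the study performs the analogous multi-resolution decomposition of three-dimensional number-density fields, and the wavelet family and threshold rule here are assumptions:

        import numpy as np
        import pywt

        def coherent_split(n, wavelet="db4", level=4):
            """Return (coherent, incoherent) parts of the 1D signal n."""
            coeffs = pywt.wavedec(n, wavelet, level=level)
            flat = np.concatenate([c.ravel() for c in coeffs])
            thr = np.sqrt(2.0 * np.var(flat) * np.log(flat.size))  # universal threshold
            kept = [pywt.threshold(c, thr, mode="hard") for c in coeffs]
            coherent = pywt.waverec(kept, wavelet)[: len(n)]
            return coherent, n - coherent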

  20. A fully-implicit Particle-In-Cell Monte Carlo Collision code for the simulation of inductively coupled plasmas

    NASA Astrophysics Data System (ADS)

    Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.

    2017-12-01

    We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.

  1. Lunar Regolith Characterization for Simulant Design and Evaluation using Figure of Merit Algorithms

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Douglas L.; McLemore, Carole A.; Fikes, John C.; Stoeser, Douglas B.; Wentworth, Susan J.; McKay, David S.

    2009-01-01

    NASA's Marshall Space Flight Center (MSFC), in conjunction with the United States Geological Survey (USGS) and aided by personnel from the Astromaterials Research and Exploration Science group at Johnson Space Center (ARES-JSC), is implementing a new data acquisition strategy to support the development and evaluation of lunar regolith simulants. The first analyses of lunar regolith samples by the simulant group were carried out in early 2008 on samples from Apollo 16 core 64001/64002. The results of these analyses are combined with data compiled from the literature to generate a reference composition and particle size distribution (PSD) for lunar highlands regolith. In this paper we present the specifics of particle type composition and PSD for this reference composition. Furthermore, we use Figure-of-Merit (FoM) routines to measure the characteristics of a number of lunar regolith simulants against this reference composition. The lunar highlands regolith reference composition and the FoM results are presented to guide simulant producers and simulant users in their research and development processes.

  2. INTEGRATION OF PARTICLE-GAS SYSTEMS WITH STIFF MUTUAL DRAG INTERACTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao-Chin; Johansen, Anders, E-mail: ccyang@astro.lu.se, E-mail: anders@astro.lu.se

    2016-06-01

    Numerical simulation of numerous mm/cm-sized particles embedded in a gaseous disk has become an important tool in the study of planet formation and in understanding the dust distribution in observed protoplanetary disks. However, the mutual drag force between the gas and the particles can become so stiff (particularly for small particles and/or strong local solid concentration) that an explicit integration of this system is computationally formidable. In this work, we consider the integration of the mutual drag force in a system of Eulerian gas and Lagrangian solid particles. Despite the entanglement between the gas and the particles under the particle-mesh construct, we are able to devise a numerical algorithm that effectively decomposes the globally coupled system of equations for the mutual drag force, and makes it possible to integrate this system on a cell-by-cell basis, which considerably reduces the computational task required. We use an analytical solution for the temporal evolution of each cell to relieve the time-step constraint posed by the mutual drag force, as well as to achieve the highest degree of accuracy. To validate our algorithm, we use an extensive suite of benchmarks with known solutions in one, two, and three dimensions, including the linear growth and the nonlinear saturation of the streaming instability. We demonstrate numerical convergence and satisfactory consistency in all cases. Our algorithm can, for example, be applied to model the evolution of the streaming instability with mm/cm-sized pebbles at high mass loading, which has important consequences for the formation scenarios of planetesimals.

  3. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an interdisciplinary technique that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation approach and a powerful tool for studying condensed matter systems. The technique yields not only the trajectories of the atoms but also the microscopic details of the atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure, the motion of particles, and their relationship to macroscopic material properties, and we can also study more conveniently the connection between microscopic interactions and macroscopic behavior. The Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of molecules and particles at the microscopic level. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific numerical integration methods are summarized, including the Verlet, leap-frog, and velocity Verlet methods. The method and principle of Monte Carlo simulation are also introduced. Finally, the similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
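
    To make the integrators named above concrete, here is a minimal velocity Verlet step in Python; the harmonic-oscillator force and all parameter values are illustrative assumptions, not taken from the paper.

      import numpy as np

      def velocity_verlet(x, v, force, dt, steps, m=1.0):
          # x_{n+1} = x_n + v_n dt + (dt^2/2m) F(x_n)
          # v_{n+1} = v_n + (dt/2m) (F(x_n) + F(x_{n+1}))
          f = force(x)
          traj = [x]
          for _ in range(steps):
              x = x + v * dt + 0.5 * (dt ** 2 / m) * f
              f_new = force(x)
              v = v + 0.5 * (dt / m) * (f + f_new)
              f = f_new
              traj.append(x)
          return np.array(traj), v

      # Example: harmonic oscillator F(x) = -k x (illustrative parameters).
      k = 1.0
      traj, v = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, dt=0.01, steps=1000)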

  4. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
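
    A minimal numpy sketch of the FTLE computation underlying an LCS analysis (the GPU block decomposition itself is not reproduced here): trajectories of a particle grid define a flow map, whose gradient gives the Cauchy-Green tensor. The velocity field, grid, and integration settings below are illustrative assumptions.

      import numpy as np

      def flow_map(X, Y, T=2.0, n_steps=200):
          # Advect the particle grid through a steady cellular velocity field
          # (an illustrative stand-in for a multi-body gravitational flow).
          x, y = X.copy(), Y.copy()
          dt = T / n_steps
          for _ in range(n_steps):
              u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
              v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
              x, y = x + u * dt, y + v * dt
          return x, y

      def ftle_field(X, Y, T):
          Xf, Yf = flow_map(X, Y, T)
          dx, dy = X[0, 1] - X[0, 0], Y[1, 0] - Y[0, 0]
          # Flow-map gradient by central differences on the initial grid.
          dXf_dx, dXf_dy = np.gradient(Xf, dx, axis=1), np.gradient(Xf, dy, axis=0)
          dYf_dx, dYf_dy = np.gradient(Yf, dx, axis=1), np.gradient(Yf, dy, axis=0)
          ftle = np.zeros_like(X)
          for i in range(X.shape[0]):
              for j in range(X.shape[1]):
                  F = np.array([[dXf_dx[i, j], dXf_dy[i, j]],
                                [dYf_dx[i, j], dYf_dy[i, j]]])
                  C = F.T @ F                              # Cauchy-Green tensor
                  ftle[i, j] = np.log(np.linalg.eigvalsh(C)[-1]) / (2.0 * abs(T))
          return ftle

      X, Y = np.meshgrid(np.linspace(0.0, 2.0, 201), np.linspace(0.0, 1.0, 101))
      field = ftle_field(X, Y, T=2.0)                      # ridges approximate LCSs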

  5. Brownian aggregation rate of colloid particles with several active sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekrasov, Vyacheslav M.; Yurkin, Maxim A.; Chernyshev, Andrei V., E-mail: chern@ns.kinetics.nsc.ru

    2014-08-14

    We theoretically analyze the aggregation kinetics of colloid particles with several active sites. Such particles (so-called "patchy particles") are well known as chemically anisotropic reactants, but the corresponding rate constant of their aggregation has not yet been established in a convenient analytical form. Using the kinematic approximation for the diffusion problem, we derived an analytical formula for the diffusion-controlled reaction rate constant between two colloid particles (or clusters) with several small active sites under the following assumptions: the relative translational motion is Brownian diffusion, and the isotropic stochastic reorientation of each particle is Markovian and arbitrarily correlated. This formula was shown to produce accurate results in comparison with more sophisticated approaches. Also, to account for the case of a low number of active sites per particle, we used a Monte Carlo stochastic algorithm based on the Gillespie method. Simulations showed that such a discrete model is required when this number is less than 10. Finally, we applied the developed approach to the simulation of immunoagglutination, assuming that the formed clusters have a fractal structure.
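
    For readers unfamiliar with the Gillespie method mentioned above, here is a minimal direct-method stochastic simulation of pairwise aggregation with a constant rate kernel; the kernel choice and all parameters are assumptions for illustration and do not reproduce the paper's patchy-particle rates.

      import numpy as np
      rng = np.random.default_rng(0)

      def gillespie_aggregation(n_monomers, k, t_end):
          # Direct-method SSA: X_i + X_j -> X_{i+j} with a constant kernel k,
          # so the total propensity is k * n(n-1)/2 for n current clusters.
          clusters = [1] * n_monomers
          t = 0.0
          while t < t_end and len(clusters) > 1:
              n = len(clusters)
              a0 = k * n * (n - 1) / 2.0
              t += rng.exponential(1.0 / a0)          # exponential waiting time
              if t > t_end:
                  break
              i, j = rng.choice(n, size=2, replace=False)
              clusters[i] += clusters[j]              # merge the sampled pair
              clusters.pop(j)
          return clusters

      sizes = gillespie_aggregation(n_monomers=1000, k=1e-3, t_end=5.0)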

  6. Lagrangian simulation of mixing and reactions in complex geochemical systems

    NASA Astrophysics Data System (ADS)

    Engdahl, Nicholas B.; Benson, David A.; Bolster, Diogo

    2017-04-01

    Simulations of detailed geochemical systems have traditionally been restricted to Eulerian reactive transport algorithms. This note introduces a Lagrangian method for modeling multicomponent reaction systems. The approach uses standard random walk-based methods for the particle motion steps but allows the particles to interact with each other by exchanging mass of their various chemical species. The colocation density of each particle pair is used to calculate the mass transfer rate, which creates a local disequilibrium that is then relaxed back toward equilibrium using the reaction engine PhreeqcRM. The mass exchange is the only step where the particles interact and the remaining transport and reaction steps are entirely independent for each particle. Several validation examples are presented, which reproduce well-known analytical solutions. These are followed by two demonstration examples of a competitive decay chain and an acid-mine drainage system. The source code, entitled Complex Reaction on Particles (CRP), and files needed to run these examples are hosted openly on GitHub (https://github.com/nbengdahl/CRP), so as to enable interested readers to readily apply this approach with minimal modifications.
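
    A schematic 1-D sketch of the note's two ingredients, random-walk transport plus pairwise mass exchange between colocated particles, might look as follows; the Gaussian encounter kernel, the stability factor, and all parameters are assumptions, and the equilibrium chemistry step (PhreeqcRM) is omitted entirely.

      import numpy as np
      rng = np.random.default_rng(1)

      D, dt = 1e-3, 0.1                    # diffusion coefficient, time step (assumed)
      x = rng.uniform(0.0, 1.0, 200)       # 1-D particle positions
      mass = np.where(x < 0.5, 1.0, 0.0)   # one species, step initial condition

      for _ in range(100):
          # Random-walk transport step, independent for each particle.
          x += np.sqrt(2.0 * D * dt) * rng.normal(size=x.size)
          # Mass exchange weighted by a Gaussian co-location density (assumed form);
          # the symmetric kernel makes the pairwise transfers mass conserving.
          w = np.exp(-(x[:, None] - x[None, :]) ** 2 / (8.0 * D * dt))
          np.fill_diagonal(w, 0.0)
          beta = 0.5 / w.sum(axis=1).max()          # small factor keeps the step stable
          mass += beta * (w @ mass - w.sum(axis=1) * mass)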

  7. A cellular automaton method to simulate the microstructure and evolution of low-enriched uranium (LEU) U–Mo/Al dispersion type fuel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drera, Saleem S.; Hofman, Gerard L.; Kee, Robert J.

    Low-enriched uranium (LEU) fuel plates for high power materials test reactors (MTR) are composed of nominally spherical uranium-molybdenum (U-Mo) particles within an aluminum matrix. Fresh U-Mo particles typically range between 10 and 100 μm in diameter, with particle volume fractions up to 50%. As the fuel ages, reaction-diffusion processes cause the formation and growth of interaction layers that surround the fuel particles. The growth rate depends upon the temperature and radiation environment. The cellular automaton algorithm described in this paper can synthesize realistic random fuel-particle structures and simulate the growth of the intermetallic interaction layers. Examples in the present paper pack approximately 1000 particles into three-dimensional rectangular fuel structures that are approximately 1 mm on each side. The computational approach is designed to yield synthetic microstructures consistent with images from actual fuel plates and is validated by comparison with empirical data on actual fuel plates.

  8. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, diffusion maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
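
    As a toy illustration of the multi-fidelity fusion idea (not the authors' implementation), the following sketch trains a Gaussian process on plentiful coarse data and a second process on the coarse-to-fine residuals; the model functions, kernels, and sample sizes are assumptions.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(2)
      f_lo = lambda x: np.sin(8.0 * x)                   # cheap, biased auxiliary model
      f_hi = lambda x: np.sin(8.0 * x) + 0.3 * x ** 2    # expensive "truth" (synthetic)

      X_lo = np.linspace(0.0, 1.0, 40)[:, None]          # plentiful coarse samples
      X_hi = rng.uniform(0.0, 1.0, 6)[:, None]           # a few fine-level samples

      gp_lo = GaussianProcessRegressor(kernel=RBF(0.1)).fit(X_lo, f_lo(X_lo.ravel()))
      # Second level models only the coarse-to-fine discrepancy.
      resid = f_hi(X_hi.ravel()) - gp_lo.predict(X_hi)
      gp_dx = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_hi, resid)

      X = np.linspace(0.0, 1.0, 200)[:, None]
      y_fused = gp_lo.predict(X) + gp_dx.predict(X)      # fused prediction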

  9. Tidal Turbine Array Optimization Based on the Discrete Particle Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Guo-wei; Wu, He; Wang, Xiao-yong; Zhou, Qing-wei; Liu, Xiao-man

    2018-06-01

    In consideration of the resources wasted by an unreasonable layout scheme of tidal current turbines, which would affect the ratio of cost to power output, the particle swarm optimization algorithm is introduced and improved in this paper. In order to solve the problem of the optimal array of tidal turbines, the discrete particle swarm optimization (DPSO) algorithm has been developed by re-defining the updating strategies of the particles' velocity and position. This paper analyzes the optimization problem of micrositing of tidal current turbines by adjusting each turbine's position, where the maximum value of total electric power is obtained at the maximum speed in the flood tide and ebb tide. Firstly, the best number of installed turbines is generated by maximizing the output energy in the given tidal farm using the Farm/Flux and empirical method. Secondly, considering the wake effect, the reasonable distance between turbines, and the factors influencing tidal velocities in the tidal farm, the Jensen wake model and an elliptic distribution model are selected for calculating the turbines' total generating capacity at the maximum speed in the flood tide and ebb tide. Finally, the total generating capacity, regarded as the objective function, is calculated in the final simulation, so that the DPSO can guide the individuals to the feasible region and the optimal position. The results show that the optimization algorithm, which yields 6.19% more resource output than the empirical method, can be regarded as a good tool for the engineering design of tidal energy demonstration projects.
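
    The paper's exact velocity and position updates are not reproduced here, so the sketch below uses the classic binary (discrete) PSO of Kennedy and Eberhart, in which a sigmoid of the velocity sets the probability of each bit being 1; the toy layout fitness with a wake-like neighbor penalty is an assumption.

      import numpy as np
      rng = np.random.default_rng(3)

      def binary_pso(fitness, n_bits, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          # Velocities are real-valued; a sigmoid of the velocity gives the
          # probability that each position bit is set to 1 (discrete PSO).
          x = rng.integers(0, 2, (n_particles, n_bits)).astype(float)
          v = rng.normal(0.0, 1.0, (n_particles, n_bits))
          pbest = x.copy()
          pbest_f = np.array([fitness(p) for p in x])
          gbest = pbest[np.argmax(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              v = np.clip(v, -10.0, 10.0)                    # avoid sigmoid saturation
              x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)
              f = np.array([fitness(p) for p in x])
              improved = f > pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              gbest = pbest[np.argmax(pbest_f)].copy()
          return gbest, pbest_f.max()

      # Toy 1-D "micrositing": reward active turbines, penalize adjacent ones
      # (a crude stand-in for wake losses between close neighbors).
      def layout_fitness(bits):
          return bits.sum() - 2.0 * (bits[:-1] * bits[1:]).sum()

      best_layout, best_score = binary_pso(layout_fitness, n_bits=20)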

  10. An energy- and charge-conserving, nonlinearly implicit, electromagnetic 1D-3V Vlasov-Darwin particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2014-10-01

    A recent proof-of-principle study proposes a nonlinear electrostatic implicit particle-in-cell (PIC) algorithm in one dimension (Chen et al., 2011). The algorithm employs a kinetically enslaved Jacobian-free Newton-Krylov (JFNK) method, and conserves energy and charge to numerical round-off. In this study, we generalize the method to electromagnetic simulations in 1D using the Darwin approximation to Maxwell's equations, which avoids radiative noise issues by ordering out the light wave. An implicit, orbit-averaged, time-space-centered finite difference scheme is employed in both the 1D Darwin field equations (in potential form) and the 1D-3V particle orbit equations to produce a discrete system that remains exactly charge- and energy-conserving. Furthermore, enabled by the implicit Darwin equations, exact conservation of the canonical momentum per particle in any ignorable direction is enforced via a suitable scattering rule for the magnetic field. We have developed a simple preconditioner that targets electrostatic waves and skin currents, and allows us to employ time steps O(√(m_i/m_e) c/v_eT) larger than the explicit CFL limit. Several 1D numerical experiments demonstrate the accuracy, performance, and conservation properties of the algorithm. In particular, the scheme is shown to be second-order accurate, and CPU speedups of more than three orders of magnitude vs. an explicit Vlasov-Maxwell solver are demonstrated in the "cold" plasma regime (where kλD ≪ 1).
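
    A minimal sketch of the time-centered, Jacobian-free Newton-Krylov idea on a single-particle problem (the actual algorithm couples particles and Darwin fields): a Crank-Nicolson velocity update in a uniform magnetic field solved with scipy's newton_krylov, which preserves kinetic energy to solver tolerance even at large time steps. The field values and step size are illustrative assumptions.

      import numpy as np
      from scipy.optimize import newton_krylov

      q_m, dt = 1.0, 0.5                       # charge-to-mass ratio, large time step
      B = np.array([0.0, 0.0, 1.0])            # uniform magnetic field (illustrative)

      def push(v_old):
          # Time-centered (Crank-Nicolson) velocity update, solved Jacobian-free.
          def residual(v_new):
              v_mid = 0.5 * (v_old + v_new)    # time-centered velocity
              return v_new - v_old - dt * q_m * np.cross(v_mid, B)
          return newton_krylov(residual, v_old.copy(), f_tol=1e-12)

      v = np.array([1.0, 0.0, 0.0])
      for _ in range(100):
          v = push(v)
      print(np.dot(v, v))   # kinetic energy is preserved to solver tolerance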

  11. How to model supernovae in simulations of star and galaxy formation

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguère, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem

    2018-06-01

    We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting `preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common `fully thermal' (energy-dump) or `fully kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution (<100 M⊙). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.

  12. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
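
    To illustrate the Taylor-expansion idea behind multipole methods such as FMA and MDMA (this is not the MDMA code), the sketch below approximates the far-field 1/r potential of a charge cluster by its monopole and dipole moments and compares against the direct sum; all data are synthetic.

      import numpy as np
      rng = np.random.default_rng(4)

      src = rng.normal(0.0, 0.1, (100, 3))       # source cluster near the origin
      q = rng.uniform(0.5, 1.5, 100)             # positive "charges"
      x = np.array([4.0, 1.0, 0.0])              # well-separated evaluation point

      direct = np.sum(q / np.linalg.norm(x - src, axis=1))   # exact pairwise sum

      # Taylor expansion about the geometric center c:
      #   sum_i q_i/|x - s_i| ~= Q/|x - c| + p.(x - c)/|x - c|^3 + ...
      c = src.mean(axis=0)
      r = x - c
      Q = q.sum()                                 # monopole moment
      p = (q[:, None] * (src - c)).sum(axis=0)    # dipole moment
      approx = Q / np.linalg.norm(r) + p @ r / np.linalg.norm(r) ** 3

      print(direct, approx)    # agreement improves as the separation grows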

  13. Biomimicry of symbiotic multi-species coevolution for discrete and continuous optimization in RFID networks.

    PubMed

    Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan

    2017-03-01

    In recent years, symbiosis, as a rich source of potential engineering applications and computational models, has attracted more and more attention in the adaptive complex systems and evolutionary computing domains. Inspired by the different forms of symbiotic coevolution found in nature, this paper proposes a series of multi-swarm particle swarm optimizers called PS2Os, which extend the single-population particle swarm optimization (PSO) algorithm to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS2O are initiated to mimic the mutualism, commensalism, predation, and competition mechanisms, respectively. In the experiments, with five benchmark problems, the proposed algorithms are shown to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of the symbiotic species in each PS2O version are also studied to demonstrate how the heterogeneity of the different symbiotic interrelationships affects the algorithm's performance. Then PS2O is used for solving the radio frequency identification (RFID) network planning (RNP) problem with a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks, in terms of optimization accuracy and computational robustness.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen

    The Sandia hyperspectral upper-bound spectrum algorithm (hyper-UBS) is a cosmic ray despiking algorithm for hyperspectral data sets. When naturally-occurring, high-energy (gigaelectronvolt) cosmic rays impact the earth's atmosphere, they create an avalanche of secondary particles which will register as a large, positive spike on any spectroscopic detector they hit. Cosmic ray spikes are therefore an unavoidable spectroscopic contaminant which can interfere with subsequent analysis. A variety of cosmic ray despiking algorithms already exist and can potentially be applied to hyperspectral data matrices, most notably the upper-bound spectrum data matrices (UBS-DM) algorithm by Dongmao Zhang and Dor Ben-Amotz, which served as the basis for the hyper-UBS algorithm. However, the existing algorithms either cannot be applied to hyperspectral data, require information that is not always available, introduce undesired spectral bias, or have otherwise limited effectiveness for some experimentally relevant conditions. Hyper-UBS is more effective at removing a wider variety of cosmic ray spikes from hyperspectral data without introducing undesired spectral bias. In addition to the core algorithm, the Sandia hyper-UBS software package includes additional source code useful in evaluating the effectiveness of the hyper-UBS algorithm. The accompanying source code includes code to generate simulated hyperspectral data contaminated by cosmic ray spikes, several existing despiking algorithms, and code to evaluate the performance of the despiking algorithms on simulated data.

  15. ZENO: N-body and SPH Simulation Codes

    NASA Astrophysics Data System (ADS)

    Barnes, Joshua E.

    2011-02-01

    The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. ZENO programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: structured data file utilities, which facilitate basic operations on binary data, including import/export of ZENO data to other systems; snapshot generation routines, which create particle distributions with various properties (systems with user-specified density profiles can be realized in collisionless or gaseous form, and multiple spherical and disk components may be set up in mutual equilibrium); snapshot manipulation routines, which permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle; simulation codes, comprising both pure N-body and combined N-body/SPH programs (pure N-body codes are available in both uniprocessor and parallel versions, while the SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models); snapshot analysis programs, which calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions; and visualization programs, which generate interactive displays and produce still images and videos of particle distributions, where the user may specify arbitrary color schemes and viewing transformations.

  16. A particle filter for multi-target tracking in track before detect context

    NASA Astrophysics Data System (ADS)

    Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud

    2016-10-01

    The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene, because it makes use of unthresholded observations and incorporates a binary target-existence variable into its target state estimation process when implemented as a particle filter (PF). This paper proposes a recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNRs). The algorithm's successful performance is demonstrated using a simulated two-target example.
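
    A minimal bootstrap particle filter in Python conveys the estimation machinery used by PF-TBD methods; the 1-D random-walk state model, noise levels, and particle count are assumptions, and the binary existence variable of the full TBD formulation is omitted.

      import numpy as np
      rng = np.random.default_rng(5)

      def bootstrap_pf(y, n_p=2000, q_std=0.5, r_std=1.0):
          # Bootstrap PF for a 1-D random-walk state observed in Gaussian noise.
          x = rng.normal(0.0, 5.0, n_p)                  # initial particle cloud
          estimates = []
          for yk in y:
              x = x + rng.normal(0.0, q_std, n_p)        # propagate: process model
              w = np.exp(-0.5 * ((yk - x) / r_std) ** 2) # weight: likelihood
              w /= w.sum() + 1e-300
              estimates.append(np.sum(w * x))            # posterior-mean estimate
              x = x[rng.choice(n_p, size=n_p, p=w)]      # multinomial resampling
          return np.array(estimates)

      truth = np.cumsum(rng.normal(0.0, 0.5, 100))
      obs = truth + rng.normal(0.0, 1.0, 100)
      print(np.abs(bootstrap_pf(obs) - truth).mean())    # mean tracking error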

  17. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, and electro-magnetic phenomena. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, and this can become a major part of the cost of a simulation. It is natural, therefore, that the concept of the meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variations are reviewed, focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  18. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    NASA Astrophysics Data System (ADS)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten

    2017-11-01

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  19. Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Ellison, Charles Leland

    Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators --- known as variational integration --- to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate - the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics. The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are numerically demonstrated. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.
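
    A small example of the variational-integration procedure discussed above, for a nondegenerate Lagrangian: discretizing the action of L = (m/2)v^2 - V(q) with a rectangle rule and applying the discrete Euler-Lagrange equations yields the Stormer-Verlet update below; the pendulum potential and step size are illustrative assumptions.

      import numpy as np

      def variational_step(q_prev, q_curr, dVdq, dt, m=1.0):
          # Discrete Lagrangian (rectangle rule):
          #   L_d(q_k, q_{k+1}) = dt * [ (m/2) ((q_{k+1}-q_k)/dt)^2 - V(q_k) ].
          # Discrete Euler-Lagrange, D2 L_d(q_{k-1}, q_k) + D1 L_d(q_k, q_{k+1}) = 0,
          # rearranges to the familiar Stormer-Verlet position update.
          return 2.0 * q_curr - q_prev - (dt ** 2 / m) * dVdq(q_curr)

      # Pendulum, V(q) = -cos(q): the energy error stays bounded over long times,
      # the hallmark of a variational (symplectic) integrator.
      dt, q = 0.1, [0.0, 0.1]
      for _ in range(10000):
          q.append(variational_step(q[-2], q[-1], dVdq=np.sin, dt=dt))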

  20. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints

    NASA Astrophysics Data System (ADS)

    Fiore, Andrew M.; Swan, James W.

    2018-01-01

    Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805).
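
    The core fluctuation-dissipation requirement described above can be shown in a few lines: displacements are drawn with covariance proportional to the mobility. The dense-Cholesky sampler below is the O(N^3) reference approach, not the paper's fast Ewald-split method; the diagonal mobility example and parameters are assumptions.

      import numpy as np
      rng = np.random.default_rng(6)

      def brownian_step(x, F, mobility, kT, dt):
          # dx = M F dt + sqrt(2 kT dt) B xi with B B^T = M, so the displacement
          # covariance matches the mobility (fluctuation-dissipation theorem).
          M = mobility(x)                         # 3N x 3N positive-definite mobility
          B = np.linalg.cholesky(M)               # O(N^3) reference factorization
          xi = rng.normal(size=x.size)
          return x + (M @ F) * dt + np.sqrt(2.0 * kT * dt) * (B @ xi)

      # Free-draining example: diagonal Stokes mobility 1/(6 pi eta a). A model
      # like Rotne-Prager-Yamakawa would populate the off-diagonal blocks.
      eta, a = 1.0, 0.1
      mob = lambda x: np.eye(x.size) / (6.0 * np.pi * eta * a)
      x = rng.uniform(0.0, 1.0, 9)               # three particles in 3-D
      x = brownian_step(x, F=np.zeros(9), mobility=mob, kT=1.0, dt=1e-3)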

  1. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively optimize several noteworthy choices, including the weighting coefficients, the inversion range, and the selection of the inversion method from between two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
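
    As a building-block illustration of the nonnegative Tikhonov regularization at the heart of such inversions (not the full WIRNNT-PT algorithm), the following sketch solves min ||Ax - b||^2 + λ||x||^2 subject to x ≥ 0 by applying NNLS to an augmented system; the toy kernel and noise level are assumptions.

      import numpy as np
      from scipy.optimize import nnls

      def tikhonov_nnls(A, b, lam):
          # min ||A x - b||^2 + lam ||x||^2  s.t.  x >= 0, via NNLS on the
          # augmented system [A; sqrt(lam) I] x = [b; 0].
          n = A.shape[1]
          A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
          b_aug = np.concatenate([b, np.zeros(n)])
          x, _ = nnls(A_aug, b_aug)
          return x

      # Ill-conditioned toy kernel: regularization stabilizes the recovery.
      rng = np.random.default_rng(11)
      A = np.exp(-np.subtract.outer(np.linspace(0, 1, 50), np.linspace(0, 1, 30)) ** 2 / 0.01)
      x_true = np.maximum(0.0, np.sin(np.linspace(0, np.pi, 30)))
      b = A @ x_true + rng.normal(0.0, 1e-3, 50)
      x_rec = tikhonov_nnls(A, b, lam=1e-4)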

  2. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computational paradigm established by simulating natural evolutionary processes, based on the concepts of Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the ability of the fruit fly optimization algorithm (FOA) to search for the optimal solution, in order to avoid settling on local extremum solutions. Evolutionary computation has grown to include concepts from animal foraging behaviour and group behaviour. This study discusses three common evolutionary computation methods and compares them with the modified fruit fly optimization algorithm (MFOA). It further investigates the algorithms' ability to compute the extreme values of three mathematical functions, their execution speed, and the forecasting ability of models built using general regression neural network (GRNN) parameters optimised by each algorithm. The findings indicate that there is no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both are better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performs better than particle swarm optimization in execution speed, and the forecasting ability of the model built using the MFOA's GRNN parameters is better than that of the other three forecasting models.

  3. Performance assessment of a pre-partitioned adaptive chemistry approach in large-eddy simulation of turbulent flames

    NASA Astrophysics Data System (ADS)

    Pepiot, Perrine; Liang, Youwen; Newale, Ashish; Pope, Stephen

    2016-11-01

    A pre-partitioned adaptive chemistry (PPAC) approach recently developed and validated in the simplified framework of a partially-stirred reactor is applied to the simulation of turbulent flames using a LES/particle PDF framework. The PPAC approach was shown to simultaneously provide significant savings in CPU and memory requirements, two major limiting factors in LES/particle PDF. The savings are achieved by providing each particle in the PDF method with a specialized reduced representation and kinetic model adjusted to its changing composition. Both representation and model are identified efficiently from a pre-determined list using a low-dimensional binary-tree search algorithm, thereby keeping the run-time overhead associated with the adaptive strategy to a minimum. The Sandia D flame is used as benchmark to quantify the performance of the PPAC algorithm in a turbulent combustion setting. In particular, the CPU and memory benefits, the distribution of the various representations throughout the computational domain, and the relationship between the user-defined error tolerances used to derive the reduced representations and models and the actual errors observed in LES/PDF are characterized. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences under Award Number DE-FG02-90ER14128.

  4. Automated detection and analysis of particle beams in laser-plasma accelerator simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela Mayumi; Geddes, C.G.; Cormier-Michel, E.

    Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the processes involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Understanding of such accelerators, and their development, offers high accelerating gradients, potentially reducing the size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread in momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense X-ray radiation to terahertz sources, and are used in many applications including medical radiotherapy and imaging. To extract information from the simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the adequate parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive data sets over many time steps using several plots, a routine that is unfeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1GB and 100GB in size, but simulations in 3D are of the order of TBs. The increase in the number of datasets and dataset sizes leads to a need for automatic routines to recognize particle patterns as particle bunches (beams of electrons) for subsequent analysis. Because of the growth in dataset size, the application of machine learning techniques for scientific data mining is increasingly considered. In plasma simulations, Bagherjeiran et al. presented a comprehensive report on applying graph-based techniques for orbit classification. They used the KAM classifier to label points and components in single and multiple orbits. Love et al. conducted an image space analysis of coherent structures in plasma simulations. They used a number of segmentation and region-growing techniques to isolate regions of interest in orbit plots. Both approaches analyzed particle accelerator data, targeting the system dynamics in terms of particle orbits. However, they did not address particle dynamics as a function of time or inspect the behavior of bunches of particles. Ruebel et al. addressed the visual analysis of massive laser wakefield acceleration (LWFA) simulation data using interactive procedures to query the data. Sophisticated visualization tools were provided to inspect the data manually. Ruebel et al. integrated these tools into the visualization and analysis system VisIt, in addition to utilizing efficient data management based on HDF5, H5Part, and the index/query tool FastBit. Ruebel et al. also proposed automated beam path analysis using a suite of methods to classify particles in simulation data and to analyze their temporal evolution. To enable researchers to accurately define particle beams, the method computes a set of measures based on the path of particles relative to the distance of the particles to a beam.
To achieve good performance, this framework uses an analysis pipeline designed to quickly reduce the amount of data that needs to be considered in the actual path distance computation. As part of this process, region-growing methods are utilized to detect particle bunches at single time steps. Efficient data reduction is essential to enable automated analysis of large data sets, as described in the next section, where data reduction methods are steered to the particular requirements of our clustering analysis. Previously, we described the application of a set of algorithms to automate the data analysis and classification of particle beams in LWFA simulation data, identifying locations with a high density of high-energy particles. These algorithms detected high-density locations (nodes) in each time step, i.e., maximum points of the particle distribution for only one spatial variable. Each node was correlated to a node in previous or later time steps by linking these nodes according to a pruned minimum spanning tree (PMST). We call the PMST representation a 'lifetime diagram', a graphical tool showing temporal information of highly dense groups of particles in the longitudinal direction for the time series. Electron bunch compactness was described by another processing step, designed to partition each time step, using fuzzy clustering, into a fixed number of clusters.

  5. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on an NVIDIA Tesla C2075. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtering, constellation design, and Monte-Carlo simulation of an orbital computation.
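
    A serial CPU sketch of the initial screening task using the openly available Python sgp4 library (the paper's GPU batching is not reproduced): propagate all objects on a one-minute grid and flag pairs closer than a threshold. The input file name, distance threshold, and epoch are assumptions.

      import numpy as np
      from itertools import combinations
      from sgp4.api import Satrec, jday

      # Read a file of two-line element sets (file name is a placeholder).
      with open("tles.txt") as f:
          lines = [ln.rstrip() for ln in f if ln.strip()]
      sats = [Satrec.twoline2rv(l1, l2) for l1, l2 in zip(lines[::2], lines[1::2])]

      jd, fr = jday(2017, 5, 1, 0, 0, 0.0)       # screening epoch (illustrative)
      threshold_km = 10.0

      for minute in range(0, 3 * 24 * 60):       # 3-day window, 1-minute grid
          positions = []
          for s in sats:
              err, r, v = s.sgp4(jd, fr + minute / 1440.0)   # TEME position, km
              positions.append(np.array(r) if err == 0 else np.full(3, np.inf))
          positions = np.asarray(positions)
          for i, j in combinations(range(len(sats)), 2):
              if np.linalg.norm(positions[i] - positions[j]) < threshold_km:
                  print(f"close approach: objects {i}/{j} at t+{minute} min")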

  6. Lagrangian Particle Tracking in a Discontinuous Galerkin Method for Hypersonic Reentry Flows in Dusty Environments

    NASA Astrophysics Data System (ADS)

    Ching, Eric; Lv, Yu; Ihme, Matthias

    2017-11-01

    Recent interest in human-scale missions to Mars has sparked active research into high-fidelity simulations of reentry flows. A key feature of the Mars atmosphere is the high levels of suspended dust particles, which can not only enhance erosion of thermal protection systems but also transfer energy and momentum to the shock layer, increasing surface heat fluxes. Second-order finite-volume schemes are typically employed for hypersonic flow simulations, but such schemes suffer from a number of limitations. An attractive alternative is discontinuous Galerkin methods, which benefit from arbitrarily high spatial order of accuracy, geometric flexibility, and other advantages. As such, a Lagrangian particle method is developed in a discontinuous Galerkin framework to enable the computation of particle-laden hypersonic flows. Two-way coupling between the carrier and disperse phases is considered, and an efficient particle search algorithm compatible with unstructured curved meshes is proposed. In addition, variable thermodynamic properties are considered to accommodate high-temperature gases. The performance of the particle method is demonstrated in several test cases, with focus on the accurate prediction of particle trajectories and heating augmentation. Financial support from a Stanford Graduate Fellowship and the NASA Early Career Faculty program is gratefully acknowledged.

  7. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    PubMed

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect of the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology of understanding a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of graphics processing units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
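
    A numpy sketch of the two steps that dominate a Smoldyn-style simulation, per-molecule Brownian diffusion and a binding-radius bimolecular reaction; the per-molecule independence of the diffusion step is what maps naturally onto GPU threads. Parameters are assumptions, and the all-pairs distance check is a simplification of Smoldyn's neighbor-list search.

      import numpy as np
      rng = np.random.default_rng(7)

      D, dt, rho = 1.0, 1e-4, 0.05          # diffusion coeff., time step, binding radius
      A = rng.uniform(0.0, 1.0, (500, 3))   # species A molecule positions
      B = rng.uniform(0.0, 1.0, (500, 3))   # species B molecule positions

      for _ in range(100):
          # Diffusion: an independent Gaussian step per molecule. This per-molecule
          # independence is exactly what maps onto one-GPU-thread-per-molecule.
          A += np.sqrt(2.0 * D * dt) * rng.normal(size=A.shape)
          B += np.sqrt(2.0 * D * dt) * rng.normal(size=B.shape)
          # Bimolecular reaction A + B -> 0 for pairs within the binding radius.
          # (All-pairs check for brevity; Smoldyn uses neighbor lists and pairs
          # molecules one-to-one rather than removing every overlapping partner.)
          d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
          ai, bi = np.where(d < rho)
          A = A[np.setdiff1d(np.arange(len(A)), ai)]
          B = B[np.setdiff1d(np.arange(len(B)), bi)]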

  8. A Meshless Method for Magnetohydrodynamics and Applications to Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    McNally, Colin P.

    2012-08-01

    This thesis presents an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. The code has been parallelized by adapting the framework provided by Gadget-2. A set of standard test problems, including one part in a million amplitude linear MHD waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities are presented. Finally we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. We provide a rigorous methodology for verifying a numerical method on two dimensional Kelvin-Helmholtz instability. The test problem was run in the Pencil Code, Athena, Enzo, NDSPHMHD, and Phurbas. A strict comparison, judgment, or ranking between codes is beyond the scope of this work, although this work provides the mathematical framework needed for such a study. Nonetheless, how the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity, yet it still shows the poor performance of Smoothed Particle Hydrodynamics. We then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in Smoothed Particle Hydrodynamics interpolation. In astrophysical magnetohydrodynamics (MHD) and electrodynamics simulations, numerically enforcing the divergence free constraint on the magnetic field has been difficult. We observe that for point-based discretization, as used in finite-difference type and pseudo-spectral methods, the divergence free constraint can be satisfied entirely by a choice of interpolation used to define the derivatives of the magnetic field. As an example we demonstrate a new class of finite-difference type derivative operators on a regular grid which has the divergence free property. This principle clarifies the nature of magnetic monopole errors. The principles and techniques demonstrated in this chapter are particularly useful for the magnetic field, but can be applied to any vector field. Finally, we examine global zoom-in simulations of turbulent magnetorotationally unstable flow. We extract and analyze the high-current regions produced in the turbulent flow. Basic parameters of these regions are abstracted, and we build one dimensional models including non-ideal MHD, and radiative transfer.
For sufficiently high temperatures, an instability resulting from the temperature dependence of the Ohmic resistivity is found. This instability concentrates current sheets, resulting in the possibility of rapid heating from temperatures on the order of 600 Kelvin to 2000 Kelvin in magnetorotationally turbulent regions of protoplanetary disks. This is a possible local mechanism for the melting of chondrules and the formation of other high-temperature materials in protoplanetary disks.
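
    A 1-D, quadratic-order illustration of the Moving Least Squares interpolation underlying the method (Phurbas itself uses third-order fits in three dimensions): a weighted local polynomial fit returns both the field value and its derivative at the evaluation point. The weight function, bandwidth, and test data are assumptions.

      import numpy as np
      rng = np.random.default_rng(8)

      def mls_fit(x0, pts, vals, h):
          # Weighted quadratic moving-least-squares fit around x0 (1-D):
          # minimize sum_i w_i (p(x_i) - f_i)^2 over polynomial coefficients.
          w = np.exp(-((pts - x0) / h) ** 2)           # Gaussian weight function
          V = np.vander(pts - x0, 3, increasing=True)  # [1, dx, dx^2] basis
          sw = np.sqrt(w)
          coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * vals, rcond=None)
          return coef[0], coef[1]                      # f(x0), f'(x0)

      pts = np.sort(rng.uniform(0.0, 2.0 * np.pi, 60)) # scattered "particles"
      vals = np.sin(pts)
      f0, df0 = mls_fit(np.pi / 4, pts, vals, h=0.5)
      print(f0, df0)   # approximately sin(pi/4) and cos(pi/4)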

  9. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
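
    A compact example of FFT-based deconvolution of a synthetic particle field, in the spirit of (though much simpler than) the reconstruction described above; the Wiener regularization, Gaussian PSF, and image sizes are assumptions.

      import numpy as np

      def wiener_deconvolve(image, psf, nsr=1e-2):
          # Wiener filter: F_hat = H* G / (|H|^2 + NSR), computed with 2-D FFTs.
          H = np.fft.fft2(np.fft.ifftshift(psf))
          G = np.fft.fft2(image)
          return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

      rng = np.random.default_rng(9)
      field = np.zeros((128, 128))
      field[rng.integers(0, 128, 40), rng.integers(0, 128, 40)] = 1.0   # point particles
      yy, xx = np.mgrid[-64:64, -64:64]
      psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 2.0 ** 2))             # Gaussian blur
      psf /= psf.sum()
      blurred = np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(np.fft.ifftshift(psf))))
      recovered = wiener_deconvolve(blurred, psf)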

  10. Incompressible SPH (ISPH) with fast Poisson solver on a GPU

    NASA Astrophysics Data System (ADS)

    Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.

    2018-05-01

    This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several millions of particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D), which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, GPU speed-ups of up to 10-18 times and 1.1-4.5 times are demonstrated relative to single-threaded and 16-threaded CPU run times, respectively.
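
    A small sketch of the pressure Poisson solve pattern with the Jacobi preconditioner that the paper found robust, using scipy in place of ViennaCL; the five-point Laplacian stand-in for the ISPH PPE matrix and the random source term are assumptions.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg, LinearOperator

      # Five-point Laplacian on an n x n grid as a stand-in for the PPE matrix.
      n = 64
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()

      b = np.random.default_rng(10).normal(size=n * n)   # divergence source term

      # Jacobi preconditioner: multiplication by the inverse diagonal of A,
      # cheap to rebuild every time step and robust for fragmented flows.
      inv_diag = 1.0 / A.diagonal()
      M = LinearOperator(A.shape, matvec=lambda x: inv_diag * x)

      p, info = cg(A, b, M=M)                            # info == 0 on convergence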

  11. Particle-flow reconstruction and global event description with the CMS detector

    DOE PAGES

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.; ...

    2017-10-06

    The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic tau decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. In conclusion, the data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

  12. Particle-flow reconstruction and global event description with the CMS detector

    NASA Astrophysics Data System (ADS)

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.; ...
M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Sguazzoni, G.; Strom, D.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; De Nardo, G.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Passaseo, M.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Rossin, R.; Simonetto, F.; Torassa, E.; Ventura, S.; Zanetti, M.; Zotto, P.; Zumerle, G.; Braghieri, A.; Fallavollita, F.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Ressegotti, M.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Mantovani, G.; Mariani, V.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Del Re, D.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, S.; Lee, S. W.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Brochero Cifuentes, J. A.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Lee, H.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Goh, J.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Ali, M. A. B. Md; Mohamad Idris, F.; Abdullah, W. A. T. Wan; Yusli, M. 
N.; Zolkapli, Z.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Magaña Villalba, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Carpinteyro, S.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. A.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Silva, C. Beirão Da Cruz E.; Calpas, B.; Di Francesco, A.; Faccioli, P.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. V.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Chtchipounov, L.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Murzin, V.; Oreshkin, V.; Sulimov, V.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Aushev, T.; Bylinkin, A.; Chadeeva, M.; Markin, O.; Tarkovskii, E.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Kaminskiy, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Blinov, V.; Skovpen, Y.; Shtol, D.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Erice, C.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Suárez Andrés, I.; Vischia, P.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Curras, E.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Baillon, P.; Ball, A. 
H.; Barney, D.; Bloch, P.; Bocci, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; Chen, Y.; Cimmino, A.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Di Marco, E.; Dobson, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fartoukh, S.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Girone, M.; Glege, F.; Gulhan, D.; Gundacker, S.; Guthoff, M.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kieseler, J.; Kirschenmann, H.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Kousouris, K.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Morovic, S.; Mulders, M.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Sauvan, J. B.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Sphicas, P.; Steggemann, J.; Stoye, M.; Takahashi, Y.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Veres, G. I.; Verweij, M.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Wiederkehr, S. A.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; De Cosa, A.; Donato, S.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Seitz, C.; Yang, Y.; Zucchetta, A.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chang, Y. H.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Paganis, E.; Psallidas, A.; Tsai, J. f.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Tali, B.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. 
R.; Williams, T.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Dunne, P.; Elwood, A.; Futyan, D.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Nash, J.; Nikitenko, A.; Pela, J.; Penning, B.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Scott, E.; Seez, C.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Bartek, R.; Dominguez, A.; Buccilli, A.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Garabedian, A.; Hakala, J.; Heintz, U.; Hogan, J. M.; Jesus, O.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Spencer, E.; Syarif, R.; Breedon, R.; Burns, D.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Shi, M.; Smith, J.; Squires, M.; Stolp, D.; Tos, K.; Tripathi, M.; Bachtis, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Weber, M.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Shrinivas, A.; Si, W.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Holzner, A.; Klein, D.; Krutelyov, V.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Sevilla, M. Franco; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mullin, S. D.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bendavid, J.; Bornheim, A.; Bunn, J.; Duarte, J.; Lawhorn, J. M.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Winn, D.; Abdullin, S.; Albrow, M.; Apollinari, G.; Apresyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Cremonesi, M.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hare, D.; Harris, R. 
M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Wu, Y.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Konigsberg, J.; Korytov, A.; Low, J. F.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Rank, D.; Shchutska, L.; Sperka, D.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Bein, S.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Kolberg, T.; Perry, T.; Prosper, H.; Santra, A.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Cavanaugh, R.; Chen, X.; Evdokimov, O.; Gerber, C. E.; Hangal, D. A.; Hofman, D. J.; Jung, K.; Kamin, J.; Sandoval Gonzalez, I. D.; Trauger, H.; Varelas, N.; Wang, H.; Wu, Z.; Zakaria, M.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Forthomme, L.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Sanders, S.; Stringer, R.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Jeng, G. Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Apyan, A.; Azzolini, V.; Barbieri, R.; Baty, A.; Bi, R.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Krajczar, K.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Tatar, K.; Velicanu, D.; Wang, J.; Wang, T. W.; Wyslouch, B.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Malta Rodrigues, A.; Monroy, J.; Siado, J. E.; Snow, G. 
R.; Stieger, B.; Alyari, M.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Rupprecht, N.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Lange, D.; Luo, J.; Marlow, D.; Medvedeva, T.; Mei, K.; Ojalvo, I.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Svyatkovskiy, A.; Tully, C.; Malik, S.; Barker, A.; Barnes, V. E.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Schulte, J. F.; Shi, X.; Sun, J.; Wang, F.; Xie, W.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Montalvo, R.; Nash, K.; Osherson, M.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Juska, E.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Damgov, J.; De Guio, F.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Hirosky, R.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Sturdy, J.; Zaleski, S.; Belknap, D. A.; Buchanan, J.; Caillol, C.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Hussain, U.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Pierro, G. A.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.

    2017-10-01

    The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic τ decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

  13. Particle-flow reconstruction and global event description with the CMS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, A.M.; et al.

    2017-10-06

    The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic tau decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

  14. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered-photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, in which all the computation is spent on photons that contribute to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
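
    A minimal sketch of the accept/reject step underlying such path-by-path sampling, in Python. The callables propose and log_prob are hypothetical placeholders, not part of gMMC: propose returns a candidate path with the forward and reverse proposal log-densities, and log_prob scores a path under the transport physics.

        import numpy as np

        def metropolis_hastings_paths(init_path, propose, log_prob, n_samples, rng=None):
            # Generic Metropolis-Hastings loop over whole photon paths: each
            # iteration proposes an entire path and accepts or rejects it, so the
            # chain visits paths with the correct relative probabilities.
            rng = rng or np.random.default_rng()
            path, samples = init_path, []
            for _ in range(n_samples):
                cand, log_q_fwd, log_q_bwd = propose(path, rng)
                log_alpha = (log_prob(cand) - log_prob(path)) + (log_q_bwd - log_q_fwd)
                if np.log(rng.random()) < log_alpha:  # accept with prob min(1, alpha)
                    path = cand
                samples.append(path)
            return samples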

  15. A new method for shape and texture classification of orthopedic wear nanoparticles.

    PubMed

    Zhang, Dongning; Page, Janet R; Kavanaugh, Aaron E; Billi, Fabrizio

    2012-09-27

    Detailed morphologic analysis of particles produced during wear of orthopedic implants is important in determining a correlation among material, wear, and biological effects. However, the use of simple shape descriptors is insufficient to categorize the data and to compare the nature of wear particles generated by different implants. An approach based on the Discrete Fourier Transform (DFT) is presented for describing particle shape and surface texture. Four metal-on-metal bearing couples were tested in an orbital wear simulator under standard and adverse (steep-angled cup) conditions. Digitized Scanning Electron Microscope (SEM) images of the wear particles were imported into MATLAB to carry out Fourier descriptor calculations via a specifically developed algorithm. The descriptors were then used for studying particle characteristics (shape and texture) as well as for cluster classification. Analysis of the particles demonstrated the validity of the proposed model by showing that steep-angle Co-Cr wear particles were more asymmetric, compressed, extended, triangular, square, and roughened at 3 Mc than after 0.25 Mc. In contrast, particles from standard-angle samples were only more compressed and extended after 3 Mc compared to 0.25 Mc. Cluster analysis revealed that the 0.25 Mc steep-angle particle distribution was a subset of the 3 Mc distribution.
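
    A minimal sketch of shape description via Fourier descriptors, assuming the particle boundary has already been extracted from the SEM image as an ordered array of (x, y) points; this is the generic construction, not the paper's specific MATLAB algorithm.

        import numpy as np

        def fourier_descriptors(contour, n_desc=16):
            # contour: (N, 2) array of ordered boundary points of one wear particle.
            z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
            coeffs = np.fft.fft(z)
            coeffs[0] = 0.0                          # drop DC term: translation invariance
            coeffs /= np.abs(coeffs[1])              # normalize: scale invariance
            # Magnitudes discard phase: rotation and start-point invariance.
            return np.abs(coeffs[1:n_desc + 1])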

  16. A particle swarm optimization variant with an inner variable learning strategy.

    PubMed

    Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin

    2014-01-01

    Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps each particle inspect the relations among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping-out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping-out strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
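
    For context, a minimal sketch of the canonical global-best PSO loop that variants such as PSO-IVL build on; the inner variable learning and trap-escape operators described above would act on top of these updates. Parameter values are illustrative.

        import numpy as np

        def pso_minimize(f, dim, n_particles=30, iters=200,
                         w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
            rng = np.random.default_rng()
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros_like(x)                          # velocities
            pbest = x.copy()                              # personal bests
            pbest_val = np.apply_along_axis(f, 1, x)
            gbest = pbest[np.argmin(pbest_val)].copy()    # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = np.apply_along_axis(f, 1, x)
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest_val.min()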

  17. Revisiting Antarctic Ozone Depletion

    NASA Astrophysics Data System (ADS)

    Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf

    2015-04-01

    Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, some details still need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and of crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS), employing a recently developed algorithm for saturation-dependent NAT nucleation, for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion; in particular, the sensitivity with respect to particle type has been investigated. If temperatures are artificially forced to allow only cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box-model simulations, which indicates the importance of mixing processes.

  18. Implementation of a 3D version of ponderomotive guiding center solver in particle-in-cell code OSIRIS

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2016-10-01

    Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are computationally very expensive due to the disparity of the relevant scales, ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten-centimeter range. To bridge these disparate scales, the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only scales larger than the plasma wavelength need to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of a 3D version of the PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization, and we also discuss the stability of the PGC solver.

  19. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks

    PubMed Central

    Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-01-01

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, particle effectiveness is enhanced, avoiding the “particle degeneracy” problem and improving localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252

  20. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, particle effectiveness is enhanced, avoiding the "particle degeneracy" problem and improving localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
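
    A minimal sketch of the bootstrap particle filter skeleton that such a scheme builds on, with multinomial resampling to counter weight degeneracy. The transition and likelihood callables are hypothetical placeholders; in the paper the observation model is the LSSVR-based function rather than a fixed analytic likelihood.

        import numpy as np

        def bootstrap_particle_filter(observations, transition, likelihood,
                                      init_particles, rng=None):
            rng = rng or np.random.default_rng()
            particles = init_particles                     # (n, state_dim) array
            n = len(particles)
            estimates = []
            for y in observations:
                particles = transition(particles, rng)     # propagate dynamics
                w = likelihood(y, particles)               # weight by observation
                w /= w.sum()
                estimates.append(w @ particles)            # weighted state estimate
                idx = rng.choice(n, n, p=w)                # multinomial resampling
                particles = particles[idx]                 # fights degeneracy
            return np.array(estimates)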

  1. Rare Event Simulation in Radiation Transport

    NASA Astrophysics Data System (ADS)

    Kollman, Craig

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable that is, with overwhelming probability, equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare-event calculations. Here, a new set of probabilities is used in the simulation runs, and the results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt is made to generalize this algorithm to a continuous state space by partitioning the space into a finite number of cells. A tradeoff between additional computation per iteration and variance reduction per iteration arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
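
    A minimal, self-contained illustration of the likelihood-ratio reweighting described above, estimating the rare-event probability P(X > 5) for a standard normal; the shifted proposal is an assumption chosen for this toy example, not the dissertation's neutron-transport measure.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 200_000
        xs = rng.normal(5.0, 1.0, n)                 # sample proposal q = N(5, 1)
        lr = norm.pdf(xs) / norm.pdf(xs, loc=5.0)    # likelihood ratio p(x)/q(x)
        est = np.mean((xs > 5.0) * lr)               # unbiased estimate of P(X > 5)
        print(est, 1.0 - norm.cdf(5.0))              # both around 2.9e-7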

  2. A Radiation Chemistry Code Based on the Greens Functions of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Ionizing radiation produces several radiolytic species, such as •OH, e⁻aq, and H•, when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on DNA damage by ionizing radiation remain, notably on the indirect effect, i.e. the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions in time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form; therefore, the radial Green's function, which is much simpler, is often used. Hence, much effort has been devoted to the sampling of the radial Green's functions, for which we have developed a sampling algorithm. This algorithm only yields the inter-particle distance vector length after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help in understanding the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.
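
    A minimal sketch of the free-diffusion limit of such a propagation step, advancing the inter-particle separation vector by a Gaussian displacement and reporting the deviation angle of the vector; drawing the separation length from the exact or radial Green's function for partially diffusion-controlled reactions would replace the Gaussian draw.

        import numpy as np

        def free_diffusion_step(r_vec, d_rel, dt, rng):
            # r_vec: current 3D inter-particle separation; d_rel = D1 + D2 is the
            # mutual diffusion coefficient. Free diffusion gives a Gaussian step.
            sigma = np.sqrt(2.0 * d_rel * dt)
            r_new = r_vec + rng.normal(0.0, sigma, size=3)
            cos_dev = r_vec @ r_new / (np.linalg.norm(r_vec) * np.linalg.norm(r_new))
            return r_new, cos_dev   # new separation and cosine of deviation angle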

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes, such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic-resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short-pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free, differential algebra-based, multiple-level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space-charge-dominated photoemission processes.
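
    For scale, a minimal O(n^2) direct-summation evaluation of the pairwise Coulomb field, the computation that a fast multipole method replaces with O(n) hierarchical clustering; units and the softening parameter are illustrative.

        import numpy as np

        def coulomb_field_direct(pos, q, soft=1e-12):
            # pos: (n, 3) positions [m]; q: (n,) charges [C]. Returns the field
            # at each particle from all others: E_i = k * sum_j q_j d_ij / |d_ij|^3.
            d = pos[:, None, :] - pos[None, :, :]          # pairwise separations
            r2 = (d ** 2).sum(axis=-1) + soft              # softened squared distances
            np.fill_diagonal(r2, np.inf)                   # exclude self-interaction
            k = 8.9875517923e9                             # Coulomb constant [N m^2 C^-2]
            return k * (q[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)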

  4. LAMMPS strong scaling performance optimization on Blue Gene/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using anmore » 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.« less

  5. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained, for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. To increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
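
    A minimal sketch of one stochastic (Gillespie-type) realization of coalescence in a finite system, assuming a monodisperse initial condition; averaging many such realizations approximates the probability distribution evolved by the multivariate master equation. The O(n^2) pair enumeration is written for clarity, not speed.

        import numpy as np

        def coalescence_realization(n0, kernel, t_end, rng=None):
            # kernel(mi, mj): coagulation kernel; state is the list of masses.
            rng = rng or np.random.default_rng()
            masses = [1.0] * n0
            t = 0.0
            while t < t_end and len(masses) > 1:
                n = len(masses)
                pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
                rates = np.array([kernel(masses[i], masses[j]) for i, j in pairs])
                total = rates.sum()
                if total <= 0.0:
                    break
                t += rng.exponential(1.0 / total)      # waiting time to next event
                i, j = pairs[rng.choice(len(pairs), p=rates / total)]
                masses[i] += masses[j]                 # coalesce the chosen pair
                del masses[j]
            return masses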

  6. Automated classification of single airborne particles from two-dimensional angle-resolved optical scattering (TAOS) patterns by non-linear filtering

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni Franco; Pan, Yong-Le; Aptowicz, Kevin B.; Casati, Caterina; Pinnick, Ronald G.; Chang, Richard K.; Videen, Gorden W.

    2013-12-01

    Measurement of two-dimensional angle-resolved optical scattering (TAOS) patterns is an attractive technique for detecting and characterizing micron-sized airborne particles. In general, the interpretation of these patterns and the retrieval of the particle refractive index, shape or size alone, are difficult problems. By reformulating the problem in statistical learning terms, a solution is proposed herewith: rather than identifying airborne particles from their scattering patterns, TAOS patterns themselves are classified through a learning machine, where feature extraction interacts with multivariate statistical analysis. Feature extraction relies on spectrum enhancement, which includes the discrete cosine Fourier transform and non-linear operations. Multivariate statistical analysis includes computation of the principal components and supervised training, based on the maximization of a suitable figure of merit. All algorithms have been combined together to analyze TAOS patterns, organize feature vectors, design classification experiments, carry out supervised training, assign unknown patterns to classes, and fuse information from different training and recognition experiments. The algorithms have been tested on a data set with more than 3000 TAOS patterns. The parameters that control the algorithms at different stages have been allowed to vary within suitable bounds and are optimized to some extent. Classification has been targeted at discriminating aerosolized Bacillus subtilis particles, a simulant of anthrax, from atmospheric aerosol particles and interfering particles, like diesel soot. By assuming that all training and recognition patterns come from the respective reference materials only, the most satisfactory classification result corresponds to 20% false negatives from B. subtilis particles and <11% false positives from all other aerosol particles. The most effective operations have consisted of thresholding TAOS patterns in order to reject defective ones, and forming training sets from three or four pattern classes. The presented automated classification method may be adapted into a real-time operation technique, capable of detecting and characterizing micron-sized airborne particles.
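
    A minimal sketch of a feature-extraction and classification chain in the spirit described above: a 2-D discrete cosine transform, non-linear log compression of the spectrum, principal components, and a supervised classifier. The support vector classifier stands in for the paper's figure-of-merit training, and the array names are illustrative.

        import numpy as np
        from scipy.fft import dctn
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def spectrum_features(pattern, k=24, eps=1e-6):
            # Spectrum enhancement: 2-D DCT, log-compressed (a non-linear
            # operation), truncated to the k x k low-frequency block.
            spec = np.log(np.abs(dctn(pattern, norm='ortho')) + eps)
            return spec[:k, :k].ravel()

        # patterns: (n, H, W) array of TAOS images; labels: class per pattern.
        # X = np.array([spectrum_features(p) for p in patterns])
        # clf = make_pipeline(PCA(n_components=30), SVC()).fit(X, labels)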

  7. Convergence of the Ponderomotive Guiding Center approximation in the LWFA

    NASA Astrophysics Data System (ADS)

    Silva, Thales; Vieira, Jorge; Helm, Anton; Fonseca, Ricardo; Silva, Luis

    2017-10-01

    Plasma accelerators have arisen as potential candidates for future accelerator technology over the last few decades because of their predicted compactness and low cost. One of the proposed designs for plasma accelerators is based on Laser Wakefield Acceleration (LWFA). However, simulations performed for such systems have to resolve the laser wavelength, which is orders of magnitude smaller than the plasma wavelength. In this context, the Ponderomotive Guiding Center (PGC) algorithm for particle-in-cell (PIC) simulations is a potent tool. The laser is approximated by its envelope, which leads to a speedup of around 100 times because the laser wavelength is not resolved. The plasma response is well understood, and comparisons with the full PIC code show excellent agreement. However, for LWFA, the convergence of the self-injected beam parameters, such as energy and charge, has not been studied before and is of vital importance for using the algorithm to predict beam parameters. Our goal is a thorough investigation of the stability and convergence of the algorithm in situations of experimental relevance for LWFA. To this end, we perform simulations using the PGC algorithm implemented in the PIC code OSIRIS. To verify the PGC predictions, we compare the results with full PIC simulations. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No 653782.

  8. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black-box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
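
    A minimal sketch of the negatively correlated driving noise, assuming the pairing is realized by pushing antithetic uniforms u and 1 - u through the Poisson inverse CDF (one standard construction; the paper's exact pairing may differ). Retrofitting a tau-leaping code then amounts to replacing each of its Poisson draws with one member of a pair.

        import numpy as np
        from scipy.stats import poisson

        def antithetic_poisson_pairs(lam, size, rng):
            # Antithetic uniforms through the inverse CDF give two negatively
            # correlated Poisson streams; averaging the two resulting tau-leap
            # trajectories stays unbiased while reducing the variance.
            u = np.clip(rng.random(size), 1e-12, 1.0 - 1e-12)
            return (poisson.ppf(u, lam).astype(int),
                    poisson.ppf(1.0 - u, lam).astype(int))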

  9. Particle-flow reconstruction and global event description with the CMS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.

    The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic tau decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. In conclusion, the data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

  10. Automated Proton Track Identification in MicroBooNE Using Gradient Boosted Decision Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodruff, Katherine

    MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino experiment that is currently running in the Booster Neutrino Beam at Fermilab. LArTPC technology allows for high-resolution, three-dimensional representations of neutrino interactions. A wide variety of software tools for automated reconstruction and selection of particle tracks in LArTPCs are actively being developed. Short, isolated proton tracks, the signal for low-momentum-transfer neutral current (NC) elastic events, are easily hidden in a large cosmic background. Detecting these low-energy tracks will allow us to probe interesting regions of the proton's spin structure. An effective method for selecting NC elastic events is to combine a highly efficient track reconstruction algorithm to find all candidate tracks with highly accurate particle identification using a machine learning algorithm. We present our work on particle track classification using gradient tree boosting software (XGBoost) and its performance on simulated neutrino data.
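
    A sketch of the classification stage, assuming the xgboost and scikit-learn packages; the features (track length, mean dE/dx, straightness) and the labels are hypothetical placeholders, not the analysis' actual reconstruction variables.

      # Gradient-boosted-tree track classification sketch. The features and
      # labels are random stand-ins, so the reported accuracy is chance level;
      # with real reconstruction variables the same code structure applies.
      import numpy as np
      import xgboost as xgb
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      n = 5000
      X = np.column_stack([
          rng.exponential(20.0, n),   # track length [cm] (placeholder)
          rng.normal(2.5, 1.0, n),    # mean dE/dx [MeV/cm] (placeholder)
          rng.random(n),              # straightness score (placeholder)
      ])
      y = rng.integers(0, 2, n)       # 1 = proton, 0 = other (stand-in labels)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                random_state=0)
      clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
      clf.fit(X_tr, y_tr)
      print("test accuracy:", (clf.predict(X_te) == y_te).mean())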

  11. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, ABC still has some limitations: for example, it can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal structure. We therefore propose an enhanced ABC algorithm, called EABC, that introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. Simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.
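
    For reference, a minimal sketch of the basic Karaboga ABC scheme (employed, onlooker, and scout phases) that EABC extends; it omits the paper's self-adaptive searching strategy and immune network operators, and all control parameters below are illustrative.

      # Basic ABC for continuous minimization: employed bees refine food
      # sources, onlookers reinforce good ones, scouts replace exhausted ones.
      import numpy as np

      def abc_minimize(f, dim, bounds=(-5.0, 5.0), n_food=20, limit=50,
                       iters=500):
          rng = np.random.default_rng(0)
          lo, hi = bounds
          foods = rng.uniform(lo, hi, (n_food, dim))
          fit = np.array([f(x) for x in foods])
          trials = np.zeros(n_food, dtype=int)

          def try_neighbor(i):
              k = rng.integers(n_food - 1)
              k = k + (k >= i)                        # random partner != i
              j = rng.integers(dim)
              cand = foods[i].copy()
              cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
              cand = np.clip(cand, lo, hi)
              fc = f(cand)
              if fc < fit[i]:
                  foods[i], fit[i], trials[i] = cand, fc, 0
              else:
                  trials[i] += 1

          for _ in range(iters):
              for i in range(n_food):                 # employed bee phase
                  try_neighbor(i)
              p = fit.max() - fit + 1e-12             # favor low fitness
              p /= p.sum()
              for i in rng.choice(n_food, n_food, p=p):   # onlooker phase
                  try_neighbor(i)
              worn = trials > limit                   # scout phase
              foods[worn] = rng.uniform(lo, hi, (worn.sum(), dim))
              fit[worn] = [f(x) for x in foods[worn]]
              trials[worn] = 0
          best = fit.argmin()
          return foods[best], fit[best]

      x, v = abc_minimize(lambda x: np.sum(x**2), dim=5)
      print(x, v)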

  12. Evolutionary Beamforming Optimization for Radio Frequency Charging in Wireless Rechargeable Sensor Networks.

    PubMed

    Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun

    2017-08-20

    This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node using beamforming technology. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to near-optimally minimize the power ratio of the UCA beamforming peak side lobe (PSL) to the main lobe (ML) aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority.
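
    The figure of merit being minimized can be reproduced for a fixed steering rule. Below is a sketch, assuming conjugate-phase steering of a 12-element UCA of radius λ, that computes the azimuth power pattern and a crude PSL-to-ML power ratio; the EBO/EBO-R weight encodings themselves are not reproduced.

      # Azimuth pattern of a 12-element uniform circular array of radius
      # lambda, conjugate-phase steered to phi0, and its PSL/ML power ratio.
      import numpy as np

      lam = 1.0
      R, N = lam, 12
      k = 2 * np.pi / lam
      phi_n = 2 * np.pi * np.arange(N) / N           # element angles on circle
      phi0 = 0.0                                     # target direction
      w = np.exp(-1j * k * R * np.cos(phi0 - phi_n)) # conjugate steering

      phi = np.linspace(-np.pi, np.pi, 3601)
      AF = np.abs((w[None, :] *
                   np.exp(1j * k * R * np.cos(phi[:, None] - phi_n))).sum(axis=1))
      P = AF**2

      main = P.argmax()
      # peak side lobe: largest local maximum away from the main lobe
      is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
      peaks = np.where(is_peak)[0] + 1
      side = peaks[np.abs(phi[peaks] - phi[main]) > np.radians(15)]  # crude cut
      print("PSL/ML power ratio:", P[side].max() / P[main])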

  13. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, ABC still has some limitations: for example, it can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal structure. We therefore propose an enhanced ABC algorithm, called EABC, that introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. Simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  14. Generating heavy particles with energy and momentum conservation

    NASA Astrophysics Data System (ADS)

    Mereš, Michal; Melo, Ivan; Tomášik, Boris; Balek, Vladimír; Černý, Vladimír

    2011-12-01

    We propose a novel algorithm, called REGGAE, for the generation of momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions, where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions which lead to production of configurations with reasonably large weights.

    Program summary
    Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events)
    Catalogue identifier: AEJR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1523
    No. of bytes in distributed program, including test data, etc.: 9608
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC Pentium 4, though no particular tuning for this machine was performed.
    Operating system: Originally designed on a Linux PC with g++, but it has been compiled and run successfully on OS X with g++ and on MS Windows with Microsoft Visual C++ 2008 Express Edition as well.
    RAM: Depends on the number of particles generated. For 10 particles, as in the attached example, it requires about 120 kB.
    Classification: 11.2
    Nature of problem: Generate momenta of a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space.
    Solution method: In general, the algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm, in which particle production is modeled as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled: each particle undergoes a collision with some other partner such that in the pair center-of-mass system the new directions of the momenta are distributed isotropically. After each particle collides only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure, but it improves the performance.
    Running time: Depends on the number of particles and the number of events to be generated. On a Linux PC with a 2 GHz processor, generation of 1000 events with 10 particles each takes about 3 s.
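
    The GENBOD stage builds on isotropic two-body decays. As a minimal sketch of this building block (not of REGGAE's rescattering loop), the daughter momentum follows from the Källén triangle function and the direction is drawn isotropically:

      # Isotropic two-body decay M -> m1 + m2 in the parent rest frame.
      import numpy as np

      def two_body_decay(M, m1, m2, rng):
          """Return the two daughter four-momenta (E, px, py, pz)."""
          lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)  # Kallen function
          p = np.sqrt(lam) / (2.0 * M)
          cos_t = rng.uniform(-1.0, 1.0)               # isotropic direction
          phi = rng.uniform(0.0, 2.0 * np.pi)
          sin_t = np.sqrt(1.0 - cos_t**2)
          n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
          e1, e2 = np.hypot(p, m1), np.hypot(p, m2)    # on-shell energies
          return np.array([e1, *(p * n)]), np.array([e2, *(-p * n)])

      rng = np.random.default_rng(3)
      p1, p2 = two_body_decay(M=1.0, m1=0.14, m2=0.14, rng=rng)
      print(p1 + p2)   # ~ (1, 0, 0, 0): energy and momentum conserved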

  15. Improved discrete swarm intelligence algorithms for endmember extraction from hyperspectral remote sensing images

    NASA Astrophysics Data System (ADS)

    Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing

    2016-10-01

    Endmember extraction is a key step in hyperspectral unmixing. A new framework is proposed for hyperspectral endmember extraction. The proposed approach is based on swarm intelligence (SI) algorithms, with a discretization step that adapts the SI algorithms to the problem, because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that generally belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.

  16. Predictability of the Lagrangian Motion in the Upper Ocean

    NASA Astrophysics Data System (ADS)

    Piterbarg, L. I.; Griffa, A.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.

    2001-12-01

    The complex non-linear dynamics of the upper ocean lead to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated as a generalization of the well known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters, enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other, and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and the Adriatic Sea showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale of the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
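
    The single-particle "random flight" building block is an Ornstein-Uhlenbeck velocity process with decorrelation time T and variance σ². A minimal Euler-Maruyama sketch with illustrative parameters, not values fitted to drifter data:

      # Random flight model: OU velocity fluctuation integrated to a position.
      import numpy as np

      T, sigma2, dt, nsteps = 2.0, 1.0, 0.01, 5000   # illustrative units
      rng = np.random.default_rng(5)
      u = np.zeros(2)            # velocity fluctuation (u, v)
      x = np.zeros(2)            # position
      for _ in range(nsteps):
          u += -u / T * dt + np.sqrt(2.0 * sigma2 * dt / T) * \
               rng.standard_normal(2)
          x += u * dt            # add the mean current here when it is known
      print("final position:", x)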

  17. Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu

    2017-01-01

    In cognitive radio (CR) systems, reasonable power allocation can increase the transmission rate of CR users, or secondary users (SUs), as much as possible while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm, which combines the advantages of the conventional artificial fish swarm (AFS) algorithm and the particle swarm optimisation (PSO) algorithm. Simulated performance comparisons with other intelligent algorithms illustrate the superiority of the IAFS algorithm; this superiority translates into better performance of our proposed scheme than that of the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, our proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other works mentioned.

  18. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size is proposed. The algorithm is based on the swarm intelligence of a wolf pack and simulates the predation behavior and prey allocation of wolves. It possesses three intelligent behaviors: migration, summoning, and siege. A "winner-take-all" competition rule and a "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines strategies of self-adaptive variable step-size search and chaos optimization. The CWOA is applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results are compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.

  19. Ensemble of hybrid genetic algorithm for two-dimensional phase unwrapping

    NASA Astrophysics Data System (ADS)

    Balakrishnan, D.; Quan, C.; Tay, C. J.

    2013-06-01

    Phase unwrapping is the final and trickiest step in any phase retrieval technique. Phase unwrapping by artificial intelligence methods (optimization algorithms) such as the hybrid genetic algorithm, reverse simulated annealing, particle swarm optimization, and minimum cost matching has shown better results than conventional phase unwrapping methods. In this paper, an ensemble of hybrid genetic algorithms with parallel populations is proposed to solve the branch-cut phase unwrapping problem. In a single-population hybrid genetic algorithm, the selection, cross-over, and mutation operators are applied to obtain a new population in every generation. The parameters and the choice of operators affect the performance of the hybrid genetic algorithm. The ensemble of hybrid genetic algorithms allows different parameter sets and different choices of operators to be used simultaneously: each population uses its own set of parameters, and the offspring of each population compete against the offspring of all other populations, which use different sets of parameters. The effectiveness of the proposed algorithm is demonstrated by phase unwrapping examples, and the advantages of the proposed method are discussed.

  20. A comparison of various algorithms to extract Magic Formula tyre model coefficients for vehicle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.

    2015-02-01

    Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to complex sets of formulae. Among all the steady-state tyre models in use today, the Magic Formula tyre model is unique and the most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either experimental or simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is least-squares minimisation, which requires considerable experience for the initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, the sensitivity of the parameters, and the features of the input data, such as the number of points, noisy data, and the experimental procedure used (for example, a slip angle sweep or the tyre measurement (TIME) procedure). The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data set. The nature of the input data and the type of algorithm decide the set of Magic Formula tyre model coefficients.
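
    As a minimal illustration of the extraction problem, the four primary coefficients of the basic lateral-force Magic Formula, y = D sin(C arctan(Bx - E(Bx - arctan(Bx)))), can be fitted with a trust-region least-squares solver. The synthetic data and the initial guess below are assumptions, and real fits involve many more coefficients:

      # Least-squares extraction of B, C, D, E from synthetic slip-angle data.
      import numpy as np
      from scipy.optimize import curve_fit

      def magic_formula(alpha, B, C, D, E):
          Ba = B * alpha
          return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

      alpha = np.linspace(-0.3, 0.3, 61)              # slip angle [rad]
      true = (8.0, 1.4, 4000.0, -0.2)                 # hypothetical "tyre"
      rng = np.random.default_rng(2)
      Fy = magic_formula(alpha, *true) + rng.normal(0, 50.0, alpha.size)

      p0 = (10.0, 1.5, 3500.0, 0.0)                   # initial guess matters
      popt, _ = curve_fit(magic_formula, alpha, Fy, p0=p0, method='trf')
      print("fitted B, C, D, E:", popt)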

  1. Smoldyn: particle-based simulation with rule-based modeling, improved molecular interaction and a library interface.

    PubMed

    Andrews, Steven S

    2017-03-01

    Smoldyn is a spatial and stochastic biochemical simulator. It treats each molecule of interest as an individual particle in continuous space, simulating molecular diffusion, molecule-membrane interactions, and chemical reactions, all with good accuracy. This article presents several new features. Smoldyn now supports two types of rule-based modeling: a wildcard method, which is very convenient, and the BioNetGen package with extensions for spatial simulation, which is better for complicated models. Smoldyn also includes new algorithms for simulating the diffusion of surface-bound molecules and molecules with excluded volume. Both are exact in the limit of short time steps and reasonably good with longer steps. In addition, Smoldyn supports single-molecule tracking simulations. Finally, the Smoldyn source code can be accessed through a C/C++ language library interface. Smoldyn software, documentation, code, and examples are at http://www.smoldyn.org.
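
    The update underlying particle-based simulators of this kind is the Gaussian Brownian step with per-coordinate standard deviation sqrt(2DΔt). The sketch below shows only that textbook rule; it is not Smoldyn's interface, which is driven by configuration files and the C/C++ library API described above.

      # Brownian dynamics step for point particles with diffusion constant D.
      import numpy as np

      D, dt, nsteps = 1.0, 1e-3, 1000       # um^2/s, s (illustrative)
      rng = np.random.default_rng(4)
      pos = np.zeros((100, 3))              # 100 molecules in 3-D
      for _ in range(nsteps):
          pos += rng.normal(0.0, np.sqrt(2.0 * D * dt), pos.shape)
      print("mean squared displacement:", (pos**2).sum(axis=1).mean(),
            "expected:", 6.0 * D * dt * nsteps)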

  2. A discrete element and ray framework for rapid simulation of acoustical dispersion of microscale particulate agglomerations

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.

    2016-03-01

    In industry, particle-laden fluids, such as particle-functionalized inks, are constructed by adding fine-scale particles to a liquid solution in order to achieve desired overall properties in both the liquid and (cured) solid states. However, undesirable particulate agglomerations often arise due to some form of mutual attraction stemming from near-field forces, stray electrostatic charges, process ionization, and mechanical adhesion. For proper operation of industrial processes involving particle-laden fluids, it is important to carefully break up and disperse these agglomerations. One approach is to target high-frequency acoustical pressure-pulses at such agglomerations to break them up. The objective of this paper is to develop a computational model and corresponding solution algorithm to enable rapid simulation of the effect of acoustical pulses on an agglomeration composed of a collection of discrete particles. Because of the complex agglomeration microstructure, containing gaps and interfaces, this type of system is extremely difficult to mesh and simulate using continuum-based methods, such as the finite difference time domain method or the finite element method. Accordingly, a computationally amenable discrete element/discrete ray model is developed which captures the primary physical events in this process, such as the reflection and absorption of acoustical energy, and the induced forces on the particulate microstructure. The approach utilizes a staggered, iterative solution scheme to calculate the power transfer from the acoustical pulse to the particles and the subsequent changes (breakup) of the pulse due to the particles. Three-dimensional examples are provided to illustrate the approach.

  3. Fortran interface layer of the framework for developing particle simulator FDPS

    NASA Astrophysics Data System (ADS)

    Namekata, Daisuke; Iwasawa, Masaki; Nitadori, Keigo; Tanikawa, Ataru; Muranushi, Takayuki; Wang, Long; Hosono, Natsuki; Nomura, Kentaro; Makino, Junichiro

    2018-06-01

    Numerical simulations based on particle methods have been widely used in various fields including astrophysics. To date, various versions of simulation software have been developed by individual researchers or research groups in each field, at a huge cost in time and effort, even though the numerical algorithms used are very similar. To improve the situation, we have developed a framework, called FDPS (Framework for Developing Particle Simulators), which enables researchers to easily develop massively parallel particle simulation codes for arbitrary particle methods. Until version 3.0, FDPS provided an API (application programming interface) for the C++ programming language only. This limitation comes from the fact that FDPS is developed using the template feature of C++, which is essential to support arbitrary particle data types. However, many researchers use Fortran to develop their codes, and the previous versions of FDPS required them to invest considerable time in learning C++, which is inefficient. To cope with this problem, we developed a Fortran interface layer for FDPS, which provides an API for Fortran. To support arbitrary particle data types in Fortran, the interface layer is designed as follows: based on a given Fortran derived data type representing a particle, a Python script provided by us automatically generates a library that manipulates the C++ core part of FDPS. This library is seen from the Fortran side as a Fortran module providing the FDPS API, and it internally uses C programs to interoperate Fortran with C++. In this way, we have overcome several technical issues in emulating a `template' in Fortran. Using the Fortran interface, users can develop all parts of their codes in Fortran. We show that the overhead of the Fortran interface part is sufficiently small and that a code written in Fortran shows performance practically identical to one written in C++.

  4. Lorentz boosted frame simulation technique in Particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng

    In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are now comparable. This technique has the potential to reduce the computational needs of a LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This leads to unphysical noise that dwarfs the real physical processes. In this dissertation, we first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss the methods to eliminate them. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s). The NCI for this algorithm is analyzed, and it is shown that the NCI can be eliminated using the same strategies that were used for the hybrid FFT/Finite Difference solver. This scheme also requires a current correction and filtering which require FFTs. However, we show that in this case the FFTs can be done locally on each parallel partition. We also describe how the use of the hybrid FFT/Finite Difference or the hybrid higher order finite difference/second order finite difference methods permit combining the Lorentz boosted frame simulation technique with another "speed up" technique, called the quasi-3D algorithm, to gain unprecedented speed up for the LWFA simulations. In the quasi-3D algorithm the fields and currents are defined on an r--z PIC grid and expanded in azimuthal harmonics. The expansion is truncated with only a few modes so it has similar computational needs of a 2D r--z PIC code. We show that NCI has similar properties in r--z as in z-x slab geometry and show that the same strategies for eliminating the NCI in Cartesian geometry can be effective for the quasi-3D algorithm leading to the possibility of unprecedented speed up. We also describe a new code called UPIC-EMMA that is based on fully spectral (FFT) solver. The new code includes implementation of a moving antenna that can launch lasers in the boosted frame. We also describe how the new hybrid algorithms were implemented into OSIRIS. Examples of LWFA using the boosted frame using both UPIC-EMMA and OSIRIS are given, including the comparisons against the lab frame results. 
We also describe how to efficiently obtain the boosted frame simulation data needed to generate the transformed lab frame data, as well as how to use a moving window in the boosted frame. The NCI is also a major issue for modeling relativistic shocks with the PIC algorithm: in relativistic shock simulations, two counter-propagating plasmas drifting at relativistic speeds collide with each other. We show that the strategies for eliminating the NCI developed in this dissertation enable such simulations to run for much longer simulation times, which should open a path to major advances in relativistic shock research. (Abstract shortened by ProQuest.)

  5. Charged-particle emission tomography

    PubMed Central

    Ding, Yijun; Caucci, Luca; Barrett, Harrison H.

    2018-01-01

    Purpose: Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Methods: Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Results: Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. Conclusions: We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. PMID:28370094

  6. Charged-particle emission tomography.

    PubMed

    Ding, Yijun; Caucci, Luca; Barrett, Harrison H

    2017-06-01

    Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs).

  7. Two-way coupling of magnetohydrodynamic simulations with embedded particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Makwana, K. D.; Keppens, R.; Lapenta, G.

    2017-12-01

    We describe a method for coupling an embedded domain in a magnetohydrodynamic (MHD) simulation with a particle-in-cell (PIC) method. In this two-way coupling we follow the work of Daldorff et al. (2014) [19] in which the PIC domain receives its initial and boundary conditions from MHD variables (MHD to PIC coupling) while the MHD simulation is updated based on the PIC variables (PIC to MHD coupling). This method can be useful for simulating large plasma systems, where kinetic effects captured by particle-in-cell simulations are localized but affect global dynamics. We describe the numerical implementation of this coupling, its time-stepping algorithm, and its parallelization strategy, emphasizing the novel aspects of it. We test the stability and energy/momentum conservation of this method by simulating a steady-state plasma. We test the dynamics of this coupling by propagating plasma waves through the embedded PIC domain. Coupling with MHD shows satisfactory results for the fast magnetosonic wave, but significant distortion for the circularly polarized Alfvén wave. Coupling with Hall-MHD shows excellent coupling for the whistler wave. We also apply this methodology to simulate a Geospace Environmental Modeling (GEM) challenge type of reconnection with the diffusion region simulated by PIC coupled to larger scales with MHD and Hall-MHD. In both these cases we see the expected signatures of kinetic reconnection in the PIC domain, implying that this method can be used for reconnection studies.

  8. GRAPE-6A: A Single-Card GRAPE-6 for Parallel PC-GRAPE Cluster Systems

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Makino, Junichiro; Kawai, Atsushi

    2005-12-01

    In this paper, we describe the design and performance of GRAPE-6A, a special-purpose computer for gravitational many-body simulations. It was designed to be used with a PC cluster, in which each node has one GRAPE-6A. Such a configuration is particularly cost-effective in running parallel tree algorithms. Though the use of parallel tree algorithms was possible with the original GRAPE-6 hardware, it was not very cost-effective since a single GRAPE-6 board was still too fast and too expensive. Therefore, we designed GRAPE-6A as a single PCI card to minimize the reproduction cost and to optimize the computing speed. The peak performance is 130 Gflops for one GRAPE-6A board and 3.1 Tflops for our 24 node cluster. We describe the implementation of the tree, TreePM and individual timestep algorithms on both a single GRAPE-6A system and GRAPE-6A cluster. Using the tree algorithm on our 16-node GRAPE-6A system, we can complete a collisionless simulation with 100 million particles (8000 steps) within 10 days.

  9. Scalable load balancing for massively parallel distributed Monte Carlo particle transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M. J.; Brantley, P. S.; Joy, K. I.

    2013-07-01

    In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory. (authors)
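
    The flavor of iterated processor-pair-wise balancing can be demonstrated with a toy hypercube pairing: in round r each rank averages its load with partner rank XOR 2^r, and global balance is reached after log2(N) rounds. This mimics the O(log N) structure only, not the paper's Monte Carlo particle bookkeeping.

      # Toy demonstration: recursive pair-wise averaging over hypercube
      # neighbors drives every rank's workload to the global mean in
      # log2(N) rounds.
      import numpy as np

      N = 1 << 20                               # 1,048,576 simulated ranks
      rng = np.random.default_rng(6)
      work = rng.exponential(100.0, N)          # imbalanced particle counts

      for r in range(int(np.log2(N))):
          partner = np.arange(N) ^ (1 << r)     # pair rank i with i XOR 2^r
          work = 0.5 * (work + work[partner])   # each pair splits its load

      print("spread after balancing:", work.max() - work.min())  # ~0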

  10. Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.

    PubMed

    Chami, Malik; Robilliard, Denis

    2002-10-20

    A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with a neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than that of traditional techniques such as band ratio algorithms. The application of GP to real satellite data [from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the retrieval error of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.

  11. Fast multipole methods on a cluster of GPUs for the meshless simulation of turbulence

    NASA Astrophysics Data System (ADS)

    Yokota, R.; Narumi, T.; Sakamaki, R.; Kameoka, S.; Obi, S.; Yasuoka, K.

    2009-11-01

    Recent advances in the parallelizability of fast N-body algorithms, and the programmability of graphics processing units (GPUs) have opened a new path for particle based simulations. For the simulation of turbulence, vortex methods can now be considered as an interesting alternative to finite difference and spectral methods. The present study focuses on the efficient implementation of the fast multipole method and pseudo-particle method on a cluster of NVIDIA GeForce 8800 GT GPUs, and applies this to a vortex method calculation of homogeneous isotropic turbulence. The results of the present vortex method agree quantitatively with that of the reference calculation using a spectral method. We achieved a maximum speed of 7.48 TFlops using 64 GPUs, and the cost performance was near 9.4/GFlops. The calculation of the present vortex method on 64 GPUs took 4120 s, while the spectral method on 32 CPUs took 4910 s.

  12. A moment projection method for population balance dynamics with a shrinkage term

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    A new method of moments for solving the population balance equation is developed and presented. The moment projection method (MPM) is numerically simple and easy to implement and attempts to address the challenge of particle shrinkage due to processes such as oxidation, evaporation or dissolution. It directly solves the moment transport equation for the moments and tracks the number of the smallest particles using the algorithm by Blumstein and Wheeler (1973). The performance of the new method is measured against the method of moments (MOM) and the hybrid method of moments (HMOM). The results suggest that MPM performs much better than MOM and HMOM where shrinkage is dominant. The new method predicts mean quantities which are almost as accurate as a high-precision stochastic method calculated using the established direct simulation algorithm (DSA).

  13. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    PubMed

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  14. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-03-24

    This paper proposes a scheme for indoor positioning that fuses floor map, WiFi, and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on the floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One way of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of the WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements, and fused heading angles as observations. The "cross-wall" problem is solved with a floor-map-aided particle filter algorithm that weights the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
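
    The WiFi fingerprinting step can be sketched as plain K-nearest-neighbor matching in signal space; the log-distance path-loss fingerprints below are synthetic assumptions, and the topology constraint from the floor map is omitted.

      # KNN WiFi fingerprinting: match a measured RSSI vector against a
      # survey database of reference points (RPs) and average the K closest
      # RP coordinates.
      import numpy as np

      rng = np.random.default_rng(8)
      ap_xy = rng.uniform(0, 50, (6, 2))            # 6 access points [m]

      def fingerprint(xy):
          d = np.linalg.norm(ap_xy - xy, axis=1) + 1.0
          return -40.0 - 20.0 * np.log10(d)         # log-distance path loss

      db_xy = rng.uniform(0, 50, (200, 2))          # surveyed RP coordinates
      db_rssi = np.array([fingerprint(p) for p in db_xy])

      def knn_position(rssi, K=3):
          d = np.linalg.norm(db_rssi - rssi, axis=1)  # signal-space distance
          return db_xy[np.argsort(d)[:K]].mean(axis=0)

      meas = fingerprint(np.array([25.0, 25.0])) + rng.normal(0, 2.0, 6)
      print("estimated position:", knn_position(meas))  # near (25, 25)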

  15. An axisymmetric PFEM formulation for bottle forming simulation

    NASA Astrophysics Data System (ADS)

    Ryzhakov, Pavel B.

    2017-01-01

    A numerical model for bottle forming simulation is proposed. It is based upon the Particle Finite Element Method (PFEM) and is developed for the simulation of bottles characterized by rotational symmetry. The PFEM strategy is adapted to suit the problem of interest: an axisymmetric version of the formulation is developed and a modified contact algorithm is applied. This results in a method with excellent computational efficiency and volume conservation. The model is validated, and an example modelling the final blow process is solved. The bottle wall thickness is estimated, and the mass conservation of the method is analysed.

  16. Monte Carlo calculation of large and small-angle electron scattering in air

    NASA Astrophysics Data System (ADS)

    Cohen, B. I.; Higginson, D. P.; Eng, C. D.; Farmer, W. A.; Friedman, A.; Grote, D. P.; Larson, D. J.

    2017-11-01

    A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. The algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
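
    In the small-angle limit, multiple scattering is commonly modeled as a Gaussian angular kick whose RMS width follows the Highland formula. The sketch below shows only that limit, with illustrative beam parameters; it cannot reproduce the large-angle single-scattering tail that the paper's combined algorithm accommodates.

      # Gaussian multiple-scattering kick with RMS width from the Highland
      # formula: theta0 = (13.6 MeV/(beta*c*p)) * sqrt(x/X0)
      #                   * (1 + 0.038*ln(x/X0)).
      import numpy as np

      def highland_theta0(p_MeV, beta, x_over_X0):
          return (13.6 / (beta * p_MeV)) * np.sqrt(x_over_X0) \
                 * (1.0 + 0.038 * np.log(x_over_X0))

      rng = np.random.default_rng(9)
      p, beta = 10.0, 0.9987          # ~10 MeV/c electron (illustrative)
      x_over_X0 = 1e-3                # step thickness in radiation lengths
      theta0 = highland_theta0(p, beta, x_over_X0)
      kicks = rng.normal(0.0, theta0, size=(100000, 2))  # projected angles
      print("theta0 [rad]:", theta0, "sampled RMS:", kicks.std())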

  17. Simulation of Space-borne Radar Observation from High Resolution Cloud Model - for GPM Dual frequency Precipitation Radar -

    NASA Astrophysics Data System (ADS)

    Kim, H.; Meneghini, R.; Jones, J.; Liao, L.

    2011-12-01

    A comprehensive space-borne radar simulator has been developed to support active microwave sensor satellite missions. The two major objectives of this study are: 1) to develop a radar simulator optimized for the Dual-frequency Precipitation Radar (KuPR and KaPR) on the Global Precipitation Measurement Mission satellite (GPM-DPR) and 2) to generate synthetic test datasets for DPR algorithm development. This simulator consists of two modules: a DPR scanning configuration module and a forward module that generates atmospheric and surface radar observations. To generate realistic DPR test data, the scanning configuration module specifies the technical characteristics of the DPR sensor and emulates the scanning geometry of the DPR, with an inner swath of about 120 km, which contains matched-beam data from both frequencies, and an outer swath from 120 to 245 km over which only Ku-band data will be acquired. The second module is a forward model used to compute radar observables (reflectivity, attenuation and polarimetric variables) from input model variables including temperature, pressure and water content (rain water, cloud water, cloud ice, snow, graupel and water vapor) over the radar resolution volume. Presently, the input data to the simulator come from the Goddard Cumulus Ensemble (GCE) and Weather Research and Forecast (WRF) models, where a constant mass density is assumed for each species with a particle size distribution given by an exponential distribution with a fixed intercept parameter (N0) and a slope parameter (Λ) determined from the equivalent water content. Although the model data do not presently contain mixed-phase hydrometeors, the Yokoyama-Tanaka melting model is used along with the Bruggeman effective dielectric constant to replace rain and snow particles, where both are present, with mixed-phase particles while preserving the snow/water fraction. For testing one of the DPR retrieval algorithms, the Surface Reference Technique (SRT), the simulator uses the normalized radar cross sections of the surface, σ0, at each frequency and incidence angle to generate the radar return power from the surface. The simulated σ0 data are modeled as realizations of jointly Gaussian random variables with means, variances and correlations obtained from measurements of σ0 from the JPL APR2 (2nd generation Airborne Precipitation Radar), which operates at approximately the same frequencies as the DPR. We will discuss the general capabilities of the radar simulator, present some sample results, and show how they can be used to assess the performance of the radar retrieval algorithms proposed for the dual-frequency GPM radar. In addition, we will report on updates to the simulator using inputs from cloud models with spectral bin microphysics.
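
    The link between the slope parameter and the water content follows from a one-line integral: for N(D) = N0 exp(-ΛD) and spherical drops of density ρ, the water content is W = (π/6)ρN0 ∫ D³exp(-ΛD) dD = πρN0/Λ⁴, so Λ = (πρN0/W)^(1/4). A small sketch with illustrative values (the Marshall-Palmer intercept is an assumption, not necessarily the simulator's setting):

      # Slope parameter of an exponential drop size distribution from the
      # water content: Lambda = (pi * rho * N0 / W)**0.25.
      import numpy as np

      N0 = 8.0e6          # intercept [m^-4] (Marshall-Palmer value, assumed)
      rho = 1000.0        # water density [kg m^-3]
      W = 1.0e-3          # rain water content [kg m^-3] (~1 g/m^3)
      Lam = (np.pi * rho * N0 / W) ** 0.25
      print(f"Lambda = {Lam:.0f} m^-1, median drop size ~ {3.67/Lam*1e3:.2f} mm")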

  18. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080

  19. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
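
    For reference, KELM training reduces to one linear solve: with kernel matrix K and regularization parameter C, it computes α = (K + I/C)⁻¹y, and prediction is k(x)·α (Huang's formulation). The sketch below uses an RBF kernel on synthetic stand-ins for the calibration samples; γ and C are the parameters the CSA plus simplex search would tune.

      # Minimal kernel extreme learning machine on synthetic
      # (temperature, static pressure) -> output calibration data.
      import numpy as np

      def rbf(A, B, gamma):
          d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
          return np.exp(-gamma * d2)

      def kelm_fit(X, y, gamma, C):
          K = rbf(X, X, gamma)
          return np.linalg.solve(K + np.eye(len(X)) / C, y)

      def kelm_predict(Xq, X, alpha, gamma):
          return rbf(Xq, X, gamma) @ alpha

      rng = np.random.default_rng(10)
      X = rng.uniform(-1, 1, (200, 2))     # scaled (temperature, pressure)
      y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]**2 + rng.normal(0, 0.02, 200)
      alpha = kelm_fit(X, y, gamma=2.0, C=1e3)
      resid = kelm_predict(X, X, alpha, gamma=2.0) - y
      print("train RMSE:", np.sqrt(np.mean(resid**2)))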

  20. Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility

    PubMed Central

    Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.

    2009-01-01

    We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456

  1. A particle finite element method for machining simulations

    NASA Astrophysics Data System (ADS)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

  2. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operating cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance on these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The optimization results reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly.

  3. Figures of Merit Software: Description, User's Guide, Installation Notes, Versions Description, and License Agreement

    NASA Technical Reports Server (NTRS)

    hoelzer, H. D.; Fourroux, K. A.; Rickman, D. L.; Schrader, C. M.

    2011-01-01

    Figures of Merit (FoMs) and the FoM software provide a method for quantitatively evaluating the quality of a regolith simulant by comparing the simulant to a reference material. FoMs may be used for comparing a simulant to actual regolith material, for specification by stating the value a simulant's FoMs must attain to be suitable for a given application, and for comparing simulants from different vendors or production runs. FoMs may even be used to compare different simulants to each other. A single FoM is conceptually an algorithm that computes a single number quantifying the similarity or difference of a single characteristic of a simulant material and a reference material, providing a clear measure of how well the simulant and reference material match. FoMs have been constructed to lie between zero and 1, with zero indicating a poor or no match and 1 indicating a perfect match. FoMs are defined for modal composition, particle size distribution, particle shape distribution (aspect ratio and angularity), and density. This TM covers the mathematics, use, installation, and licensing for the existing FoM code in detail.

  4. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low-emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study, and their performance is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and that it does not require seeding of good solutions in the initial population. These advantages may make the particle swarm algorithm more suitable for many accelerator optimization applications.
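
    For reference, a minimal global-best PSO loop of the kind compared above is sketched below; the objective, bounds, and coefficients are illustrative stand-ins, not the SPEAR3 dynamic-aperture or lifetime objectives.

```python
# Minimal global-best particle swarm optimizer (sketch, standard textbook form).
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # stay in bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best, val = pso(lambda p: np.sum(p**2), lo=np.array([-5.0]*4), hi=np.array([5.0]*4))
print(best, val)
```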

  5. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force-computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  6. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE PAGES

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...

    2017-11-27

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force-computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
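
    The "a priori rearrangement of subdomain walls" can be illustrated in one dimension: place walls so each subdomain carries roughly equal estimated force-computation cost. The per-particle cost weights below are assumptions (e.g., heavier for atomistic-resolution particles in an adaptive-resolution run); this is a sketch, not the authors' implementation.

```python
# Sketch: equal-cost placement of subdomain walls along one axis.
import numpy as np

def place_walls(positions, cost_weights, n_domains):
    """Return wall coordinates that equalize cumulative cost per subdomain."""
    order = np.argsort(positions)
    pos, w = positions[order], cost_weights[order]
    cum = np.cumsum(w)
    targets = cum[-1] * np.arange(1, n_domains) / n_domains
    return pos[np.searchsorted(cum, targets)]

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 10.0, 4000)
# Assumption: particles at x < 3 are high-resolution, hence 5x costlier.
weights = np.where(x < 3.0, 5.0, 1.0)
print("wall positions:", place_walls(x, weights, n_domains=4))
```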

  7. A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data

    NASA Astrophysics Data System (ADS)

    Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.

    2016-09-01

    Knowledge of 3D, three-component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimate of the steady-state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error of less than 12% of the average velocity magnitude.
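
    The hybrid idea reduces to a least-squares fit: tune the free parameters of a forward model so its in-plane velocity field matches the measured 2D PIV vectors. The sketch below uses a toy vortex model as the forward simulation; it is a stand-in, not the ACET multiphysics model.

```python
# Hedged sketch: fit forward-model parameters to planar velocity data.
import numpy as np
from scipy.optimize import least_squares

xy = np.random.default_rng(4).uniform(-1, 1, size=(300, 2))

def forward_uv(params, xy):
    # Toy model: vortex of adjustable strength k and core radius r0.
    k, r0 = params
    r2 = np.sum(xy**2, axis=1) + r0**2
    return np.column_stack([-k * xy[:, 1] / r2, k * xy[:, 0] / r2])

true_uv = forward_uv([2.0, 0.3], xy)
meas_uv = true_uv + np.random.default_rng(5).normal(0, 0.02, true_uv.shape)

res = least_squares(lambda p: (forward_uv(p, xy) - meas_uv).ravel(), x0=[1.0, 0.5])
print("fitted (k, r0):", res.x)   # recovers approximately (2.0, 0.3)
```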

  8. Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models

    NASA Astrophysics Data System (ADS)

    Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun

    2018-03-01

    The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.

  9. Designing a multistage supply chain in cross-stage reverse logistics environments: application of particle swarm optimization algorithms.

    PubMed

    Chiang, Tzu-An; Che, Z H; Cui, Zhihua

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), the V-Max method (PSOA_VMM), and the constriction factor method (PSOA_CFM), which we employed to find solutions supporting this mathematical model. Finally, a real case and five simulated cases of different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms and to validate the model proposed in this study. Regarding execution time, the GA consumed more time than the three PSO algorithms did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM obtained lower convergence values than PSOA_VMM did. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.

  10. Designing a Multistage Supply Chain in Cross-Stage Reverse Logistics Environments: Application of Particle Swarm Optimization Algorithms

    PubMed Central

    Chiang, Tzu-An; Che, Z. H.

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), the V-Max method (PSOA_VMM), and the constriction factor method (PSOA_CFM), which we employed to find solutions supporting this mathematical model. Finally, a real case and five simulated cases of different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms and to validate the model proposed in this study. Regarding execution time, the GA consumed more time than the three PSO algorithms did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM obtained lower convergence values than PSOA_VMM did. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did. PMID:24772026
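
    The three PSO variants named in this record differ only in the velocity update. The sketch below shows one step of each; the coefficient values are common textbook defaults, not the paper's settings.

```python
# Sketch of the three PSO velocity-update rules (one step each).
import numpy as np

rng = np.random.default_rng(6)
x, v = rng.random(5), np.zeros(5)
pbest, gbest = rng.random(5), rng.random(5)
r1, r2 = rng.random(5), rng.random(5)
c1 = c2 = 2.05

# PSOA_IWM: inertia-weight method.
w = 0.72
v_iwm = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# PSOA_VMM: plain update with velocity clamped to +/- v_max.
v_max = 0.5
v_vmm = np.clip(v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), -v_max, v_max)

# PSOA_CFM: Clerc-Kennedy constriction factor chi (requires c1 + c2 > 4).
phi = c1 + c2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298
v_cfm = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))

print(v_iwm, v_vmm, v_cfm, sep="\n")
```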

  11. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision, and dynamic measurement error prediction is an important part of error correction. Support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors, but traditionally the SVM parameters have been set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to escape local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data, and the root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms the NAPSO-SVM method achieves better prediction precision and smaller prediction errors, and that it is an effective method for predicting the dynamic measurement errors of sensors.
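
    A rough sketch of the two NAPSO ingredients named above, grafted onto a plain PSO step: simulated-annealing acceptance of occasionally worse personal bests, and natural selection replacing the worst particles with perturbed copies of the best. The objective stands in for SVM cross-validation error over (C, gamma); all constants and the selection fraction are assumptions, not the paper's exact scheme.

```python
# Hedged NAPSO-style sketch: PSO + SA acceptance + natural selection.
import numpy as np

rng = np.random.default_rng(7)
f = lambda p: (np.log10(p[0]) - 1.0)**2 + (np.log10(p[1]) + 2.0)**2  # stand-in CV error

n, iters, T = 20, 100, 1.0
x = 10 ** rng.uniform([-2, -5], [4, 1], size=(n, 2))   # (C, gamma), log-uniform init
v = np.zeros_like(x)
pbest = x.copy(); pbest_f = np.array([f(p) for p in x])
for t in range(iters):
    g = pbest[np.argmin(pbest_f)]
    r1, r2 = rng.random((2, n, 2))
    v = 0.72 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, [1e-2, 1e-5], [1e4, 10.0])
    for i in range(n):
        fi = f(x[i])
        # SA acceptance: occasionally accept a worse personal best at high T.
        if fi < pbest_f[i] or rng.random() < np.exp((pbest_f[i] - fi) / T):
            pbest[i], pbest_f[i] = x[i], fi
    # Natural selection: worst quarter replaced by jittered copies of best quarter.
    order = np.argsort(pbest_f)
    k = n // 4
    x[order[-k:]] = x[order[:k]] * rng.normal(1.0, 0.05, (k, 2))
    T *= 0.95  # cooling schedule
print("best (C, gamma):", pbest[np.argmin(pbest_f)])
```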

  12. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime-algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
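
    A key piece of such a setup is the fitness function that scores an MSER parameter candidate against labelled targets. The sketch below uses OpenCV's real MSER detector, but the scoring weights, IoU threshold, and synthetic test image are illustrative assumptions, not the paper's fitness function.

```python
# Hedged sketch: fitness of a (delta, min_area, max_area) MSER candidate.
import cv2
import numpy as np

def iou(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(aw * ah + bw * bh - inter) if inter else 0.0

def fitness(params, gray, truth_boxes, thresh=0.5):
    delta, min_area, max_area = int(params[0]), int(params[1]), int(params[2])
    mser = cv2.MSER_create(delta, min_area, max_area)
    _, boxes = mser.detectRegions(gray)          # boxes are (x, y, w, h)
    hits = sum(any(iou(t, b) >= thresh for b in boxes) for t in truth_boxes)
    false_pos = max(0, len(boxes) - hits)
    return hits - 0.01 * false_pos               # reward hits, lightly punish clutter

# Synthetic test image: one dark blob standing in for a sign.
gray = np.full((240, 320), 200, np.uint8)
cv2.rectangle(gray, (100, 80), (160, 140), 40, -1)
print(fitness([5, 60, 14400], gray, truth_boxes=[(100, 80, 60, 60)]))
```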

  13. Modeling Interactions Among Turbulence, Gas-Phase Chemistry, Soot and Radiation Using Transported PDF Methods

    NASA Astrophysics Data System (ADS)

    Haworth, Daniel

    2013-11-01

    The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.

  14. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
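
    As context for the record above, a linear BDP (constant per-particle birth rate lambda and death rate mu) is exactly the restrictive special case the paper generalizes beyond. It can be simulated with a Gillespie-style loop, which is useful for generating synthetic data to test estimators; this sketch is generic, not the authors' code.

```python
# Gillespie-style simulation of a linear birth-death process.
import numpy as np

def simulate_bdp(x0, lam, mu, t_max, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_max and x > 0:
        rate = (lam + mu) * x                  # total event rate, linear BDP
        t += rng.exponential(1.0 / rate)       # exponential waiting time
        if t >= t_max:
            break
        x += 1 if rng.random() < lam / (lam + mu) else -1   # birth vs death
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

times, states = simulate_bdp(x0=10, lam=0.6, mu=0.5, t_max=20.0)
print("final count:", states[-1], "after", len(times) - 1, "events")
```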

  15. A three-dimensional electrostatic particle-in-cell methodology on unstructured Delaunay-Voronoi grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gatsonis, Nikolaos A.; Spirkin, Anton

    2009-06-01

    The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the duality of the Delaunay-Voronoi in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zero- and first-order are formulated based on the theory of long-range constraints. Electric potential and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles/cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low transparency micro-grid.
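
    To illustrate what zero- and first-order charge assignment mean, the sketch below implements both on a structured 1D grid (nearest-grid-point and cloud-in-cell). This is the structured-grid analogue of the schemes formulated in the record, which instead weight by the Voronoi-Delaunay dual on unstructured grids.

```python
# Sketch: zero-order (NGP) and first-order (CIC) charge deposition in 1D.
import numpy as np

def deposit(positions, charges, n_cells, dx, order=1):
    rho = np.zeros(n_cells)
    for xp, q in zip(positions, charges):
        s = xp / dx
        if order == 0:                       # NGP: all charge to nearest node
            rho[int(round(s)) % n_cells] += q / dx
        else:                                # CIC: linear split between nodes
            i = int(np.floor(s))
            w = s - i
            rho[i % n_cells] += q * (1.0 - w) / dx
            rho[(i + 1) % n_cells] += q * w / dx
    return rho

rng = np.random.default_rng(8)
pos = rng.uniform(0.0, 1.0, 1000)
rho = deposit(pos, np.full(1000, 1.0 / 1000), n_cells=32, dx=1.0 / 32)
print("total charge on grid:", rho.sum() * (1.0 / 32))   # conserved: 1.0
```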

  16. Metaheuristic Optimization and its Applications in Earth Sciences

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She

    2010-05-01

    A common but challenging task in modelling geophysical and geological processes is to handle massive data and to minimize certain objectives. This can essentially be considered as an optimization problem, and thus many new efficient metaheuristic optimization algorithms can be used. In this paper, we will introduce some modern metaheuristic optimization algorithms such as genetic algorithms, harmony search, the firefly algorithm, particle swarm optimization and simulated annealing. We will also discuss how these algorithms can be applied to various applications in earth sciences, including nonlinear least-squares, support vector machines, Kriging, inverse finite element analysis, and data mining. We will present a few examples to show how different problems can be reformulated as optimization. Finally, we will make some recommendations for choosing algorithms to suit various problems. References: 1) D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evolutionary Computation, Vol. 1, 67-82 (1997). 2) X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press (2008). 3) X. S. Yang, Mathematical Modelling for Earth Sciences, Dunedin Academic Press (2008).
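
    As a minimal example of the metaheuristic family surveyed in this record, the following sketch implements simulated annealing on a standard test function; the cooling schedule and step size are illustrative choices.

```python
# Minimal simulated-annealing sketch (Metropolis acceptance, geometric cooling).
import numpy as np

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx, T = f(x), t0
    best, best_f = x.copy(), fx
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=x.shape)   # random neighbor
        fc = f(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < np.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x.copy(), fx
        T *= cooling                                     # cool down
    return best, best_f

rosenbrock = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(simulated_annealing(rosenbrock, [-1.5, 2.0]))      # approaches (1, 1)
```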

  17. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulae, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy, and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on both forward-modeled and observed data show that the IGWO algorithm can find the global minimum and rarely becomes trapped in local minima. For further study, inversion results using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that the IGWO and PSO perform similarly, and both balance exploration and exploitation better than the SA within a given number of iterations.
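
    The sketch below shows a GWO step with a nonlinear convergence factor, the kind of modification this record describes. The particular decay a(t) = 2(1 - (t/T)^2) is an illustrative choice and not necessarily the authors'; the standard GWO update toward the three leading wolves is otherwise textbook.

```python
# Hedged IGWO-style sketch: GWO with a nonlinear convergence factor.
import numpy as np

def igwo(f, lo, hi, n=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three leading wolves
        a = 2.0 * (1.0 - (t / iters) ** 2)            # nonlinear convergence factor
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = a * (2 * rng.random(X.shape) - 1)     # A in [-a, a]
            C = 2 * rng.random(X.shape)               # C in [0, 2]
            X_new += leader - A * np.abs(C * leader - X)
        X = np.clip(X_new / 3.0, lo, hi)              # average of the three pulls
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], fit.min()

best, val = igwo(lambda p: np.sum(p**2), lo=np.array([-10.0]*3), hi=np.array([10.0]*3))
print(best, val)
```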

  18. Gauge properties of the guiding center variational symplectic integrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Squire, J.; Tang, W. M.; Qin, H.

    Variational symplectic algorithms have recently been developed for carrying out long-time simulation of charged particles in magnetic fields [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008); H. Qin, X. Guan, and W. Tang, Phys. Plasmas (2009); J. Li, H. Qin, Z. Pu, L. Xie, and S. Fu, Phys. Plasmas 18, 052902 (2011)]. As a direct consequence of their derivation from a discrete variational principle, these algorithms have very good long-time energy conservation, as well as exactly preserving discrete momenta. We present stability results for these algorithms, focusing on understanding how explicit variational integrators can be designed for this type of system. It is found that for explicit algorithms, an instability arises because the discrete symplectic structure does not become the continuous structure in the t → 0 limit. We examine how a generalized gauge transformation can be used to put the Lagrangian in the 'antisymmetric discretization gauge,' in which the discrete symplectic structure has the correct form, thus eliminating the numerical instability. Finally, it is noted that the variational guiding center algorithms are not electromagnetically gauge invariant. By designing a model discrete Lagrangian, we show that the algorithms are approximately gauge invariant as long as A and φ are relatively smooth. A gauge invariant discrete Lagrangian is very important in a variational particle-in-cell algorithm, where it ensures current continuity and preservation of Gauss's law [J. Squire, H. Qin, and W. Tang (to be published)].
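
    For context on long-time charged-particle pushing, the sketch below implements the Boris push, the standard long-time-stable (though non-variational) integrator for charged particles in electromagnetic fields. It is not the variational symplectic algorithm analyzed in this record, but it illustrates the kind of discrete update whose conservation properties such analyses target.

```python
# Boris push: half electric kick, magnetic rotation, half electric kick.
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    v_minus = v + 0.5 * q_over_m * dt * E
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # exact-magnitude rotation
    v_new = v_plus + 0.5 * q_over_m * dt * E
    return x + dt * v_new, v_new

# Gyration test in a uniform B field: kinetic energy stays constant,
# i.e., no secular energy drift over many periods.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(10000):
    x, v = boris_push(x, v, E, B, q_over_m=1.0, dt=0.1)
print("speed after 10k steps:", np.linalg.norm(v))   # ~1.0
```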

  19. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    PubMed

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations, in each of which every particle evolves by standard PSO; each subpopulation is then updated using a different local search scheme, such as variable neighborhood search (VNS) or an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.

  20. Multifractal analysis of multiparticle emission data in the framework of visibility graph and sandbox algorithm

    NASA Astrophysics Data System (ADS)

    Mali, P.; Manna, S. K.; Mukhopadhyay, A.; Haldar, P. K.; Singh, G.

    2018-03-01

    Multiparticle emission data in nucleus-nucleus collisions are studied in a graph theoretical approach. The sandbox algorithm used to analyze complex networks is employed to characterize the multifractal properties of the visibility graphs associated with the pseudorapidity distribution of charged particles produced in high-energy heavy-ion collisions. Experimental data on the 28Si+Ag/Br interaction at laboratory energy Elab = 14.5A GeV, and on the 16O+Ag/Br and 32S+Ag/Br interactions, both at Elab = 200A GeV, are used in this analysis. We observe a scale-free nature of the degree distributions of the visibility and horizontal visibility graphs associated with the event-wise pseudorapidity distributions. Equivalent event samples simulated with ultra-relativistic quantum molecular dynamics (UrQMD) produce degree distributions that are almost identical to the experimental ones. However, the multifractal variables obtained using the sandbox algorithm for the experiment differ to some extent from the corresponding simulated results.
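
    The first step of such an analysis is building a (horizontal) visibility graph from a binned distribution, after which degree distributions and multifractal measures can be examined. The sketch below constructs a horizontal visibility graph from a synthetic pseudorapidity histogram; it is a generic construction, not the authors' code.

```python
# Horizontal visibility graph from a binned distribution (brute force).
import numpy as np

def horizontal_visibility_edges(y):
    """Nodes i, j linked iff y[k] < min(y[i], y[j]) for all i < k < j."""
    edges = []
    n = len(y)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(y[k] < min(y[i], y[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(9)
dn_deta = rng.poisson(lam=50, size=64).astype(float)   # toy eta histogram
edges = horizontal_visibility_edges(dn_deta)
degree = np.bincount(np.array(edges).ravel())          # node degree sequence
print(len(edges), "edges; max degree", degree.max())
```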
