A generalized weight-based particle-in-cell simulation scheme
NASA Astrophysics Data System (ADS)
Lee, W. W.; Jenkins, T. G.; Ethier, S.
2011-03-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
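To illustrate why the δf weighting described above suppresses particle noise in the linear stage, here is a minimal sketch (the 1-D geometry and the perturbation δf = ε·cos(2πx)·f0 are assumptions for illustration, not the authors' code): the same markers are deposited once with full-F weights and once with δf weights, and the sampling errors are compared.

```python
import numpy as np

rng = np.random.default_rng(1)
N, Nx, eps = 200_000, 32, 1e-3
x = rng.uniform(0.0, 1.0, N)              # markers sampled from the background
w = eps * np.cos(2.0 * np.pi * x)         # delta-f weight: w_i = delta f / f0

bins = (x * Nx).astype(int)
per_cell = N / Nx

# full-F deposition: every marker carries its full weight 1 + w
rho_full = np.bincount(bins, weights=1.0 + w, minlength=Nx) / per_cell
# delta-f deposition: markers carry only the perturbed weight w
rho_df = np.bincount(bins, weights=w, minlength=Nx) / per_cell

xc = (np.arange(Nx) + 0.5) / Nx
exact = eps * np.cos(2.0 * np.pi * xc)    # exact perturbed density
err_full = np.sqrt(np.mean((rho_full - 1.0 - exact) ** 2))
err_df = np.sqrt(np.mean((rho_df - exact) ** 2))
print(err_full / err_df)                  # large noise-reduction factor
```

The full-F deposit carries O(1/√N) shot noise from the background, while the δf deposit is noisy only at O(ε/√N), which is the rationale for starting a simulation in δf mode and switching to full-F only in the nonlinear stage.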
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
NASA Astrophysics Data System (ADS)
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty severely limits the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of the different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, the velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively until convergence is obtained. With the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can easily be applied to other simulation techniques employed for particulate suspensions.
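The pairwise implicit sweep for the short-range forces can be sketched as follows (a schematic 1-D stand-in with a linear lubrication-like drag F_ij = -k(v_i - v_j); the values of k, dt, and the pair list are assumptions, and the SPH hydrodynamic part is omitted):

```python
import numpy as np

def implicit_pair_sweep(v, pairs, k, dt, tol=1e-12, max_iter=200):
    # Gauss-Seidel-style sweep: update each interacting pair implicitly
    # under the stiff drag F_ij = -k (v_i - v_j), iterating to convergence
    v = v.astype(float).copy()
    for _ in range(max_iter):
        v_old = v.copy()
        for i, j in pairs:
            # exact implicit solve for an isolated pair over one step dt:
            # momentum is conserved, relative velocity damped by 1/(1+2k*dt)
            vm = 0.5 * (v[i] + v[j])
            dv = 0.5 * (v[i] - v[j]) / (1.0 + 2.0 * k * dt)
            v[i], v[j] = vm + dv, vm - dv
        if np.max(np.abs(v - v_old)) < tol:
            break
    return v

v1 = implicit_pair_sweep(np.array([1.0, 0.0, -1.0]), [(0, 1), (1, 2)],
                         k=1e6, dt=1e-3)
print(v1, v1.sum())   # momentum preserved despite k*dt = 1000
```

Each pair solve conserves the pair's momentum exactly and damps the relative velocity by 1/(1 + 2k·dt), so the sweep remains stable even when k·dt ≫ 1, where an explicit update would diverge.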
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Tian, Z
Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems. Low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR to within a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all the computations are spent on photons contributing to the detector signal.
Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
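The accept/reject logic of Metropolis-Hastings sampling, which gMMC applies to whole photon paths, can be illustrated on a toy one-dimensional target (the Gaussian stand-in and the proposal width are assumptions; in gMMC the target is the transport-physics probability of an entire path):

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # stand-in "path probability": a unit Gaussian; in gMMC this role is
    # played by the photon-transport probability of a complete path
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal()               # propose a modified path/state
    log_a = log_target(prop) - log_target(x)
    if np.log(rng.random()) < log_a:      # Metropolis-Hastings accept/reject
        x = prop
    chain.append(x)
chain = np.array(chain[5_000:])           # discard burn-in
print(chain.mean(), chain.var())          # near the target's mean 0, var 1
```

Because acceptance depends only on probability ratios, the chain reproduces the correct relative frequencies of paths without ever normalizing the target, which is exactly what makes path-by-path sampling tractable.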
NASA Astrophysics Data System (ADS)
Shaposhnikov, Dmitry S.; Rodin, Alexander V.; Medvedev, Alexander S.; Fedorova, Anna A.; Kuroda, Takeshi; Hartogh, Paul
2018-02-01
We present a new implementation of the hydrological cycle scheme in a general circulation model of the Martian atmosphere. The model includes a semi-Lagrangian transport scheme for water vapor and ice and accounts for the microphysics of phase transitions between them. The hydrological scheme includes the processes of saturation, nucleation, particle growth, sublimation, and sedimentation under the assumption of a variable size distribution. The scheme has been implemented in the Max Planck Institute Martian general circulation model and tested assuming monomodal and bimodal lognormal distributions of ice condensation nuclei. We present a comparison of the simulated annual variations and horizontal and vertical distributions of water vapor and ice clouds with the available observations from instruments on board Mars orbiters. Accounting for the bimodality of the aerosol particle distribution improves the simulation of the annual hydrological cycle, including the predicted ice cloud mass, opacity, number density, and particle radii. The increased number density and lower nucleation rates bring the simulated cloud opacities closer to observations. Simulations show a weak effect of the excess of small aerosol particles on the simulated water vapor distributions.
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Finally, the benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
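The deformation-matrix update at the heart of the LTPIC idea can be sketched for a 1d1v harmonic field E(x) = -x (an assumed linear field, chosen so that the local first-order expansion of the flow is exact; this is not the paper's solver):

```python
import numpy as np

dt, E_prime = 0.1, -1.0     # assumed linearized field slope dE/dx
kick = np.array([[1.0, 0.0], [dt * E_prime, 1.0]])   # v += dt * E'(x) * x
drift = np.array([[1.0, dt], [0.0, 1.0]])            # x += dt * v
J = drift @ kick            # one-step Jacobian of the numerical flow

D = np.eye(2)               # the particle's deformation matrix
for _ in range(10):
    D = J @ D               # update by local evaluation of the forward flow

print(np.linalg.det(D))     # stays 1: the transformed shape preserves area
```

Composing the particle's deformation matrix with the one-step Jacobian of the numerical flow keeps det D = 1 here, mirroring phase-space volume conservation; in the general nonlinear case the Jacobian is evaluated locally for each particle.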
All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform LD-DSMC simulations on a much coarser mesh, which leads to a significant reduction in computation time.
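Hybrid schemes of this kind need a continuum-breakdown criterion to decide where DSMC is required. A common choice is the gradient-length Knudsen number (the profile, mean free path, and the 0.05 cutoff below are illustrative values from the general literature, not taken from this abstract):

```python
import numpy as np

def gradient_length_knudsen(rho, mfp, dx):
    # continuum-breakdown parameter Kn_GL = mfp * |d rho / dx| / rho
    return mfp * np.abs(np.gradient(rho, dx)) / rho

x = np.linspace(0.0, 1.0, 101)
rho = 1.0 + 4.0 * np.exp(-(((x - 0.5) / 0.02) ** 2))   # shock-like bump
kn = gradient_length_knudsen(rho, mfp=5e-3, dx=x[1] - x[0])
use_dsmc = kn > 0.05            # flag cells for DSMC; LD/continuum elsewhere
print(int(use_dsmc.sum()), "of", x.size, "cells flagged for DSMC")
```

Only the cells in the steep-gradient region are flagged, which is what allows the expensive DSMC treatment to be confined to a small part of the domain.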
Two moment dust and water ice in the MarsWRF GCM
NASA Astrophysics Data System (ADS)
Lee, Christopher; Richardson, Mark I.; Newman, Claire E.; Mischna, Michael A.
2016-10-01
A new two-moment dust and water ice microphysics scheme has been developed for the MarsWRF General Circulation Model based on the Morrison and Gettelman (2008) scheme; it includes temperature-dependent nucleation processes and energetically constrained condensation and evaporation. Dust consumed in the formation of water ice is also tracked by the model. The two-moment dust scheme simulates dust particles in the Martian atmosphere using a Gamma distribution with fixed radius for lifted particles. Within the atmosphere the particle distribution is advected and sedimented within the two-moment framework, obviating the requirement for lossy conversion between the continuous Gamma distribution and the discretized bins found in some Mars microphysics schemes. Water ice is simulated using the same Gamma distribution and advected and sedimented in the same way. Water ice nucleation occurs heterogeneously onto dust particles with temperature-dependent contact parameters (e.g. Trainer et al., 2009), and condensation and evaporation follow energetic constraints (e.g. Pruppacher and Klett, 1980; Montmessin et al., 2002), allowing water ice particles to grow in size where necessary. Dust particles are tracked within the ice cores as nucleation occurs, and dust cores advect and sediment along with their parent ice particle distributions. Radiative properties of dust and water particles are calculated as a function of the effective radius of the particles and the distribution width. The new microphysics scheme requires 5 tracers to be tracked as the moments of the dust, water ice, and ice core. All microphysical processes are simulated entirely within the two-moment framework without any discretization of particle sizes. The effect of this new microphysics scheme on the dust and water ice cloud distribution will be discussed and compared with observations from TES and MCS.
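The two-moment bookkeeping can be sketched as follows: given the prognostic number and mass moments of a gamma distribution n(r) = N0 r^μ e^(-λr), the slope and effective radius follow in closed form (spherical ice of density 917 kg m-3 and the example moment values are assumptions for illustration; this is not the MarsWRF code):

```python
import numpy as np

def gamma_psd_params(N, q, mu=0.0, rho_p=917.0):
    # slope lambda and effective radius of n(r) = N0 r^mu exp(-lambda r)
    # from number concentration N [m^-3] and ice mass content q [kg m^-3];
    # uses Gamma(mu+4)/Gamma(mu+1) = (mu+3)(mu+2)(mu+1) for spherical mass
    c = (4.0 / 3.0) * np.pi * rho_p
    lam = (c * N * (mu + 3.0) * (mu + 2.0) * (mu + 1.0) / q) ** (1.0 / 3.0)
    r_eff = (mu + 3.0) / lam          # ratio of third to second moment
    return lam, r_eff

lam, r_eff = gamma_psd_params(N=1e6, q=1e-6)   # 1 cm^-3, 1 mg m^-3 of ice
print(r_eff * 1e6, "micron effective radius")
```

Because the distribution is recovered analytically from the two advected moments, sedimentation and radiative properties can be evaluated without ever discretizing particle sizes into bins, which is the advantage the abstract describes.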
Improving the representation of mixed-phase cloud microphysics in the ICON-LEM
NASA Astrophysics Data System (ADS)
Tonttila, Juha; Hoose, Corinna; Milbrandt, Jason; Morrison, Hugh
2017-04-01
The representation of ice-phase cloud microphysics in ICON-LEM (the Large-Eddy Model configuration of the ICOsahedral Nonhydrostatic model) is improved by implementing the recently published Predicted Particle Properties (P3) scheme into the model. In the typical two-moment microphysical schemes, such as that previously used in ICON-LEM, ice-phase particles must be partitioned into several prescribed categories. It is inherently difficult to distinguish between categories such as graupel and hail based on just the particle size, yet this partitioning may significantly affect the simulation of convective clouds. The P3 scheme avoids the problems associated with predefined ice-phase categories that are inherent in traditional microphysics schemes by introducing the concept of "free" ice-phase categories, whereby the prognostic variables enable the prediction of a wide range of smoothly varying physical properties and hence particle types. To our knowledge, this is the first application of the P3 scheme in a large-eddy model with horizontal grid spacings on the order of 100 m. We will present results from ICON-LEM simulations with the new P3 scheme comprising idealized stratiform and convective cloud cases. We will also present real-case limited-area simulations focusing on the HOPE (HD(CP)2 Observational Prototype Experiment) intensive observation campaign. The results are compared with a matching set of simulations employing the two-moment scheme and the performance of the model is also evaluated against observations in the context of the HOPE simulations, comprising data from ground based remote sensing instruments.
NASA Astrophysics Data System (ADS)
Savin, Andrei V.; Smirnov, Petr G.
2018-05-01
Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used to integrate the equations of motion. The finite-difference scheme is found to be non-conservative in a time-step-dependent way, which is equivalent to a purely numerical energy source appearing during collisions. A compensation method for this source is proposed and tested.
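The time-step-dependent energy error during a collision can be reproduced in a few lines (a one-sided linear spring wall and velocity-Verlet integration are assumptions for illustration; the paper's compensation method is not reproduced here):

```python
def bounce(dt, k=1e4, m=1.0, v0=-1.0):
    # velocity-Verlet integration of a particle hitting a one-sided
    # spring wall at x < 0; returns the numerical coefficient of restitution
    a = lambda s: -k * s / m if s < 0.0 else 0.0
    x, v = 1.0, v0
    acc = a(x)
    while not (x > 1.0 and v > 0.0):      # run until the particle rebounds
        x += v * dt + 0.5 * acc * dt * dt
        new_acc = a(x)
        v += 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return abs(v / v0)

for dt in (1e-3, 5e-4, 2.5e-4):
    print(dt, bounce(dt))   # restitution != 1: purely numerical energy change
```

For an elastic contact the restitution should be exactly 1; the residual deviation comes from the integrator missing the exact moments of contact start and end, and it shrinks as the time step is refined, which is the effect the compensation method targets.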
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain (FDTD) method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as pure FDTD schemes do. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry-conforming mesh, the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons
NASA Astrophysics Data System (ADS)
Bao, J.; Lin, Z.; Lu, Z. X.
2018-02-01
A conservative scheme has been formulated and verified for gyrokinetic particle simulations of electromagnetic waves and instabilities in magnetized plasmas. An electron continuity equation derived from the drift kinetic equation is used to advance the electron density perturbation in time, using the perturbed mechanical flow calculated from the parallel vector potential, while the parallel vector potential is solved for using the perturbed canonical flow from the perturbed distribution function. In gyrokinetic particle simulations using this new scheme, the shear Alfvén wave dispersion relation in the shearless slab and continuum damping in the sheared cylinder have been recovered. The new scheme overcomes the stringent requirement of the conventional perturbative simulation method that the perpendicular grid size be as small as the electron collisionless skin depth, even for long wavelength Alfvén waves. The new scheme also avoids the problem in the conventional method that an unphysically large parallel electric field arises due to the inconsistency between the electrostatic potential calculated from the perturbed density and the vector potential calculated from the perturbed canonical flow. Finally, gyrokinetic particle simulations of Alfvén waves in the sheared cylinder show superior numerical properties compared with fluid simulations, which suffer from numerical difficulties associated with singular mode structures.
Computational plasticity algorithm for particle dynamics simulations
NASA Astrophysics Data System (ADS)
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
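The implicit time-stepping limit of this framework can be sketched for the simplest contact problem: an elastic predictor followed by a projection onto the non-penetration constraint, the direct analogue of a plasticity return mapping (a 1-D particle over a rigid floor with fully inelastic contact is assumed; this is a schematic, not the paper's algorithm):

```python
def implicit_contact_step(x, v, dt, g=9.81):
    # elastic predictor (free fall) + projection onto x >= 0:
    # the contact impulse lam plays the role of a plastic multiplier
    v_trial = v - g * dt
    x_trial = x + v_trial * dt
    if x_trial < 0.0:                 # constraint violated: "return map"
        lam = -x_trial / dt           # impulse placing the particle on the floor
        v_trial += lam
        x_trial = 0.0
    return x_trial, v_trial

x, v = 1.0, 0.0                       # dropped from rest at height 1 m
for _ in range(2000):
    x, v = implicit_contact_step(x, v, dt=1e-3)
print(x, v)                           # settles at rest on the floor
```

The step is unconditionally stable: the projection absorbs arbitrarily stiff contact without any explicit penalty force, which is exactly the property that distinguishes contact-dynamics-style implicit stepping from explicit DEM.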
NASA Astrophysics Data System (ADS)
Ku, Seung-Hoe; Hager, R.; Chang, C. S.; Chacon, L.; Chen, G.; EPSI Team
2016-10-01
The cancellation problem has been a long-standing issue for long-wavelength modes in electromagnetic gyrokinetic PIC simulations in toroidal geometry. As an attempt to resolve this issue, we implemented a fully implicit time integration scheme in the full-f gyrokinetic PIC code XGC1. The new scheme - based on the implicit Vlasov-Darwin PIC algorithm by G. Chen and L. Chacon - can potentially resolve the cancellation problem. The time advance for the field and particle equations is space-time-centered, with particle sub-cycling. The resulting system of equations is solved by a Picard iteration solver with a fixed-point accelerator. The algorithm is implemented in the parallel velocity formalism instead of the canonical parallel momentum formalism. XGC1 specializes in simulating the tokamak edge plasma with magnetic separatrix geometry. A fully implicit scheme could be a route to accurate and efficient gyrokinetic simulations. We will test whether this numerical scheme overcomes the cancellation problem and reproduces the dispersion relation of Alfven waves and tearing modes in cylindrical geometry. Funded by US DOE FES and ASCR, and computing resources provided by OLCF through ALCC.
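The combination of a space-time-centered update with Picard fixed-point iteration can be illustrated on a harmonic oscillator (an assumed scalar stand-in for the coupled field-particle system; the centered scheme preserves the quadratic invariant exactly, which is the property implicit PIC exploits for energy conservation):

```python
def picard_step(x, v, dt, omega=1.0, iters=50):
    # space-time-centered (Crank-Nicolson) step solved by Picard iteration
    xn, vn = x, v                                 # initial fixed-point guess
    for _ in range(iters):
        xn = x + 0.5 * dt * (v + vn)
        vn = v - 0.5 * dt * omega**2 * (x + xn)
    return xn, vn

x, v = 1.0, 0.0
E0 = 0.5 * (v * v + x * x)
for _ in range(1000):
    x, v = picard_step(x, v, dt=0.2)
E = 0.5 * (v * v + x * x)
print(E0, E)      # the centered scheme preserves the quadratic invariant
```

The Picard map contracts at a rate ~ω·dt/2 per iteration, which is why production codes add a fixed-point accelerator when the time step is large.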
NASA Astrophysics Data System (ADS)
Jackson, Thomas L.; Sridharan, Prashanth; Zhang, Ju; Balachandar, S.
2015-11-01
In this work we present axisymmetric numerical simulations of shock propagating in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. The numerical method is a finite-volume based solver on a Cartesian grid, which allows for multi-material interfaces and shocks. To preserve particle mass and volume, a novel constraint reinitialization scheme is introduced. We compute the unsteady drag coefficient as a function of post-shock pressure, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. Using this information, we also present a simplified point-particle force model that can be used for mesoscale simulations.
A collision scheme for hybrid fluid-particle simulation of plasmas
NASA Astrophysics Data System (ADS)
Nguyen, Christine; Lim, Chul-Hyun; Verboncoeur, John
2006-10-01
Desorption phenomena at the wall of a tokamak can lead to the introduction of impurities at the edge of a thermonuclear plasma. In particular, the use of carbon as a constituent of the tokamak wall, as planned for ITER, requires the study of carbon and hydrocarbon transport in the plasma, including understanding of collisional interaction with the plasma. These collisions can produce new hydrocarbons, hydrogen, secondary electrons, and so on. Computational modeling is a primary tool for studying these phenomena. XOOPIC [1] and OOPD1 are widely used computer modeling tools for the simulation of plasmas. Both are particle codes. Particle simulation gives more kinetic information than fluid simulation, but requires more computation time. To reduce this disadvantage, hybrid simulation has been developed and applied to the modeling of collisions. Present particle simulation tools such as XOOPIC and OOPD1 employ a Monte Carlo model for the collisions between particle species and a neutral background gas defined by its temperature and pressure. In fluid-particle hybrid plasma models, collisions include combinations of particle and fluid interactions categorized by projectile-target pairing: particle-particle, particle-fluid, and fluid-fluid. For verification of this hybrid collision scheme, we compare simulation results to analytic solutions for classical plasma models. [1] Verboncoeur et al., Comput. Phys. Comm. 87, 199 (1995).
Le, Tuan-Anh; Amin, Faiz Ul; Kim, Myeong Ok
2017-01-01
The blood–brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogramed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. The experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using an electromagnetic actuation scheme. PMID:29271927
The constant displacement scheme for tracking particles in heterogeneous aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, X.H.; Gomez-Hernandez, J.J.
1996-01-01
Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme that automatically adjusts the time step for each particle according to the local pore velocity, so that each particle always travels a constant distance, is shown to be computationally faster, for the same degree of accuracy, than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural log-transmissivity variance of 4 can be 8.6 times faster than using the constant time step scheme.
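The constant displacement idea fits in a few lines (1-D purely advective tracking with an assumed piecewise-constant velocity field; the dispersive random-walk term is omitted for clarity):

```python
def track_constant_displacement(x0, velocity, ds, x_end):
    # adapt each particle's time step so it always travels a distance ds
    x, t, n = x0, 0.0, 0
    while x < x_end:
        dt = ds / abs(velocity(x))   # local time step from local velocity
        x += ds                      # displacement is constant by construction
        t += dt
        n += 1
    return t, n

velocity = lambda x: 2.0 if x < 0.5 else 0.1   # fast and slow half-domains
t, n = track_constant_displacement(0.0, velocity, ds=0.01, x_end=1.0)
print(t, n)   # travel time ~ 5.25, with equal-length steps throughout
```

With a constant time step, the fast region would force a tiny dt everywhere and the particle would crawl through the slow region in thousands of steps; fixing the displacement instead makes the step count independent of the local velocity contrast.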
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths of both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function, and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential-form weight evolution equations that conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
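The weight transfer from markers to the coarse phase-space grid can be sketched as follows (nearest-cell deposition on an assumed 1-D velocity grid with an arbitrary transfer fraction; the real scheme works on a multi-dimensional grid with shaped deposition):

```python
import numpy as np

rng = np.random.default_rng(3)
Nv, Np = 16, 1000
v = rng.uniform(0.0, 1.0, Np)        # marker velocities in [0, 1)
w = rng.normal(0.0, 1.0, Np)         # marker delta-f weights
grid = np.zeros(Nv)                  # coarse velocity-grid part of f

def transfer_weights(v, w, grid, frac=0.5):
    # move a fraction of each marker's weight to its nearest grid cell,
    # limiting marker weight growth while conserving the total content
    cells = np.minimum((v * Nv).astype(int), Nv - 1)
    moved = frac * w
    np.add.at(grid, cells, moved)
    return w - moved, grid

total0 = w.sum() + grid.sum()
w, grid = transfer_weights(v, w, grid)
total1 = w.sum() + grid.sum()
print(total0, total1)                # unchanged: the transfer is conservative
```

Because the sum of marker weights and grid content is invariant, the split distribution remains a consistent representation of f while each marker's weight stays bounded.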
Animating Wall-Bounded Turbulent Smoke via Filament-Mesh Particle-Particle Method.
Liao, Xiangyun; Si, Weixin; Yuan, Zhiyong; Sun, Hanqiu; Qin, Jing; Wang, Qiong; Heng, Pheng-Ann
2018-03-01
Turbulent vortices in smoke flows are crucial for a visually interesting appearance. Unfortunately, it is challenging to efficiently simulate these appealing effects in the framework of vortex filament methods. The vortex-filaments-in-grids scheme can efficiently generate turbulent smoke with macroscopic vortical structures, but suffers from projection-related dissipation, so the small-scale vortical structures below grid resolution are hard to capture. In addition, this scheme cannot be applied to wall-bounded turbulent smoke simulation, which requires efficiently handling smoke-obstacle interaction and creating vorticity at the obstacle boundary. To tackle the above issues, we propose an effective filament-mesh particle-particle (FMPP) method for fast wall-bounded turbulent smoke simulation with ample details. The Filament-Mesh component approximates the smooth long-range interactions by splatting vortex filaments onto a grid, solving the Poisson problem with a fast solver, and then interpolating back to smoke particles. The Particle-Particle component introduces a smoothed particle hydrodynamics (SPH) turbulence model for particles in the same grid cell, whose interactions cannot be properly captured at grid resolution. We then sample the surface of obstacles with boundary particles, allowing the interaction between smoke and obstacles to be treated as pressure forces in SPH. Besides, a vortex formation region is defined at the back of obstacles, providing smoke particles flowing past the separation particles with a vorticity force to simulate the subsequent vortex shedding phenomenon. The proposed approach can synthesize the lost small-scale vortical structures and also achieve smoke-obstacle interaction with vortex shedding at obstacle boundaries in a lightweight manner.
The experimental results demonstrate that our FMPP method can achieve more appealing visual effects than vortex filaments in grids scheme by efficiently simulating more vivid thin turbulent features.
A ubiquitous ice size bias in simulations of tropical deep convection
NASA Astrophysics Data System (ADS)
Stanford, McKenna W.; Varble, Adam; Zipser, Ed; Strapp, J. Walter; Leroy, Delphine; Schwarzenboeck, Alfons; Potts, Rodney; Protat, Alain
2017-08-01
The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) joint field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions (PSDs), and vertical velocity (w) in high ice water content regions of mature and decaying tropical mesoscale convective systems (MCSs). The resulting dataset is used here to explore causes of the commonly documented high bias in radar reflectivity within cloud-resolving simulations of deep convection. This bias has been linked to overly strong simulated convective updrafts lofting excessive condensate mass but is also modulated by parameterizations of hydrometeor size distributions, single particle properties, species separation, and microphysical processes. Observations are compared with three Weather Research and Forecasting model simulations of an observed MCS using different microphysics parameterizations while controlling for w, TWC, and temperature. Two popular bulk microphysics schemes (Thompson and Morrison) and one bin microphysics scheme (fast spectral bin microphysics) are compared. For temperatures between -10 and -40 °C and TWC > 1 g m-3, all microphysics schemes produce median mass diameters (MMDs) that are generally larger than observed, and the precipitating ice species that controls this size bias varies by scheme, temperature, and w. Despite a much greater number of samples, all simulations fail to reproduce observed high-TWC conditions ( > 2 g m-3) between -20 and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes greater than 1 mm in diameter. Although more mass is distributed to large particle sizes relative to those observed across all schemes when controlling for temperature, w, and TWC, differences with observations are significantly variable between the schemes tested. 
As a result, this bias is hypothesized to partly result from errors in parameterized hydrometeor PSD and single particle properties, but because it is present in all schemes, it may also partly result from errors in parameterized microphysical processes present in all schemes. Because of these ubiquitous ice size biases, the frequently used microphysical parameterizations evaluated in this study inherently produce a high bias in convective reflectivity for a wide range of temperatures, vertical velocities, and TWCs.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviation-based schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
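The correlated-process idea can be illustrated with a minimal sketch. All parameters here are illustrative toy values, not from the paper: the "main" process is an Ornstein-Uhlenbeck process, and the auxiliary process with known mean is a driftless diffusion driven by the same Brownian increments, so its fluctuations can be subtracted from the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative values): main OU process and a driftless
# auxiliary process with known mean E[Y_T] = x0, sharing the SAME dW.
theta, sigma, x0, T, n_steps, n_paths = 0.5, 0.5, 1.0, 1.0, 200, 50_000
dt = T / n_steps

x = np.full(n_paths, x0)   # main process:  dX = -theta*X dt + sigma dW
y = np.full(n_paths, x0)   # auxiliary:     dY =               sigma dW
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -theta * x * dt + sigma * dw
    y += sigma * dw                      # correlated through the shared dw

plain = x                                # plain Monte Carlo samples of X_T
corrected = x - (y - x0)                 # subtract the auxiliary fluctuation

exact = x0 * np.exp(-theta * T)          # known mean of the OU process
print(np.var(corrected) < np.var(plain))        # variance is reduced
print(abs(corrected.mean() - exact) < 0.01)     # estimate stays unbiased
```

Because E[Y_T - x0] = 0, the correction leaves the estimator unbiased, while the strong pathwise correlation between X and Y cancels most of the statistical noise.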
Large scale Brownian dynamics of confined suspensions of rigid particles
NASA Astrophysics Data System (ADS)
Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar
2017-12-01
We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
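The random-finite-difference (RFD) estimate of the stochastic drift can be sketched in one dimension with a hypothetical scalar mobility; the actual method applies the same idea to the many-body RPY mobility matrix, but the essential trick is identical: a derivative is estimated from two mobility evaluations and a random displacement, with no resistance solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D stand-in for a configuration-dependent mobility
# (illustrative only, not the RPY mobility of the paper).
def mobility(q):
    return 1.0 / (1.0 + q * q)

def rfd_drift(q, delta=1e-3, n_samples=200_000):
    """Random-finite-difference estimate of kT * dm/dq (with kT = 1):
    E[(m(q + delta*w/2) - m(q - delta*w/2)) * w / delta] -> m'(q)
    for w ~ N(0, 1), using only mobility evaluations."""
    w = rng.normal(0.0, 1.0, n_samples)
    d = (mobility(q + 0.5 * delta * w) - mobility(q - 0.5 * delta * w)) * w / delta
    return d.mean()

q = 1.0
exact = -2.0 * q / (1.0 + q * q) ** 2    # analytic m'(q), for checking only
print(abs(rfd_drift(q) - exact) < 0.02)
```

In the many-particle setting, `w` becomes a random vector and the same centered difference recovers the divergence of the mobility in expectation.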
Conformal Electromagnetic Particle in Cell: A Review
Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...
2015-10-26
We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.
Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics
NASA Astrophysics Data System (ADS)
Benítez-Llambay, Alejandro
2017-12-01
Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
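The SPH interpolation scheme underlying such density estimates can be sketched in one dimension with a generic cubic-spline kernel (a textbook form, not necessarily Py-SPHViewer's exact implementation): each particle's mass is smeared by the kernel and the contributions are summed.

```python
import numpy as np

def w_cubic(r, h):
    """Standard 1-D cubic-spline (M4) SPH kernel, normalized to integrate to one."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Illustrative setup: equal-mass particles tracing a uniform density rho0 = 1.
n, rho0 = 200, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx            # particle positions on [0, 1]
m = rho0 * dx                            # mass per particle
h = 1.5 * dx                             # smoothing length

# SPH estimate of the density field at one interior particle:
i = n // 2
rho_i = np.sum(m * w_cubic(x[i] - x, h))
print(abs(rho_i - rho0) < 0.01)          # recovers the underlying density
```

Replacing the mass by any other particle property in the sum gives the smoothed field of that property, which is what the code rasterizes into images.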
Particle tagging and its implications for stellar population dynamics
NASA Astrophysics Data System (ADS)
Le Bret, Theo; Pontzen, Andrew; Cooper, Andrew P.; Frenk, Carlos; Zolotov, Adi; Brooks, Alyson M.; Governato, Fabio; Parry, Owen H.
2017-07-01
We establish a controlled comparison between the properties of galactic stellar haloes obtained with hydrodynamical simulations and with 'particle tagging'. Tagging is a fast way to obtain stellar population dynamics: instead of tracking gas and star formation, it 'paints' stars directly on to a suitably defined subset of dark matter particles in a collisionless, dark-matter-only simulation. Our study shows that 'live' particle tagging schemes, where stellar masses are painted on to the dark matter particles dynamically throughout the simulation, can generate good fits to the hydrodynamical stellar density profiles of a central Milky Way-like galaxy and its most prominent substructure. Energy diffusion processes are crucial to reshaping the distribution of stars in infalling spheroidal systems and hence the final stellar halo. We conclude that the success of any particular tagging scheme hinges on this diffusion being taken into account, and discuss the role of different subgrid feedback prescriptions in driving this diffusion.
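A single "live" tagging step can be sketched under strong simplifying assumptions: a fixed most-bound fraction `f_mb` and equal splitting of the newly formed stellar mass. The actual prescriptions compared in the paper differ in detail; this only illustrates the bookkeeping of painting stellar mass onto dark matter particles.

```python
import numpy as np

# Hypothetical sketch of one live-tagging step: paint a parcel of newly
# formed stellar mass onto the most-bound fraction f_mb of a halo's
# dark matter particles (f_mb and the equal split are assumptions).
def tag_particles(energies, stellar_weights, new_star_mass, f_mb=0.05):
    """energies: specific binding energy per DM particle (lower = more bound).
    stellar_weights: stellar mass already painted onto each particle."""
    n_tag = max(1, int(f_mb * len(energies)))
    most_bound = np.argsort(energies)[:n_tag]      # most-bound subset
    stellar_weights[most_bound] += new_star_mass / n_tag
    return stellar_weights

rng = np.random.default_rng(2)
energies = rng.normal(size=1000)
weights = tag_particles(energies, np.zeros(1000), new_star_mass=1.0)
print(np.isclose(weights.sum(), 1.0))    # painted stellar mass is conserved
```

Repeating this at every star formation event, with energies re-evaluated as the halo evolves, is what makes the scheme "live" rather than a single post-hoc assignment.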
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
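One common update rule for a coagulation event between two weighted Monte Carlo particles can be sketched as follows. This is a generic weighted-particle rule, not necessarily the paper's family of transformations: the lighter-weight particle becomes the merged particle while the heavier-weight particle survives with reduced weight, so the particle count stays constant and total mass is conserved exactly.

```python
import numpy as np

def coagulate(v, w, i, j):
    """One coagulation event between weighted particles i and j.
    v: particle volumes, w: statistical weights. The update conserves
    total mass sum(w*v) and keeps the number of simulation particles fixed."""
    if w[i] < w[j]:
        i, j = j, i                  # ensure w[i] >= w[j]
    w[i] -= w[j]                     # heavier-weight particle gives up w[j]
    v[j] = v[i] + v[j]               # lighter one carries the merged volume
    return v, w

v = np.array([1.0, 2.0])
w = np.array([10.0, 4.0])
mass_before = np.sum(v * w)                      # 1*10 + 2*4 = 18
v, w = coagulate(v, w, 0, 1)
print(np.isclose(np.sum(v * w), mass_before))    # mass conserved exactly
```

A constant-number scheme built on such events avoids the particle depletion that plagues direct simulation of coagulation, at the cost of carrying and updating the weights.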
NASA Astrophysics Data System (ADS)
Stanford, M.; Varble, A.; Zipser, E. J.; Strapp, J. W.; Leroy, D.; Schwarzenboeck, A.; Korolev, A.; Potts, R.
2016-12-01
A model intercomparison study is conducted to identify biases in simulated tropical convective core microphysical properties using two popular bulk parameterization schemes (Thompson and Morrison) and the Fast Spectral Bin Microphysics (FSBM) scheme. In-situ aircraft measurements of total condensed water content (TWC) and particle size distributions are compared with output from high-resolution WRF simulations of 4 mesoscale convective system (MCS) cases during the High Altitude Ice Crystals-High Ice Water Content (HAIC-HIWC) field campaign conducted in Darwin, Australia in 2014 and Cayenne, French Guiana in 2015. Observations of TWC collected using an isokinetic evaporator probe (IKP) optimized for high IWC measurements in conjunction with particle image processing from two optical array probes aboard the Falcon-20 research aircraft were used to constrain mass-size relationships in the observational dataset. Hydrometeor mass size distributions are compared between retrievals and simulations providing insight into the well-known high bias in simulated convective radar reflectivity. For TWC > 1 g m-3 between -10 and -40°C, simulations generally produce significantly greater median mass diameters (MMDs). Observations indicate that a sharp particle size mode occurs at 300 μm for large TWC values (> 2 g m-3) regardless of temperature. All microphysics schemes fail to reproduce this feature, and relative contributions of different hydrometeor species to this size bias vary between schemes. Despite far greater sample sizes, simulations also fail to produce high TWC conditions with very little of the mass contributed by large particles for a range of temperatures, despite such conditions being observed. Considering vapor grown particles alone in comparison with observations fails to correct the bias present in all schemes. 
Decreasing horizontal resolution from 1 km to 333 m shifts graupel and rain size distributions to slightly smaller sizes, but increased resolution alone will clearly not eliminate model biases. Results instead indicate that biases in both hydrometeor size distribution assumptions and parameterized processes also exist and need to be addressed before cloud and precipitation properties of convective systems can be adequately predicted.
Qi, Shuanhu; Schmid, Friederike
2017-11-08
We present a multiscale hybrid particle-field scheme for the simulation of relaxation and diffusion behavior of soft condensed matter systems. It combines particle-based Brownian dynamics and field-based local dynamics in an adaptive sense such that particles can switch their level of resolution on the fly. The switching of resolution is controlled by a tuning function which can be chosen at will according to the geometry of the system. As an application, the hybrid scheme is used to study the kinetics of interfacial broadening of a polymer blend, and is validated by comparing the results to the predictions from pure Brownian dynamics and pure local dynamics calculations.
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers collective behavior, similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as for understanding differences between δf and full-f approaches to plasma simulation.
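The kind of implicit update analyzed here can be sketched for a generic δf weight equation of the form dw/dt = a(t)(1 - w), where a(t) stands in for the driving term along a particle trajectory. This is an illustrative θ-scheme, not the paper's specific discretization; because the equation is linear in w, the implicit step has a closed-form solution.

```python
import numpy as np

def theta_step(w, a_n, a_np1, dt, theta=0.5):
    """One theta-implicit step for dw/dt = a(t)*(1 - w), solved in closed
    form for w^{n+1}:
      w^{n+1} = w^n + dt*[theta*a^{n+1}*(1-w^{n+1}) + (1-theta)*a^n*(1-w^n)]"""
    rhs = w + dt * (theta * a_np1 + (1.0 - theta) * a_n * (1.0 - w))
    return rhs / (1.0 + dt * theta * a_np1)

# Constant driving a = 1: the exact solution is w(t) = 1 - exp(-t).
a, dt, n_steps = 1.0, 0.01, 100
w = 0.0
for _ in range(n_steps):
    w = theta_step(w, a, a, dt)
exact = 1.0 - np.exp(-a * n_steps * dt)
print(abs(w - exact) < 1e-5)     # trapezoidal (theta = 0.5) is 2nd order
```

Varying `theta` between 0 (explicit) and 1 (fully implicit) is exactly the implicitness parameter whose effect on stability and accuracy the weight-equation analysis characterizes.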
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Lulin; Fan, Jiwen; Lebo, Zachary J.
The squall line event of May 20, 2011, during the Midlatitude Continental Convective Clouds (MC3E) field campaign has been simulated by three bin (spectral) microphysics schemes coupled into the Weather Research and Forecasting (WRF) model. Semi-idealized three-dimensional simulations, driven by temperature and moisture profiles acquired by a radiosonde released in the pre-convection environment at 1200 UTC in Morris, Oklahoma, show that each scheme produced a squall line with features broadly consistent with the observed storm characteristics. However, substantial differences in the details of the simulated dynamic and thermodynamic structure are evident. These differences are attributed to different algorithms and numerical representations of microphysical processes, assumptions about hydrometeor processes and properties (especially ice particle mass, density, and terminal velocity relationships with size), and the resulting interactions between the microphysics, cold pool, and dynamics. This study shows that different bin microphysics schemes, designed to be conceptually more realistic and thus arguably more accurate than bulk microphysics schemes, still simulate a wide spread of microphysical, thermodynamic, and dynamic characteristics of a squall line, qualitatively similar to the spread of squall line characteristics obtained with various bulk schemes. Future work may focus on improving the representation of ice particle properties in bin schemes to reduce this uncertainty and on using similar assumptions across all schemes to isolate the impact of physics from numerics.
Applicability of effective pair potentials for active Brownian particles.
Rein, Markus; Speck, Thomas
2016-09-01
We have performed a case study investigating a recently proposed scheme to obtain an effective pair potential for active Brownian particles (Farage et al., Phys. Rev. E 91, 042310 (2015)). Applying this scheme to the Lennard-Jones potential, numerical simulations of active Brownian particles are compared to simulations of passive Brownian particles interacting by the effective pair potential. Analyzing the static pair correlations, our results indicate a limited range of activity parameters (speed and orientational correlation time) for which we obtain quantitative, or even qualitative, agreement. Moreover, we find a qualitatively different behavior for the virial pressure even for small propulsion speeds. Combining these findings we conclude that beyond linear response active particles exhibit genuine non-equilibrium properties that cannot be captured by effective pair interaction alone.
Explicit simulation of ice particle habits in a Numerical Weather Prediction Model
NASA Astrophysics Data System (ADS)
Hashino, Tempei
2007-05-01
This study developed a scheme for the explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and its goal is to retain the growth history of ice particles within the Eulerian dynamics framework. It diagnoses the characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so errors that stem from such categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequencies and growth rates, and simulates habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysical processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.
Crystal collimator systems for high energy frontier
NASA Astrophysics Data System (ADS)
Sytov, A. I.; Tikhomirov, V. V.; Lobko, A. S.
2017-07-01
Crystalline collimators can potentially improve the cleaning performance of present collimation systems based on amorphous collimators considerably. A crystal-based collimation scheme, which relies on the deflection of channeled particles in bent crystals, has been proposed and extensively studied both theoretically and experimentally. However, since the efficiency of particle capture into the channeling regime does not exceed ninety percent, this collimation scheme partly suffers from the same leakage problems as schemes using amorphous collimators. To further improve the cleaning efficiency of the crystal-based collimation system to meet the requirements of the FCC, we suggest here a double-crystal collimation scheme, in which a second crystal is introduced to enhance the deflection of particles that escape capture into the channeling regime in the first crystal. The application of the effect of multiple volume reflection in one bent crystal, and of the same effect in a sequence of crystals, is simulated and compared for different crystal numbers and materials at an energy of 50 TeV. To also enhance the efficiency of the first crystal of the suggested double-crystal scheme, we propose two methods: increasing the probability of particle capture into the channeling regime at the first crystal passage by fabricating a crystal cut, and amplifying the deflection of nonchanneled particles through multiple volume reflection in one bent crystal, accompanying the particle channeling by a skew plane. We simulate both of these methods at the 50 TeV FCC energy.
NASA Astrophysics Data System (ADS)
Stanford, McKenna W.
The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions, and vertical velocity (w) in high ice water content regions of tropical mesoscale convective systems (MCSs). These observations are used to evaluate deep convective updraft properties in high-resolution nested Weather Research and Forecasting (WRF) simulations of observed MCSs. Because simulated hydrometeor properties are highly sensitive to the parameterization of microphysics, three commonly used microphysical parameterizations are tested, including two bulk schemes (Thompson and Morrison) and one bin scheme (Fast Spectral Bin Microphysics). A commonly documented bias in cloud-resolving simulations is the exaggeration of simulated radar reflectivities aloft in tropical MCSs. This may result from overly strong convective updrafts that loft excessive condensate mass and from simplified approximations of hydrometeor size distributions, properties, species separation, and microphysical processes. The degree to which the reflectivity bias is a separate function of convective dynamics, condensate mass, and hydrometeor size has yet to be addressed. This research untangles these components by comparing simulated and observed relationships between w, TWC, and hydrometeor size as a function of temperature. All microphysics schemes produce median mass diameters that are generally larger than observed for temperatures between -10 °C and -40 °C and TWC > 1 g m-3. Observations produce a prominent mode in the composite mass size distribution around 300 μm, but under most conditions, all schemes shift the distribution mode to larger sizes. Despite a much greater number of samples, all simulations fail to reproduce observed high TWC or high w conditions between -20 °C and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes.
Increasing model resolution and employing explicit cloud droplet nucleation decrease the size bias, but not nearly enough to reproduce observations. Because simulated particle sizes are too large across all schemes when controlling for temperature, w, and TWC, this bias is hypothesized to partly result from errors in parameterized microphysical processes in addition to overly simplified hydrometeor properties such as mass-size relationships and particle size distribution parameters.
Validation of Microphysical Schemes in a CRM Using TRMM Satellite
NASA Astrophysics Data System (ADS)
Li, X.; Tao, W.; Matsui, T.; Liu, C.; Masunaga, H.
2007-12-01
The microphysical scheme in the Goddard Cumulus Ensemble (GCE) model has been its most heavily developed component over the past decade. The cloud-resolving model now has microphysical schemes ranging from the original Lin-type bulk scheme, to improved bulk schemes, to a two-moment scheme, to a detailed bin spectral scheme. Even with the most sophisticated bin scheme, many uncertainties still exist, especially in the ice-phase microphysics. In this study, we take advantage of long-term TRMM observations, especially the cloud profiles observed by the precipitation radar (PR), to validate microphysical schemes in simulations of mesoscale convective systems (MCSs). Two contrasting cases are studied: a midlatitude summertime continental MCS with leading convection and a trailing stratiform region, and an oceanic MCS in the tropical western Pacific. The simulated cloud structures and particle sizes are fed into a forward radiative transfer model to simulate the TRMM satellite sensors, i.e., the PR, the TRMM Microwave Imager (TMI) and the Visible and Infrared Scanner (VIRS). MCS cases that match the structure and strength of the simulated systems over the 10-year period are used to construct statistics for the different sensors. These statistics are then compared with the synthetic satellite data obtained from the forward radiative transfer calculations. It is found that the GCE model simulates the contrast between the continental and oceanic cases reasonably well, with less ice scattering in the oceanic case compared with the continental case. However, the simulated ice scattering signals for both PR and TMI are generally stronger than the observations, especially for the bulk scheme and at upper levels in the stratiform region. This indicates larger, denser snow/graupel particles at these levels. Adjusting the microphysical schemes in the GCE model according to the observations, especially the 3D cloud structure observed by the TRMM PR, results in much better agreement.
Spring and summer contrast in new particle formation over nine forest areas in North America
Yu, F.; Luo, G.; Pryor, S. C.; ...
2015-12-18
Recent laboratory chamber studies indicate a significant role for highly oxidized low-volatility organics in new particle formation (NPF), but the actual role of these highly oxidized low-volatility organics in atmospheric NPF remains uncertain. Here, particle size distributions (PSDs) measured in nine forest areas in North America are used to characterize the occurrence and intensity of NPF and to evaluate model simulations using an empirical formulation in which the formation rate is a function of the concentrations of sulfuric acid and low-volatility organics from alpha-pinene oxidation (Nucl-Org), and using an ion-mediated nucleation mechanism excluding organics (Nucl-IMN). On average, NPF occurred on ~70 % of days during March for the four forest sites with springtime PSD measurements, while NPF occurred on only ~10 % of days in July for all nine forest sites. Both the Nucl-Org and Nucl-IMN schemes capture the observed high frequency of NPF in spring, but the Nucl-Org scheme significantly overpredicts, while the Nucl-IMN scheme slightly underpredicts, NPF and particle number concentrations in summer. Statistical analyses of observed and simulated ultrafine particle number concentrations and of the frequency of NPF events indicate that the scheme without organics agrees better overall with observations. The two schemes predict quite different nucleation rates (including their spatial patterns), concentrations of cloud condensation nuclei, and aerosol first indirect radiative forcing in North America, highlighting the need to reduce NPF uncertainties in regional and global Earth system models.
Sparse grid techniques for particle-in-cell schemes
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.
2017-02-01
We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
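The combination technique referenced above can be sketched for a 2-D interpolation problem in its classical textbook form (independent of the PIC application): the target field is assembled from bilinear interpolants on a family of anisotropic grids, summed with +1/-1 coefficients.

```python
import numpy as np

def interp_on_grid(f, ni, nj, x, y):
    """Bilinear interpolation of f at (x, y) on a (2^ni + 1) x (2^nj + 1)
    uniform grid over [0, 1]^2 -- one anisotropic component grid."""
    gx = np.linspace(0.0, 1.0, 2**ni + 1)
    gy = np.linspace(0.0, 1.0, 2**nj + 1)
    F = f(gx[:, None], gy[None, :])
    i = min(int(x * 2**ni), 2**ni - 1)
    j = min(int(y * 2**nj), 2**nj - 1)
    tx = (x - gx[i]) / (gx[i + 1] - gx[i])
    ty = (y - gy[j]) / (gy[j + 1] - gy[j])
    return ((1 - tx) * (1 - ty) * F[i, j] + tx * (1 - ty) * F[i + 1, j]
            + (1 - tx) * ty * F[i, j + 1] + tx * ty * F[i + 1, j + 1])

def combination(f, n, x, y):
    """Classical 2-D combination technique:
    sum_{i+j=n} u_{i,j} - sum_{i+j=n-1} u_{i,j}."""
    plus = sum(interp_on_grid(f, i, n - i, x, y) for i in range(n + 1))
    minus = sum(interp_on_grid(f, i, n - 1 - i, x, y) for i in range(n))
    return plus - minus

f = lambda x, y: x * y       # bilinear test function: reproduced exactly
print(abs(combination(f, 5, 0.3, 0.7) - 0.21) < 1e-12)
```

Each component grid has far fewer cells than the full tensor grid at the same level, which is the source of the memory and noise advantages the paper exploits for PIC.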
New, Improved Goddard Bulk-Microphysical Schemes for Studying Precipitation Processes in WRF
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2007-01-01
An improved bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel) and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of the microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with the summer midlatitude convective line system. The Goddard 3ICE scheme with the cloud ice-snow-hail configuration agreed better with observations in terms of rainfall intensity and a narrow convective line than did simulations with the cloud ice-snow-graupel or cloud ice-snow (i.e., 2ICE) configuration. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) particle with very fast fall speeds (over 10 m/s). For an Atlantic hurricane case, the Goddard microphysical schemes had no significant impact on the track forecast but did affect the intensity slightly. The improved Goddard schemes are also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes resulted in simulated precipitation events that were elongated in the southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE scheme with the hail option and the Thompson scheme agreed better with observations in terms of rainfall intensity, except that the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes. 
The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, which is an important issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and hurricane cases. Sensitivity tests performed for these two WRF schemes show that snow production can be increased by increasing the snow intercept, turning off the auto-conversion from snow to graupel, and reducing the transfer processes from cloud-sized particles to precipitation-sized ice.
Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes
Zhang, Xiaomei; Tajima, Toshiki; Farinella, Deano; ...
2016-10-18
Though wakefield acceleration in crystal channels has been previously proposed, x-ray wakefield acceleration has only recently become a realistic possibility since the invention of the single-cycled optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of conventional plasma-based wakefield acceleration, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high energy photons at ~O(10-100) MeV. Here, our simulations confirm such high energy photon emission, which is in contrast with that induced by the optical-laser-driven wakefield scheme. In addition, the significantly improved emittance of the energetic electrons is discussed.
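The quoted TeV/cm scale and the ~3 orders of magnitude gain can be cross-checked with the standard cold, nonrelativistic wave-breaking field E0 = m_e c ω_p / e, which scales as the square root of the electron density. The densities below are assumed illustrative values (a gas-jet-like 1e18 cm^-3 versus a near-solid 1e24 cm^-3 inside a nanotube), not numbers taken from the paper.

```python
import math

# Physical constants (SI).
e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

def wave_breaking_field(n_cm3):
    """Cold nonrelativistic wave-breaking field E0 = m_e*c*omega_p/e in V/m."""
    n = n_cm3 * 1e6                               # cm^-3 -> m^-3
    omega_p = math.sqrt(n * e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
    return m_e * c * omega_p / e

gas = wave_breaking_field(1e18)     # ~1e11 V/m: optical-laser gas-plasma regime
solid = wave_breaking_field(1e24)   # ~1e14 V/m = 1e12 V/cm, i.e. TeV/cm scale
print(round(math.log10(solid / gas), 1))   # -> 3.0 (three orders of magnitude)
```

Since E0 ∝ sqrt(n_e), the factor of 1e6 in density between gas-plasma and near-solid targets translates directly into the factor of ~1000 in attainable gradient.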
NASA Technical Reports Server (NTRS)
Tao, W.K.; Shi, J.J.; Braun, S.; Simpson, J.; Chen, S.S.; Lang, S.; Hong, S.Y.; Thompson, G.; Peters-Lidard, C.
2009-01-01
A Goddard bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel) and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events: a midlatitude linear convective system and an Atlantic hurricane. The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The Goddard 3ICE scheme with the cloud ice-snow-hail configuration agreed better with observations, in terms of rainfall intensity and in producing a narrow convective line, than did simulations with the cloud ice-snow-graupel and cloud ice-snow (i.e., 2ICE) configurations. This is because the Goddard 3ICE-hail configuration has denser precipitating ice particles (hail) with very fast fall speeds (over 10 m/s). For the Atlantic hurricane case, the Goddard microphysical scheme (with the 3ICE-hail, 3ICE-graupel and 2ICE configurations) had no significant impact on the track forecast but did affect the intensity slightly. The Goddard scheme is also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes produced simulated precipitation events elongated in the southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE-hail and Thompson schemes were closest to the observed rainfall intensities, although the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes.
The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, an important issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and hurricane cases. Sensitivity tests with these two schemes showed that increasing the snow intercept, turning off the auto-conversion from snow to graupel, eliminating dry growth, and reducing the transfer processes from cloud-sized particles to precipitation-sized ice collectively resulted in a net increase in those schemes' snow amounts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, F.; Luo, G.; Pryor, S. C.
Recent laboratory chamber studies indicate a significant role for highly oxidized low-volatility organics in new particle formation (NPF), but the actual role of these organics in atmospheric NPF remains uncertain. Here, particle size distributions (PSDs) measured in nine forest areas in North America are used to characterize the occurrence and intensity of NPF and to evaluate model simulations using an empirical formulation in which the formation rate is a function of the concentrations of sulfuric acid and low-volatility organics from alpha-pinene oxidation (Nucl-Org), and using an ion-mediated nucleation mechanism excluding organics (Nucl-IMN). On average, NPF occurred on ~70 % of days during March for the four forest sites with springtime PSD measurements, while NPF occurred on only ~10 % of days in July for all nine forest sites. Both the Nucl-Org and Nucl-IMN schemes capture the observed high frequency of NPF in spring, but the Nucl-Org scheme significantly overpredicts, while the Nucl-IMN scheme slightly underpredicts, NPF and particle number concentrations in summer. Statistical analyses of observed and simulated ultrafine particle number concentrations and frequency of NPF events indicate that the scheme without organics agrees better overall with observations. The two schemes predict quite different nucleation rates (including their spatial patterns), concentrations of cloud condensation nuclei, and aerosol first indirect radiative forcing in North America, highlighting the need to reduce NPF uncertainties in regional and global earth system models.
Seakeeping with the semi-Lagrangian particle finite element method
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio
2017-07-01
The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
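The X-IVAS update amounts to advancing each particle explicitly along its streamline using the locally sampled velocity and acceleration. A minimal sketch of one such update, assuming the velocity and acceleration are frozen over the step (the actual SL-PFEM integrates along streamlines of the projected field, so this is illustrative only):

```python
import numpy as np

def xivas_step(x, v, a, dt):
    """Explicit update of position and velocity along a streamline,
    assuming the velocity and acceleration sampled at the particle
    are constant over the step (illustrative X-IVAS-style sketch)."""
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + a * dt
    return x_new, v_new
```

The small-time-step limitation noted in the abstract stems from this explicit treatment: accuracy degrades when the fields vary strongly along the trajectory within a single step.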
Flare particle acceleration in the interaction of twisted coronal flux ropes
NASA Astrophysics Data System (ADS)
Threlfall, J.; Hood, A. W.; Browning, P. K.
2018-03-01
Aims: We investigate and characterise non-thermal particle behaviour in a three-dimensional (3D) magnetohydrodynamic (MHD) model of unstable multi-threaded flaring coronal loops. Methods: We used a numerical scheme which solves the relativistic guiding centre approximation to study the motion of electrons and protons. The scheme uses snapshots from high-resolution numerical MHD simulations of coronal loops containing two threads, where a single thread becomes unstable and (in one case) destabilises and merges with an additional thread. Results: The particle responses to the reconnection and fragmentation in MHD simulations of two loop threads are examined in detail. We illustrate the role played by uniform background resistivity and distinguish this from the role of anomalous resistivity using orbits in an MHD simulation where only one thread becomes unstable without destabilising further loop threads. We examine the (scalable) orbit energy gains and final positions recovered at different stages of a second MHD simulation wherein a secondary loop thread is destabilised by (and merges with) the first thread. We compare these results with other theoretical particle acceleration models in the context of observed energetic particle populations during solar flares.
Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.
Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann
2015-01-01
Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intense, finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of details. After computing the full velocity field, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.
Proteus: a direct forcing method in the simulations of particulate flows
NASA Astrophysics Data System (ADS)
Feng, Zhi-Gang; Michaelides, Efstathios E.
2005-01-01
A new and efficient direct numerical method for the simulation of particulate flows is introduced. The method combines desired elements of the immersed boundary method, the direct forcing method and the lattice Boltzmann method. Adding a forcing term in the momentum equation enforces the no-slip condition on the boundary of a moving particle. By applying the direct forcing scheme, Proteus eliminates the need to determine free parameters, such as the stiffness coefficient in the penalty scheme or the two relaxation parameters in the adaptive-forcing scheme. The method presents a significant improvement over the previously introduced immersed-boundary lattice-Boltzmann method (IB-LBM), where the forcing term was computed using a penalty method and a user-defined parameter. The method allows the enforcement of the rigid-body motion of a particle in a more efficient way. Compared to the "bounce-back" scheme used in the conventional LBM, the direct-forcing method provides a smoother computational boundary for particles and is capable of achieving results at higher Reynolds numbers. By using a set of Lagrangian points to track the boundary of a particle, Proteus eliminates any need to determine the boundary nodes prescribed by the "bounce-back" scheme at every time step. It also makes computations for particles of irregular shapes simpler and more efficient. Proteus has been developed in two as well as three dimensions. This new method has been validated by comparing its results with experimental measurements for a single sphere settling in an enclosure under gravity. As a demonstration of the efficiency and capabilities of the present method, the settling of a large number (1232) of spherical particles is simulated in a narrow box under two different boundary conditions. It is found that when the no-slip boundary condition is imposed at the front and rear sides of the box, the particles' motion is significantly hindered.
Under periodic boundary conditions, the particles move faster. The simulations show that the sedimentation characteristics in a box with periodic boundary conditions at the two sides are very close to those found in the sedimentation of two-dimensional circular particles. In Greek mythology, Proteus is a hero, the son of Poseidon. In addition to his ability to change shape and take different forms at will, Zeus granted him the power to make correct predictions about the future. One cannot expect better attributes from a numerical code.
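The direct-forcing idea can be summarized in one line: the force density at a boundary Lagrangian point is whatever is needed to drive the interpolated fluid velocity to the local rigid-body velocity within one time step, with no tunable parameters. A hedged sketch of that core relation (the actual Proteus method embeds it in the lattice Boltzmann update with interpolation to and from the Lagrangian points):

```python
import numpy as np

def direct_forcing_term(u_fluid, u_rigid, dt):
    """Force density at a Lagrangian boundary point: chosen so the
    fluid velocity interpolated there reaches the local rigid-body
    velocity after one step of length dt. No stiffness coefficient
    or relaxation parameters are required, in contrast with penalty
    or adaptive-forcing schemes. (Illustrative sketch.)"""
    return (np.asarray(u_rigid) - np.asarray(u_fluid)) / dt
```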
ITG-TEM turbulence simulation with bounce-averaged kinetic electrons in tokamak geometry
NASA Astrophysics Data System (ADS)
Kwon, Jae-Min; Qi, Lei; Yi, S.; Hahm, T. S.
2017-06-01
We develop a novel numerical scheme to simulate electrostatic turbulence with kinetic electron responses in magnetically confined toroidal plasmas. Focusing on ion gyro-radius-scale turbulence with frequencies slower than the time scales of electron parallel motion, we employ and adapt the bounce-averaged kinetic equation to model trapped electrons for nonlinear turbulence simulation with Coulomb collisions. Ions are modeled by the gyrokinetic equation. The newly developed scheme is implemented in the global δf particle-in-cell code gKPSP. Linear and nonlinear simulations demonstrate that the new scheme can reproduce key physical properties of ion temperature gradient (ITG) and trapped electron mode (TEM) instabilities and the resulting turbulent transport. The overall computational cost of kinetic electrons using this scheme is limited to 200%-300% of the cost of simulations with adiabatic electrons. The new scheme therefore allows us to perform kinetic simulations with trapped electrons very efficiently in magnetized plasmas.
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques, in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of pressure, starting from the expression for the free energy functional in SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Chacon, Luis
2015-09-01
We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large time steps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. In this presentation, we introduce the main algorithmic components of the approach and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. Support from the LANL LDRD program and the DOE-SC ASCR office is acknowledged.
A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations
NASA Astrophysics Data System (ADS)
Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica
2012-02-01
We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted/rejected in a Monte Carlo fashion, as opposed to moving particles following a stochastic equation of motion, nonphysical movements (e.g., violation of a hard-core assumption) are not possible (these moves have zero acceptance). Further, the absence of a stochastic "noise" term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the time step is independent of the magnitude of the interaction forces, much longer time steps can be employed than in Brownian dynamics. We discuss the applications of this scheme to the dynamic self-assembly of photo-switchable nanoparticles and to dynamical problems in polymeric systems.
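The Glauber-type acceptance rule referenced above has a simple form. A hedged sketch of one move decision (illustrative only; the paper matches the rates to a renormalized master equation of the diffusion process):

```python
import math

def glauber_accept(delta_e, kT, u):
    """Glauber acceptance for a trial off-lattice displacement:
    p = 1 / (1 + exp(dE / kT)), with u a uniform random number in
    [0, 1). A hard-core overlap is modeled as dE = inf, so the move
    is always rejected: nonphysical moves have zero acceptance, as
    the abstract notes."""
    if math.isinf(delta_e):
        return False
    return u < 1.0 / (1.0 + math.exp(delta_e / kT))
```

Note that at dE = 0 the acceptance probability is 1/2, not 1 as in the Metropolis rule; this is the defining feature of Glauber dynamics.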
NASA Astrophysics Data System (ADS)
Kurilenkov, Yu K.; Tarakanov, V. P.; Gus'kov, S. Yu
2016-11-01
The neutron-free reaction of proton-boron nuclear burning, accompanied by the yield of three alpha particles (p + 11B → α + 8Be* → 3α), is of great fundamental and applied interest. However, the p + 11B synthesis requires extreme plasma parameters that are difficult to achieve in well-known schemes of controlled thermonuclear fusion. Earlier, the yield of DD neutrons in a compact, low-energy nanosecond vacuum discharge (NVD) with a deuterated Pd anode was observed. Further detailed particle-in-cell simulation with an electrodynamic code showed that this experiment realizes the rather old scheme of inertial electrostatic confinement (IEC). This IEC scheme is one of the few in which the ion energies needed for the p + 11B reaction are attainable. The purpose of this simulation work on the proton-boron reaction is to study the features of possible p + 11B burning in the NVD-based IEC scheme and thus to look ahead and plan a real experiment.
Fast Multipole Methods for Three-Dimensional N-body Problems
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.
1995-01-01
We are developing computational tools for the simulation of three-dimensional flows past bodies undergoing arbitrary motions. High-resolution viscous vortex methods have been developed that allow for extended simulations of two-dimensional configurations such as vortex generators. Our objective is to extend this methodology to three dimensions and develop a robust computational scheme for the simulation of such flows. A fundamental issue in the use of vortex methods is the ability to employ efficiently large numbers of computational elements to resolve the large range of scales that exist in complex flows. The traditional cost of the method scales as O(N²), as the N computational elements/particles induce velocities on each other, making the method unacceptable for simulations involving more than a few tens of thousands of particles. In the last decade, fast methods have been developed that have operation counts of O(N log N) or O(N) (referred to as BH and GR, respectively), depending on the details of the algorithm. These methods are based on the observation that the effect of a cluster of particles at a certain distance may be approximated by a finite series expansion. In order to exploit this observation, we need to decompose the element population spatially into clusters of particles and build a hierarchy of clusters (a tree data structure): smaller neighboring clusters combine to form a cluster of the next size up in the hierarchy, and so on. This hierarchy of clusters allows one to determine efficiently when the approximation is valid. This algorithm is an N-body solver that appears in many fields of engineering and science. Some examples of its diverse use are in astrophysics, molecular dynamics, micromagnetics, boundary element simulations of electromagnetic problems, and computer animation. More recently these N-body solvers have been implemented and applied in simulations involving vortex methods.
Koumoutsakos and Leonard (1995) implemented the GR scheme in two dimensions for vector computer architectures allowing for simulations of bluff body flows using millions of particles. Winckelmans presented three-dimensional, viscous simulations of interacting vortex rings, using vortons and an implementation of a BH scheme for parallel computer architectures. Bhatt presented a vortex filament method to perform inviscid vortex ring interactions, with an alternative implementation of a BH scheme for a Connection Machine parallel computer architecture.
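The central observation behind both the BH and GR schemes, that a sufficiently distant cluster can be replaced by a truncated series expansion, is easy to demonstrate at lowest order. The sketch below compares a direct sum of -m/r potentials with the monopole (zeroth-order) approximation, in which the whole cluster acts as a point of total mass at its centre of mass; the higher-order multipole terms used in practice shrink the error further for a given opening distance:

```python
import numpy as np

def direct_potential(target, sources, masses):
    """Direct O(N) sum of -m/r potentials at one target point."""
    r = np.linalg.norm(sources - target, axis=1)
    return -np.sum(masses / r)

def monopole_potential(target, sources, masses):
    """Lowest-order cluster expansion: the cluster is replaced by
    its total mass placed at its centre of mass."""
    m_total = masses.sum()
    com = (masses[:, None] * sources).sum(axis=0) / m_total
    return -m_total / np.linalg.norm(com - target)
```

For a unit-sized cluster evaluated ten cluster radii away, the monopole error is already well below one percent, which is why a tree traversal can safely accept distant cells.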
Shock interaction with deformable particles using a constrained interface reinitialization scheme
NASA Astrophysics Data System (ADS)
Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.
2016-02-01
In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume-based solver on a Cartesian grid that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle occur on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption of the point-force models to non-ideal equations of state, thus justifying their use for the current problem.
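A common parameterization of the Mie-Gruneisen equation of state uses a linear shock-Hugoniot reference curve. The abstract does not give the authors' exact reference curve or material constants for nitromethane and aluminum, so the form and parameters below are an illustrative assumption, not the paper's implementation:

```python
def mie_gruneisen_pressure(rho, e, rho0, c0, s, gamma0):
    """Mie-Gruneisen pressure with a linear-Hugoniot reference
    (assumed form; not necessarily the paper's):
      mu  = rho/rho0 - 1
      p_H = rho0*c0^2*mu*(1+mu) / (1 - (s-1)*mu)^2   for compression
      p   = p_H*(1 - gamma0*mu/2) + gamma0*rho0*e
    In expansion (mu < 0) a linear elastic response is substituted."""
    mu = rho / rho0 - 1.0
    if mu > 0.0:
        p_h = rho0 * c0**2 * mu * (1.0 + mu) / (1.0 - (s - 1.0) * mu)**2
        return p_h * (1.0 - 0.5 * gamma0 * mu) + gamma0 * rho0 * e
    return rho0 * c0**2 * mu + gamma0 * rho0 * e
```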
Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats
NASA Astrophysics Data System (ADS)
Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.
2018-03-01
Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.
Ng, Tuck Wah; Neild, Adrian; Heeraman, Pascal
2008-03-15
Feasible sorters need to function rapidly and permit the input and delivery of particles continuously. Here, we describe a scheme that incorporates (i) a restricted spatial input location and (ii) orthogonal sort and movement directions. Sorting is achieved using an asymmetric potential that is cycled on and off, whereas movement is accomplished using photophoresis. Simulations with 0.2 and 0.5 μm diameter spherical particles indicate that sorting can commence quickly from a continuous stream. Procedures to optimize the sorting scheme are also described.
Observing fermionic statistics with photons in arbitrary processes
Matthews, Jonathan C. F.; Poulios, Konstantinos; Meinecke, Jasmin D. A.; Politi, Alberto; Peruzzo, Alberto; Ismail, Nur; Wörhoff, Kerstin; Thompson, Mark G.; O'Brien, Jeremy L.
2013-01-01
Quantum mechanics defines two classes of particles, bosons and fermions, whose exchange statistics fundamentally dictate quantum dynamics. Here we develop a scheme that uses entanglement to directly observe the correlated detection statistics of any number of fermions in any physical process. This approach relies on sending each of the entangled particles through identical copies of the process; by controlling a single phase parameter in the entangled state, the correlated detection statistics can be continuously tuned between bosonic and fermionic statistics. We implement this scheme via two entangled photons shared across the polarisation modes of a single photonic chip to directly mimic the fermionic, bosonic and intermediate behaviour of two particles undergoing a continuous-time quantum walk. The ability to simulate fermions with photons is likely to have applications for verifying boson scattering and for observing particle correlations in analogue simulation using any physical platform that can prepare the entangled state prescribed here. PMID:23531788
Resolved-particle simulation by the Physalis method: Enhancements and new capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede
2016-03-15
We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.
Pushing particles in extreme fields
NASA Astrophysics Data System (ADS)
Gordon, Daniel F.; Hafizi, Bahman; Palastro, John
2017-03-01
The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
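The standard Boris update referred to above splits the momentum step into a half electric impulse, a magnetic rotation, and a second half impulse. A non-relativistic sketch (the paper works with the relativistic form and extreme fields, where the splitting error discussed in the abstract becomes significant):

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """Non-relativistic Boris velocity update: half electric kick,
    norm-preserving magnetic rotation, half electric kick. The
    rotation performs no work, so with E = 0 the particle speed is
    conserved exactly, the property highlighted in the abstract."""
    v_minus = v + 0.5 * q_over_m * dt * E
    t = 0.5 * q_over_m * dt * B               # rotation half-angle vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_over_m * dt * E
```

The third-order-in-field error analyzed in the paper arises because the electric-kick and magnetic-rotation operators do not commute; even exact sub-steps leave a splitting error that this standard form controls only by shrinking dt.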
Modeling of confined turbulent fluid-particle flows using Eulerian and Lagrangian schemes
NASA Technical Reports Server (NTRS)
Adeniji-Fashola, A.; Chen, C. P.
1990-01-01
Two important aspects of fluid-particulate interaction in dilute gas-particle turbulent flows, turbulent particle dispersion and turbulence modulation effects, are addressed using the Eulerian and Lagrangian modeling approaches to describe the particulate phase. Gradient-diffusion approximations are employed in the Eulerian formulation, while a stochastic procedure is utilized to simulate turbulent dispersion in the Lagrangian formulation. The k-epsilon turbulence model is used to characterize the time and length scales of the continuous-phase turbulence. Models proposed for both schemes are used to predict turbulent fully developed gas-solid vertical pipe flow with reasonable accuracy.
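A typical stochastic (eddy-interaction) dispersion procedure of this kind samples a fluctuating velocity from the turbulent kinetic energy k and lets the particle interact with that eddy for a lifetime built from k-epsilon scales. A hedged sketch; the constants and the exact lifetime definition are illustrative, not the paper's:

```python
import math

def eddy_sample(k, eps, gauss, c_mu=0.09):
    """One eddy in a stochastic eddy-interaction dispersion model:
    an isotropic velocity fluctuation with rms sqrt(2k/3) and an
    eddy lifetime from k-epsilon scales (eddy length over rms
    velocity). gauss(mu, sigma) supplies the normal draw; constants
    are assumptions for illustration."""
    sigma = math.sqrt(2.0 * k / 3.0)
    u_fluct = gauss(0.0, sigma)
    t_eddy = (c_mu ** 0.75) * (k ** 1.5) / eps / sigma
    return u_fluct, t_eddy
```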
Hybrid particle-field molecular dynamics simulation for polyelectrolyte systems.
Zhu, You-Liang; Lu, Zhong-Yuan; Milano, Giuseppe; Shi, An-Chang; Sun, Zhao-Yan
2016-04-14
To achieve simulations on large spatial and temporal scales with high molecular chemical specificity, a hybrid particle-field method was proposed recently. This method is developed by combining molecular dynamics and self-consistent field theory (MD-SCF). The MD-SCF method has been validated by successfully predicting the experimentally observable properties of several systems. Here we propose an efficient scheme for the inclusion of electrostatic interactions in the MD-SCF framework. In this scheme, charged molecules are interacting with the external fields that are self-consistently determined from the charge densities. This method is validated by comparing the structural properties of polyelectrolytes in solution obtained from the MD-SCF and particle-based simulations. Moreover, taking PMMA-b-PEO and LiCF3SO3 as examples, the enhancement of immiscibility between the ion-dissolving block and the inert block by doping lithium salts into the copolymer is examined by using the MD-SCF method. By employing GPU-acceleration, the high performance of the MD-SCF method with explicit treatment of electrostatics facilitates the simulation study of many problems involving polyelectrolytes.
SPMHD simulations of structure formation
NASA Astrophysics Data System (ADS)
Barnes, David J.; On, Alvina Y. L.; Wu, Kinwah; Kawata, Daisuke
2018-05-01
The intracluster medium of galaxy clusters is permeated by μG magnetic fields. Observations with current and future facilities have the potential to illuminate the role these magnetic fields play in the astrophysical processes of galaxy clusters. Obtaining a greater understanding of how the initial seed fields evolve into the magnetic fields of the intracluster medium requires magnetohydrodynamic simulations. We critically assess current smoothed particle magnetohydrodynamic (SPMHD) schemes, especially highlighting the impact of a hyperbolic divergence cleaning scheme and an artificial resistivity switch on the magnetic field evolution in cosmological simulations of the formation of a galaxy cluster using the N-body/SPMHD code GCMHD++. The impact and performance of the cleaning scheme and two different schemes for the artificial resistivity switch are demonstrated via idealized test cases and cosmological simulations. We demonstrate that the hyperbolic divergence cleaning scheme is effective at suppressing the growth of the numerical divergence error of the magnetic field and should be applied to any SPMHD simulation. Although artificial resistivity is important in the strong-field regime, it can suppress the growth of the magnetic field in the weak-field regime found in galaxy clusters. With sufficient resolution, simulations with divergence cleaning can reproduce observed magnetic fields. We conclude that the cleaning scheme alone is sufficient for galaxy cluster simulations, but our results indicate that the SPMHD scheme must be carefully chosen depending on the regime of the magnetic field.
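Hyperbolic divergence cleaning of the Dedner type couples an extra scalar field psi to the induction equation so that divergence errors are advected away and damped rather than accumulating. A minimal sketch of the local psi update (the feedback term -grad(psi) in the induction equation is not shown; GCMHD++'s exact implementation may differ):

```python
def clean_psi(psi, div_b, c_h, tau, dt):
    """One explicit update of a Dedner-type cleaning scalar:
      dpsi/dt = -c_h^2 * divB - psi / tau
    The -grad(psi) feedback in the induction equation then
    transports divergence errors at the cleaning speed c_h while
    the psi/tau term damps them. (Illustrative sketch.)"""
    return psi + dt * (-c_h**2 * div_b - psi / tau)
```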
Particle simulation of plasmas on the massively parallel processor
NASA Technical Reports Server (NTRS)
Gledhill, I. M. A.; Storey, L. R. O.
1987-01-01
Particle simulations, in which collective phenomena in plasmas are studied by following the self-consistent motions of many discrete particles, involve several highly repetitive sets of calculations that are readily adaptable to SIMD parallel processing. A fully electromagnetic, relativistic plasma simulation for the massively parallel processor is described. The particle motions are followed in 2 1/2 dimensions on a 128 x 128 grid, with periodic boundary conditions. The two-dimensional simulation space is mapped directly onto the processor network; a fast Fourier transform is used to solve the field equations. Particle data are stored according to an Eulerian scheme, i.e., the information associated with each particle is moved from one local memory to another as the particle moves across the spatial grid. The method is applied to the study of the nonlinear development of the whistler instability in a magnetospheric plasma model with an anisotropic electron temperature. The wave distribution function is included as a new diagnostic to allow simulation results to be compared with satellite observations.
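On a periodic grid like the one described, the electrostatic part of the field solve reduces to dividing by k² in Fourier space. A sketch of such an FFT-based Poisson solve (units chosen so the charge factors are absorbed into the source term; illustrative of the technique, not the code described in the abstract):

```python
import numpy as np

def solve_poisson_periodic(rho, box_length):
    """Spectral solve of -laplacian(phi) = rho on a periodic square
    grid. The k = 0 mode is set to zero, fixing the arbitrary
    additive constant of the potential."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid division by zero
    phi_k = np.fft.fft2(rho) / k2
    phi_k[0, 0] = 0.0                   # zero-mean potential
    return np.real(np.fft.ifft2(phi_k))
```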
A direct force model for Galilean invariant lattice Boltzmann simulation of fluid-particle flows
NASA Astrophysics Data System (ADS)
Tao, Shi; He, Qing; Chen, Baiman; Yang, Xiaoping; Huang, Simin
The lattice Boltzmann method (LBM) has been widely used in the simulation of particulate flows involving complex moving boundaries. Owing to the kinetic background of the LBM, the bounce-back (BB) rule and the momentum exchange (ME) method can be easily applied to the solid boundary treatment and the evaluation of the fluid-solid interaction force, respectively. However, it has recently been found that both the BB and ME schemes may violate the principle of Galilean invariance (GI). Some modified BB and ME methods have been proposed to reduce the GI error, but these remedies have subsequently been recognized to be inconsistent with Newton's Third Law. Therefore, in contrast to those corrections based on the BB and ME methods, a unified iterative approach is adopted to handle the solid boundary in the present study. Furthermore, a direct force (DF) scheme is proposed to evaluate the fluid-particle interaction force. The methods preserve the efficiency of the BB and ME schemes, and their accuracy and Galilean invariance are verified and validated in test cases of particulate flows with freely moving particles.
PAH concentrations simulated with the AURAMS-PAH chemical transport model over Canada and the USA
NASA Astrophysics Data System (ADS)
Galarneau, E.; Makar, P. A.; Zheng, Q.; Narayan, J.; Zhang, J.; Moran, M. D.; Bari, M. A.; Pathela, S.; Chen, A.; Chlumsky, R.
2013-07-01
The off-line Eulerian AURAMS chemical transport model was adapted to simulate the atmospheric fate of seven PAHs: phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene + triphenylene, and benzo[a]pyrene. The model was then run for the year 2002 with hourly output on a grid covering southern Canada and the continental USA with 42 km horizontal grid spacing. Model predictions were compared to ~5000 24-h average PAH measurements from 45 sites, eight of which also provided data on particle/gas partitioning, which had been modelled using two alternative schemes. This is the first known regional modelling study for PAHs over a North American domain and the first modelling study at any scale to compare alternative particle/gas partitioning schemes against paired field measurements. Annual average modelled total (gas + particle) concentrations were statistically indistinguishable from measured values for fluoranthene, pyrene and benz[a]anthracene whereas the model underestimated concentrations of phenanthrene, anthracene and chrysene + triphenylene. Significance for benzo[a]pyrene performance was close to the statistical threshold and depended on the particle/gas partitioning scheme employed. On a day-to-day basis, the model simulated total PAH concentrations to the correct order of magnitude the majority of the time. Model performance differed substantially between measurement locations and the limited available evidence suggests that the model spatial resolution was too coarse to capture the distribution of concentrations in densely populated areas. A more detailed analysis of the factors influencing modelled particle/gas partitioning is warranted based on the findings in this study.
Gyrokinetic Particle Simulations of Neoclassical Transport
NASA Astrophysics Data System (ADS)
Lin, Zhihong
A time-varying weighting (delta-f) scheme based on the small gyro-radius ordering is developed and applied to a steady state, multi-species gyrokinetic particle simulation of neoclassical transport. Accurate collision operators conserving momentum and energy are developed and implemented. Benchmark simulation results using these operators are found to agree very well with neoclassical theory. For example, it is dynamically demonstrated that like-particle collisions produce no particle flux and that the neoclassical fluxes are ambipolar for an ion-electron plasma. An important physics feature of the present scheme is the introduction of toroidal flow to the simulations. In agreement with the existing analytical neoclassical theory, ion energy flux is enhanced by the toroidal mass flow, and the neoclassical viscosity is a Pfirsch-Schlüter factor times the classical viscosity in the banana regime. In addition, the poloidal electric field associated with toroidal mass flow is found to enhance the density-gradient-driven electron particle flux and the bootstrap current while reducing the temperature-gradient-driven flux and current. Modifications of the neoclassical transport by the orbit squeezing effects due to the radial electric field associated with sheared toroidal flow are studied. Simulation results indicate a reduction of both ion thermal flux and neoclassical toroidal rotation. Neoclassical theory in the steep gradient profile regime, where conventional neoclassical theory fails, is examined by taking into account finite banana width effects. The relevance of these studies to interesting experimental conditions in tokamaks is discussed. Finally, the present numerical scheme is extended to general geometry equilibrium. This new formulation will be valuable for the development of new capabilities to address complex equilibria such as advanced stellarator configurations and possibly other alternate concepts for the magnetic confinement of plasmas.
In general, the present work demonstrates a valuable new capability for studying important aspects of neoclassical transport that are inaccessible to conventional analytical calculations.
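The core of any delta-f scheme is the marker-weight update. The sketch below uses the standard weight equation dw/dt = -(1 - w) d ln F0/dt for a 1D Maxwellian background in a uniform applied field; this is an illustrative toy (field strength, units and the 1D setup are assumptions), not the paper's multi-species gyrokinetic implementation:

```python
import numpy as np

def push_delta_f(v, w, E, dt, vt=1.0, qm=-1.0):
    """Advance velocities and delta-f weights one explicit Euler step.

    Markers carry w = delta_f / F; along a trajectory the weight obeys
    dw/dt = -(1 - w) * d ln F0/dt, with F0 a Maxwellian of thermal speed vt.
    """
    a = qm * E                       # acceleration from the applied field
    dlnF0_dt = a * (-v / vt**2)      # d/dt ln F0 along the characteristic
    w = w + dt * (-(1.0 - w) * dlnF0_dt)
    v = v + dt * a
    return v, w

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, 10_000)     # sample the Maxwellian background
w = np.zeros_like(v)                 # delta_f = 0 initially

# With no field the background is an exact equilibrium: weights stay zero.
v0, w0 = push_delta_f(v.copy(), w.copy(), E=0.0, dt=0.01)

# A weak field drives a small perturbation; weights stay O(E*dt), far below 1,
# which is the low-noise regime the weight-based approach exploits.
v1, w1 = push_delta_f(v.copy(), w.copy(), E=0.05, dt=0.01)
print(np.max(np.abs(w1)))
```

The point of the exercise: the statistical noise of the simulation scales with the weights, not with the full distribution, so as long as |w| stays small the delta-f markers resolve the perturbation far more quietly than a full-F sampling would.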
NASA Astrophysics Data System (ADS)
Ching, Eric; Lv, Yu; Ihme, Matthias
2017-11-01
Recent interest in human-scale missions to Mars has sparked active research into high-fidelity simulations of reentry flows. A key feature of the Mars atmosphere is the high levels of suspended dust particles, which can not only enhance erosion of thermal protection systems but also transfer energy and momentum to the shock layer, increasing surface heat fluxes. Second-order finite-volume schemes are typically employed for hypersonic flow simulations, but such schemes suffer from a number of limitations. An attractive alternative is discontinuous Galerkin methods, which benefit from arbitrarily high spatial order of accuracy, geometric flexibility, and other advantages. As such, a Lagrangian particle method is developed in a discontinuous Galerkin framework to enable the computation of particle-laden hypersonic flows. Two-way coupling between the carrier and disperse phases is considered, and an efficient particle search algorithm compatible with unstructured curved meshes is proposed. In addition, variable thermodynamic properties are considered to accommodate high-temperature gases. The performance of the particle method is demonstrated in several test cases, with focus on the accurate prediction of particle trajectories and heating augmentation. Financial support from a Stanford Graduate Fellowship and the NASA Early Career Faculty program is gratefully acknowledged.
NASA Astrophysics Data System (ADS)
Bao, J.; Liu, D.; Lin, Z.
2017-10-01
A conservative scheme of drift kinetic electrons for gyrokinetic simulations of kinetic-magnetohydrodynamic processes in toroidal plasmas has been formulated and verified. Both the vector potential and the electron perturbed distribution function are decomposed into an adiabatic part with an analytic solution and a non-adiabatic part solved numerically. The adiabatic parallel electric field is solved directly from the electron adiabatic response, resulting in a high degree of accuracy. The consistency between electrostatic potential and parallel vector potential is enforced by using the electron continuity equation. Since particles are only used to calculate the non-adiabatic response, which is used to calculate the non-adiabatic vector potential through Ohm's law, the conservative scheme minimizes the electron particle noise and mitigates the cancellation problem. Linear dispersion relations of the kinetic Alfvén wave and the collisionless tearing mode in cylindrical geometry have been verified in gyrokinetic toroidal code simulations, which show that the perpendicular grid size can be larger than the electron collisionless skin depth when the mode wavelength is longer than the electron skin depth.
Rotational Brownian Dynamics simulations of clathrin cage formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede
2014-08-14
The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods, to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithm to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.
NASA Astrophysics Data System (ADS)
Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques
2013-03-01
In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, and especially on the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation. Theoretical results are provided. Then, we show, regardless of the interpolation method, the need to use a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
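The Gibbs-suppressing idea behind PPH-type schemes can be sketched in a few lines. The classical four-point midpoint rule subtracts the arithmetic mean of two second differences; the nonlinear variant replaces it with a sign-guarded harmonic mean, which vanishes near a jump. The formulas below are the standard forms from the subdivision-scheme literature, not necessarily the exact operators used in this study:

```python
def hmean(x, y):
    """Sign-guarded harmonic mean used by PPH-type schemes: 0 when signs differ."""
    return 2.0*x*y/(x + y) if x*y > 0 else 0.0

def midpoint_linear(f0, f1, f2, f3):
    """Classical 4-point midpoint rule: (-f0 + 9 f1 + 9 f2 - f3)/16."""
    D1, D2 = f0 - 2*f1 + f2, f1 - 2*f2 + f3   # second differences
    return 0.5*(f1 + f2) - (D1 + D2)/16

def midpoint_pph(f0, f1, f2, f3):
    """Same rule with the arithmetic mean of D1, D2 replaced by hmean."""
    D1, D2 = f0 - 2*f1 + f2, f1 - 2*f2 + f3
    return 0.5*(f1 + f2) - hmean(D1, D2)/8

# Step-like data: the linear rule undershoots (Gibbs), PPH does not.
print(midpoint_linear(0, 0, 0, 1))   # -0.0625  (oscillation below the data)
print(midpoint_pph(0, 0, 0, 1))      # 0.0      (no overshoot)
```

On smooth data the two rules agree to the same order of accuracy (e.g. both return 2.25 for samples of x² at 0, 1, 4, 9), so the nonlinearity only acts where the second differences disagree in sign, i.e. near discontinuities.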
Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2000-01-01
An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
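For contrast, the random-walk baseline that NEST is compared against can be sketched in a few lines (parameter values are illustrative): each particle takes a Gaussian step of standard deviation sqrt(2 D dt), so the ensemble variance grows as 2 D t, but only statistically. The sampling noise visible here is exactly what a deterministic separation-based scheme sidesteps:

```python
import numpy as np

# Random-walk model of 1D diffusion with diffusivity D: after `steps` steps of
# size dt, the particle-position variance should approach 2*D*t.
rng = np.random.default_rng(1)
D, dt, steps, n = 0.5, 0.01, 200, 50_000
x = np.zeros(n)
for _ in range(steps):
    x += np.sqrt(2.0*D*dt) * rng.standard_normal(n)

t = steps*dt
print(x.var(), 2.0*D*t)   # agree only to within sampling noise
```

With n particles the sample variance fluctuates by a relative O(sqrt(2/n)), which is why random-walk transport models need large particle counts to get smooth concentration fields.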
Wheeler, M J; Mason, R H; Steunenberg, K; Wagstaff, M; Chou, C; Bertram, A K
2015-05-14
Ice nucleation on mineral dust particles is known to be an important process in the atmosphere. To accurately implement ice nucleation on mineral dust particles in atmospheric simulations, a suitable theory or scheme is desirable to describe laboratory freezing data in atmospheric models. In the following, we investigated ice nucleation by supermicron mineral dust particles [kaolinite and Arizona Test Dust (ATD)] in the immersion mode. The median freezing temperature for ATD was measured to be approximately -30 °C compared with approximately -36 °C for kaolinite. The freezing results were then used to test four different schemes previously used to describe ice nucleation in atmospheric models. In terms of ability to fit the data (quantified by calculating the reduced chi-squared values), the following order was found for ATD (from best to worst): active site, pdf-α, deterministic, single-α. For kaolinite, the following order was found (from best to worst): active site, deterministic, pdf-α, single-α. The variation in the predicted median freezing temperature per decade change in the cooling rate for each of the schemes was also compared with experimental results from other studies. The deterministic model predicts the median freezing temperature to be independent of cooling rate, while experimental results show a weak dependence on cooling rate. The single-α, pdf-α, and active site schemes all agree with the experimental results within roughly a factor of 2. On the basis of our results and previous results where different schemes were tested, the active site scheme is recommended for describing the freezing of ATD and kaolinite particles. We also used our ice nucleation results to determine the ice nucleation active site (INAS) density for the supermicron dust particles tested. 
Using the data, we show that the INAS densities of supermicron kaolinite and ATD particles studied here are smaller than the INAS densities of submicron kaolinite and ATD particles previously reported in the literature.
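The cooling-rate sensitivity discussed above follows directly from how each scheme is built. A stochastic (single-α-like) scheme integrates a nucleation rate J(T) along the cooling trajectory, so the unfrozen fraction is S(T) = exp(-(A/r) ∫ J dT'), and faster cooling pushes the median freezing temperature down; a deterministic threshold is rate-independent by construction. The J(T) below is an illustrative exponential form, not a fit to the kaolinite/ATD data:

```python
import numpy as np

def median_freezing_T(rate, A=1.0):
    """Median freezing temperature (deg C) for droplets of surface area A
    cooled at `rate`, under a stochastic scheme with illustrative J(T)."""
    T = np.linspace(-10.0, -45.0, 20_000)      # cooling trajectory
    J = np.exp(-2.0 - 0.8*(T + 30.0))          # nucleation rate, rises as T drops
    dT = -np.diff(T)                            # positive temperature steps
    I = np.concatenate(([0.0], np.cumsum(dT*0.5*(J[1:] + J[:-1]))))  # trapezoid
    minus_lnS = (A/rate)*I                      # -ln(unfrozen fraction)
    return T[np.searchsorted(minus_lnS, np.log(2.0))]  # first T with S <= 0.5

slow = median_freezing_T(rate=1.0)
fast = median_freezing_T(rate=10.0)
print(slow, fast)   # faster cooling -> median freezing at a lower temperature
```

For an exponential J(T) the shift per decade of cooling rate is ln(10)/b (about 2.9 K for the b = 0.8 used here), which is the kind of weak rate dependence the experiments referenced above report; the deterministic scheme predicts zero shift.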
Pbar Beam Stacking in the Recycler by Longitudinal Phase-space Coating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhat, C. M.
2013-08-06
Barrier rf buckets have brought about new challenges in longitudinal beam dynamics of charged particle beams in synchrotrons and at the same time led to many new remarkable prospects in beam handling. In this paper, I describe a novel beam stacking scheme for synchrotrons using barrier buckets without any emittance dilution to the beam. First I discuss the general principle of the method, called longitudinal phase-space coating. Multi-particle beam dynamics simulations of the scheme applied to the Recycler convincingly validate the concepts and feasibility of the method. Then I demonstrate the technique experimentally in the Recycler and also use it in operation. A spin-off of this scheme is its usefulness in mapping the incoherent synchrotron tune spectrum of the beam particles in barrier buckets and in producing a clean hollow beam in longitudinal phase space. Both are described here in detail with illustrations. The beam stacking scheme presented here is the first of its kind.
Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics
2012-09-13
Experimental data therefore plays a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are... Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions... particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not an
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Kamens, Richard M.
Decamethylcyclopentasiloxane (D5) and decamethyltetrasiloxane (MD2M) were injected into a smog chamber containing fine Arizona road dust particles (95% of surface area <2.6 μm) and an urban smog atmosphere in the daytime. A coupled photochemical reaction/gas-particle partitioning scheme was implemented to simulate the formation and gas-particle partitioning of hydroxyl oxidation products of D5 and MD2M. This scheme incorporated the reactions of D5 and MD2M into an existing urban smog chemical mechanism (Carbon Bond IV) and partitioned the products between the gas and particle phases by treating gas-particle partitioning as a kinetic process with specified uptake and off-gassing rates. The photochemical model PKSS was used to simulate this set of reactions. A Langmuirian partitioning model was used to convert the measured and estimated mass-based partitioning coefficients (KP) to a molar or volume-based form. The model simulations indicated that >99% of the product silanols formed in the gas phase partition immediately to the particle phase, and the experimental data agreed with model predictions. One product, D4TOH, was observed and confirmed for the D5 reaction, and this system was modeled successfully. Experimental data were inadequate for the MD2M reaction products, and it is likely that more than one product formed. The model provides a framework into which more reaction and partitioning steps can easily be added.
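Treating partitioning kinetically, as described above, amounts to a pair of first-order rate terms: dCp/dt = k_on·Cg - k_off·Cp, with the equilibrium ratio Cp/Cg = k_on/k_off recovering the Langmuirian partitioning coefficient. A minimal sketch with illustrative rate constants (not the fitted chamber values):

```python
# Kinetic gas-particle partitioning: uptake (k_on) and off-gassing (k_off)
# exchange mass between the gas-phase (Cg) and particle-phase (Cp) reservoirs.
k_on, k_off, dt = 2.0, 0.5, 1e-3
Cg, Cp = 1.0, 0.0                      # all mass starts in the gas phase
for _ in range(20_000):                # integrate 20 time units (explicit Euler)
    flux = k_on*Cg - k_off*Cp          # net uptake onto particles
    Cg -= flux*dt
    Cp += flux*dt

print(Cp/Cg)   # approaches k_on/k_off = 4.0 at equilibrium
```

Because the same flux is subtracted from one reservoir and added to the other, total mass is conserved exactly at every step; the relaxation time to equilibrium is 1/(k_on + k_off).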
NASA Astrophysics Data System (ADS)
Mann, G. W.; Carslaw, K. S.; Spracklen, D. V.; Ridley, D. A.; Manktelow, P. T.; Chipperfield, M. P.; Pickering, S. J.; Johnson, C. E.
2010-05-01
A new version of the Global Model of Aerosol Processes (GLOMAP) is described, which uses a two-moment modal aerosol scheme rather than the original two-moment bin scheme. GLOMAP-mode simulates the multi-component global aerosol, resolving sulphate, sea-salt, dust, black carbon (BC) and particulate organic matter (POM), the latter including primary and biogenic secondary POM. Aerosol processes are simulated in a size-resolved manner, including primary emissions, secondary particle formation by binary homogeneous nucleation of sulphuric acid and water, particle growth by coagulation, condensation and cloud-processing, and removal by dry deposition, in-cloud and below-cloud scavenging. A series of benchmark observational datasets are assembled against which the skill of the model is assessed in terms of normalised mean bias (b) and correlation coefficient (R). Overall, the model performs well against the datasets in simulating concentrations of aerosol precursor gases, chemically speciated particle mass, condensation nuclei (CN) and cloud condensation nuclei (CCN). Surface sulphate, sea-salt and dust mass concentrations are all captured well, while BC and POM are biased low (but correlate well). Surface CN concentrations compare reasonably well at free-troposphere and marine sites, but are underestimated at continental and coastal sites, likely related to underestimation of either primary particle emissions or new particle formation. The model compares well against a compilation of CCN observations covering a range of environments and against vertical profiles of size-resolved particle concentrations over Europe. The simulated global burden, lifetime and wet removal of each of the simulated aerosol components is also examined, and each lies close to the multi-model medians from the AEROCOM model intercomparison exercise.
Particle/Continuum Hybrid Simulation in a Parallel Computing Environment
NASA Technical Reports Server (NTRS)
Baganoff, Donald
1996-01-01
The objective of this study was to modify an existing parallel particle code based on the direct simulation Monte Carlo (DSMC) method to include a Navier-Stokes (NS) calculation so that a hybrid solution could be developed. In carrying out this work, it was determined that the following five issues had to be addressed before extensive program development of a three-dimensional capability was pursued: (1) find a set of one-sided kinetic fluxes that are fully compatible with the DSMC method, (2) develop a finite volume scheme to make use of these one-sided kinetic fluxes, (3) make use of the one-sided kinetic fluxes together with DSMC type boundary conditions at a material surface so that velocity slip and temperature slip arise naturally for near-continuum conditions, (4) find a suitable sampling scheme so that the values of the one-sided fluxes predicted by the NS solution at an interface between the two domains can be converted into the correct distribution of particles to be introduced into the DSMC domain, (5) carry out a suitable number of tests to confirm that the developed concepts are valid, individually and in concert for a hybrid scheme.
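Issue (1), the one-sided kinetic fluxes, has a classical closed form for a drifting Maxwellian. The sketch below implements the standard half-range number flux (a textbook result, not necessarily the exact flux set adopted in the study), with speed ratio s = u/sqrt(2RT) along the surface normal:

```python
import math

def one_sided_flux(n, u, R, T):
    """Half-range number flux through a plane for a drifting Maxwellian:
    Phi+ = n*sqrt(R*T/(2*pi)) * (exp(-s^2) + sqrt(pi)*s*(1 + erf(s)))."""
    s = u / math.sqrt(2.0*R*T)
    return n*math.sqrt(R*T/(2.0*math.pi))*(math.exp(-s*s)
                                           + math.sqrt(math.pi)*s*(1.0 + math.erf(s)))

# Sanity check: u = 0 reduces to the classical effusion flux n*cbar/4,
# with mean thermal speed cbar = sqrt(8*R*T/pi). Values are illustrative (air-like).
n, R, T = 1e20, 287.0, 300.0
cbar = math.sqrt(8.0*R*T/math.pi)
print(one_sided_flux(n, 0.0, R, T), n*cbar/4.0)   # equal (to rounding)
```

Analogous half-range moments give the one-sided momentum and energy fluxes, which is what lets a finite-volume NS scheme and a DSMC particle domain exchange consistent information at their interface.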
Ganzenmüller, Georg C.; Hiermaier, Stefan; Steinhauser, Martin O.
2012-01-01
We propose a thermodynamically consistent and energy-conserving temperature coupling scheme between the atomistic and the continuum domain. The coupling scheme links the two domains using the DPDE (Dissipative Particle Dynamics at constant Energy) thermostat and is designed to handle strong temperature gradients across the atomistic/continuum domain interface. The fundamentally different definitions of temperature in the continuum and atomistic domain – internal energy and heat capacity versus particle velocity – are accounted for in a straightforward and conceptually intuitive way by the DPDE thermostat. We verify the proposed scheme using a fluid, which is simultaneously represented as a continuum using Smooth Particle Hydrodynamics, and as an atomistically resolved liquid using Molecular Dynamics. In the case of equilibrium contact between both domains, we show that the correct microscopic equilibrium properties of the atomistic fluid are obtained. As an example of a strong non-equilibrium situation, we consider the propagation of a steady shock-wave from the continuum domain into the atomistic domain, and show that the coupling scheme conserves both energy and shock-wave dynamics. To demonstrate the applicability of our scheme to real systems, we consider shock loading of a phospholipid bilayer immersed in water in a multi-scale simulation, an interesting topic of biological relevance. PMID:23300586
Perfectly matched layers in a divergence preserving ADI scheme for electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, C.; ETH Zurich, Chair of Computational Science, 8092 Zuerich; Adelmann, A., E-mail: andreas.adelmann@psi.ch
For numerical simulations of highly relativistic and transversely accelerated charged particles, including radiation, fast algorithms are needed. While the radiation in particle accelerators has wavelengths on the order of 100 μm, the computational domain has dimensions roughly five orders of magnitude larger, resulting in very large mesh sizes. The particles are confined to a small area of this domain only. To resolve the smallest scales close to the particles, subgrids are envisioned. For reasons of stability, the alternating direction implicit (ADI) scheme by Smithe et al. [D.N. Smithe, J.R. Cary, J.A. Carlsson, Divergence preservation in the ADI algorithms for electromagnetics, J. Comput. Phys. 228 (2009) 7289-7299] for the Maxwell equations has been adopted. At the boundary of the domain, absorbing boundary conditions have to be employed to prevent reflection of the radiation. In this paper we show how the divergence preserving ADI scheme has to be formulated in perfectly matched layers (PML) and compare the performance in several scenarios.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
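The conflict the abstract alludes to arises because many particles scatter charge into the same cell concurrently. One standard atomic-free strategy is to sort particles by cell and then reduce each contiguous segment. The numpy sketch below illustrates only that sort-then-reduce idea in serial, with nearest-grid-point weighting as a simplifying assumption (jasmine's actual GPU algorithm, memory-hierarchy use, and shape functions are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)
ncell, npart = 64, 10_000
cell = rng.integers(0, ncell, npart)       # particle -> cell index
q = rng.uniform(0.5, 1.5, npart)           # particle charges

# Sort-then-reduce deposition: after grouping by cell, each cell's charge is
# one contiguous reduction, so no two writers ever touch the same location.
order = np.argsort(cell)
cs, qs = cell[order], q[order]
starts = np.searchsorted(cs, np.arange(ncell))
rho_sorted = np.add.reduceat(qs, starts)   # assumes every cell is occupied
                                           # (true here: ~156 particles/cell)

# Naive scatter (the pattern that needs atomics in parallel), for comparison.
rho_scatter = np.zeros(ncell)
np.add.at(rho_scatter, cell, q)
print(np.allclose(rho_sorted, rho_scatter))   # True
```

Note the caveat in the comment: `np.add.reduceat` returns a single element, not zero, for an empty segment, so a production version must handle empty cells (e.g. via `np.bincount(cell, weights=q, minlength=ncell)`).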
Simulating Biomass Fast Pyrolysis at the Single Particle Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciesielski, Peter; Wiggins, Gavin; Daw, C Stuart
2017-07-01
Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason, particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole biomass with increased chemical speciation, and are still tractable with present-day computational resources.
Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method
NASA Technical Reports Server (NTRS)
Winckelmans, G. S.
1995-01-01
This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (=vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N²) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10⁴ particles. Energy spectra (which require O(N²) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, and possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, and enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.
Smoothed particle hydrodynamics with GRAPE-1A
NASA Technical Reports Server (NTRS)
Umemura, Masayuki; Fukushige, Toshiyuki; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro; Turner, Edwin L.; Loeb, Abraham
1993-01-01
We describe the implementation of a smoothed particle hydrodynamics (SPH) scheme using GRAPE-1A, a special-purpose processor used for gravitational N-body simulations. The GRAPE-1A calculates the gravitational force exerted on a particle from all other particles in a system, while simultaneously making a list of the nearest neighbors of the particle. It is found that GRAPE-1A accelerates SPH calculations by direct summation by about two orders of magnitude for a ten-thousand-particle simulation. The effective speed is 80 Mflops, which is about 30 percent of the peak speed of GRAPE-1A. Also, in order to investigate the accuracy of GRAPE-SPH, some test simulations were executed. We found that the force and position errors are smaller than those due to representing a fluid by a finite number of particles. The total energy and momentum were conserved to within 0.2-0.4 percent and 2-5 × 10⁻⁵, respectively, in simulations with several thousand particles. We conclude that GRAPE-SPH is quite effective and sufficiently accurate for self-gravitating hydrodynamics.
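The operation GRAPE-1A performs in hardware, direct-summation gravity plus a neighbor search in the same pass, is easy to sketch in software. The numpy version below uses Plummer softening and merely counts neighbors within radius h (an SPH code would keep the lists); the particle setup and the eps, h values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, G, eps, h = 256, 1.0, 1e-2, 0.3
pos = rng.uniform(-1.0, 1.0, (N, 3))
m = np.full(N, 1.0/N)

d = pos[None, :, :] - pos[:, None, :]      # d[i, j] = pos[j] - pos[i]
r2 = (d**2).sum(-1) + eps**2               # softened squared distances
np.fill_diagonal(r2, np.inf)               # exclude self-interaction

# Direct O(N^2) summation: acc_i = G * sum_j m_j (x_j - x_i) / r_ij^3
acc = G*(m[None, :, None]*d/r2[..., None]**1.5).sum(axis=1)
nnb = (r2 < h*h).sum(axis=1)               # neighbor counts from the same pass

ptot = (m[:, None]*acc).sum(axis=0)        # Newton's third law => ~0
print(np.abs(ptot).max())                  # ~ machine precision
```

Pairing the force loop with the neighbor search is exactly what makes the hardware attractive for SPH: the O(N²) distance evaluations are reused for both gravity and the kernel-interaction lists.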
Sixth- and eighth-order Hermite integrator for N-body simulations
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro
2008-10-01
We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme for most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency in both general-purpose parallel computers and GRAPE systems.
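The predictor in a Hermite scheme is simply a Taylor expansion of position using the stored acceleration derivatives. The toy below evaluates that expansion on the analytic trajectory x(t) = cos(t), where the derivatives are known exactly (a = -x, jerk = -v, snap = x, crackle = v), to show how including snap and crackle shrinks the truncation error; it is an order illustration, not the papers' full predictor-corrector with force evaluations:

```python
import math

def predict(x, v, dt, order):
    """Taylor predictor truncated after the dt**order term, for x(t) = cos(t)."""
    a, j, s, c = -x, -v, x, v    # acc, jerk, snap, crackle for this trajectory
    terms = [x, v*dt, a*dt**2/2, j*dt**3/6, s*dt**4/24, c*dt**5/120]
    return sum(terms[:order + 1])

x0, v0, dt = 1.0, 0.0, 0.1
exact = math.cos(dt)
err4 = abs(predict(x0, v0, dt, 3) - exact)   # through jerk (fourth-order style)
err6 = abs(predict(x0, v0, dt, 5) - exact)   # through crackle (sixth-order style)
print(err4, err6)                            # err6 << err4
```

The truncation errors scale as dt⁴/24 and dt⁶/720 respectively, so at dt = 0.1 the crackle-aware predictor is roughly three orders of magnitude more accurate, which is what buys the larger stable time steps of the high-order schemes.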
NASA Astrophysics Data System (ADS)
Doi, Hideo; Okuwaki, Koji; Mochizuki, Yuji; Ozawa, Taku; Yasuoka, Kenji
2017-09-01
In dissipative particle dynamics (DPD) simulations, it is necessary to use the so-called χ parameter set that expresses the effective interactions between particles. Recently, we have developed a new scheme to evaluate the χ parameters in a non-empirical way through a series of fragment molecular orbital (FMO) calculations. As a challenging test, we have performed DPD simulations using the FMO-based χ parameters for a mixture of 1-palmitoyl-2-oleoyl phosphatidylcholine (POPC) and water. Both membrane and vesicle structures formed successfully. The calculated structural parameters of the membrane were in good agreement with experimental results.
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.
Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas
2015-07-14
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.
Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH
NASA Astrophysics Data System (ADS)
Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.
2018-01-01
The smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution techniques, which can markedly improve local accuracy and overall computational efficiency, have seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. The splitting process conserves mass, momentum, and energy. Based on an error analysis, the splitting technique is designed to achieve optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with a particle splitting technique is accurate and efficient and is capable of simulating a wide range of hydrodynamic problems.
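A minimal sketch of such a splitting step in 2D (the square daughter stencil, placement half-width `eps`, and smoothing-length factor `alpha` are illustrative assumptions, not the paper's calibrated values):

```python
def split_particle(m, pos, vel, h, eps=0.5, alpha=0.9):
    """Split a mother SPH particle into four daughters on a square
    stencil of half-width eps*h. Each daughter carries m/4 and the
    mother's velocity, so mass, momentum, and the center of mass are
    conserved exactly; daughters get a reduced smoothing length."""
    offsets = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
    md = m/4.0
    daughters = []
    for ox, oy in offsets:
        daughters.append({'m': md,
                          'pos': (pos[0] + ox*eps*h, pos[1] + oy*eps*h),
                          'vel': vel,
                          'h': alpha*h})
    return daughters
```

The symmetric stencil is what makes the momentum and center-of-mass conservation automatic; kinetic energy is likewise unchanged because all daughters inherit the mother's velocity.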
Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan
2016-10-01
A decoupled scheme based on the Hermite expansion is proposed for constructing lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio. The local equilibrium distribution function, which includes the rotational velocity of the particle, is decoupled into two parts: the local equilibrium distribution function of the translational velocity and that of the rotational velocity. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, one relating to the translational velocity and the other to the rotational velocity. The distribution function is decoupled accordingly, and the evolution equation splits into separate equations for the translational and rotational velocities, which evolve independently. Because the lattice Boltzmann models in the proposed scheme are constructed via the Hermite expansion, it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.
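To illustrate the Hermite-expansion construction in its simplest setting (a sketch, not the authors' compressible model): a D1Q3 lattice whose velocities and weights are the three-point Gauss-Hermite quadrature nodes reproduces the low-order moments of the Maxwellian exactly:

```python
# D1Q3 lattice from 3-point Gauss-Hermite quadrature, scaled so that
# the lattice sound speed c_s = 1.
c = [-3**0.5, 0.0, 3**0.5]   # discrete velocities (abscissae)
w = [1/6, 2/3, 1/6]          # quadrature weights

def f_eq(rho, u):
    """Second-order Hermite expansion of the Maxwellian equilibrium."""
    return [w[i]*rho*(1 + u*c[i] + 0.5*(u*c[i])**2 - 0.5*u*u)
            for i in range(3)]
```

By construction, the discrete sums recover density rho, momentum rho*u, and the second moment rho*(c_s^2 + u^2) without error, which is why higher-order quadratures yield higher-order schemes.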
A convergent 2D finite-difference scheme for the Dirac–Poisson system and the simulation of graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, D., E-mail: Daniel.Brinkman@asu.edu; School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287; Heitzinger, C., E-mail: Clemens.Heitzinger@asu.edu
2014-01-15
We present a convergent finite-difference scheme of second order in both space and time for the 2D electromagnetic Dirac equation. We apply this method in the self-consistent Dirac–Poisson system to the simulation of graphene. The model is justified for low energies, where the particles have wave vectors sufficiently close to the Dirac points. In particular, we demonstrate that our method can be used to calculate solutions of the Dirac–Poisson system where potentials act as beam splitters or Veselago lenses.
Hydrodynamic interactions in active colloidal crystal microrheology.
Weeber, R; Harting, J
2012-11-01
In dense colloids it is commonly assumed that hydrodynamic interactions do not play a role. However, a sound theoretical quantification is often missing. We present computer simulations that are motivated by experiments in which a large colloidal particle is dragged through a colloidal crystal. To quantify the influence of long-ranged hydrodynamics, we model the setup by conventional Langevin dynamics simulations and by an improved scheme with limited hydrodynamic interactions. This scheme significantly improves our results and allows us to show that hydrodynamics strongly impacts the development of defects, the crystal regeneration, and the jamming behavior.
A fictitious domain approach for the simulation of dense suspensions
NASA Astrophysics Data System (ADS)
Gallier, Stany; Lemaire, Elisabeth; Lobry, Laurent; Peters, François
2014-01-01
Low-Reynolds-number concentrated suspensions exhibit intricate physics which can be partly unraveled by numerical simulation. To this end, a Lagrange-multiplier-free fictitious domain approach is described in this work. Unlike some recently proposed methods, the present approach is fully Eulerian and therefore does not need any transfer between the Eulerian background grid and Lagrangian nodes attached to particles. Lubrication forces between particles play an important role in suspension rheology and have been properly accounted for in the model. A robust and effective lubrication scheme is outlined, which consists of transposing the classical approach used in Stokesian Dynamics to the present direct numerical simulation. This lubrication model has also been adapted to account for solid boundaries such as walls. Contact forces between particles are modeled using a classical Discrete Element Method (DEM), a widely used method in granular matter physics. Comprehensive validations are presented on various one-, two-, and three-particle configurations in a linear shear flow, as well as some O(10^3) and O(10^4) particle simulations.
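The divergence that such lubrication corrections must capture can be seen in the leading-order squeeze-film force between two approaching spheres (a textbook sketch, not the paper's full resistance model):

```python
import math

def squeeze_lubrication_force(mu, a1, a2, h, vn):
    """Leading-order (1/h) squeeze-film lubrication force between two
    spheres of radii a1, a2 approaching with normal relative speed vn
    across a small gap h, in a fluid of viscosity mu."""
    a_eff = a1*a2/(a1 + a2)          # reduced radius of the pair
    return 6*math.pi*mu*a_eff**2*vn/h
```

Because the force grows as 1/h, an explicit integrator's stable time step shrinks with the gap, which is precisely what motivates implicit treatments of the lubrication terms.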
Improving a Spectral Bin Microphysical Scheme Using TRMM Satellite Observations
NASA Technical Reports Server (NTRS)
Li, Xiaowen; Tao, Wei-Kuo; Matsui, Toshihisa; Liu, Chuntao; Masunaga, Hirohiko
2010-01-01
Comparisons between cloud model simulations and observations are crucial for validating model performance and improving the physical processes represented in the model. These modeled physical processes are idealized representations and almost always leave considerable room for improvement. In this study, we use data from two different sensors onboard the TRMM (Tropical Rainfall Measurement Mission) satellite to improve the microphysical scheme in the Goddard Cumulus Ensemble (GCE) model. Mature-stage squall lines observed by TRMM during late spring and early summer over the central US across a 9-year period are compiled and compared with a case simulated by the GCE model. A unique aspect of the GCE model is its state-of-the-art spectral bin microphysical scheme, which uses 33 bins to represent the particle size distribution of each of the seven hydrometeor species. A forward radiative transfer model calculates TRMM Precipitation Radar (PR) reflectivity and TRMM Microwave Imager (TMI) 85 GHz brightness temperatures from the simulated particle size distributions. Comparisons between model outputs and observations reveal that the model overestimates the sizes of snow/aggregates in the stratiform region of the squall line. After adjusting the temperature-dependent collection coefficients among ice-phase particles, the PR comparisons improve while the TMI comparisons worsen. Further investigation shows that the partitioning between graupel (a high-density form of aggregate) and snow (a low-density form of aggregate) needs to be adjusted in order to obtain good comparisons in both PR reflectivity and TMI brightness temperature. This study shows that long-term satellite observations, especially those with multiple sensors, can be very useful in constraining model microphysics. It is also the first study to validate and improve a sophisticated spectral bin microphysical scheme against long-term satellite observations.
Simulating Self-Assembly with Simple Models
NASA Astrophysics Data System (ADS)
Rapaport, D. C.
Results from recent molecular dynamics simulations of virus capsid self-assembly are described. The model is based on rigid trapezoidal particles designed to form polyhedral shells of size 60, together with an atomistic solvent. The underlying bonding process is fully reversible. More extensive computations are required than in previous work on icosahedral shells built from triangular particles, but the outcome is a high yield of closed shells. Intermediate clusters have a variety of forms, and bond counts provide a useful classification scheme.
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on a random choice among nearest-neighbor particles, and deterministic selection of nearest-neighbor particles, have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and the direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made on appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model to predict the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For the new and existing collision-pair selection schemes, the effects of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
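A bare-bones version of nearest-neighbor partner selection inside a single cell might look like the sketch below (an illustrative assumption; production DSMC codes also apply an acceptance-rejection test on the pair's relative velocity before colliding):

```python
import random

def select_pair_nearest(positions, rng=random):
    """Pick a first particle at random within the cell, then choose its
    nearest neighbour as the collision partner."""
    i = rng.randrange(len(positions))
    best, best_d2 = None, float('inf')
    for jj, p in enumerate(positions):
        if jj == i:
            continue
        d2 = sum((a - b)**2 for a, b in zip(positions[i], p))
        if d2 < best_d2:
            best, best_d2 = jj, d2
    return i, best
```

The brute-force neighbor scan is O(N) per selection; cell subdivision or sorted neighbor lists are the usual remedies at larger particle counts.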
An integral equation formulation for rigid bodies in Stokes flow in three dimensions
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan
2017-03-01
We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily shaped rigid particles of genus zero, immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O(n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning, and scaling of our solvers with several numerical examples.
Three-moment representation of rain in a cloud microphysics model
NASA Astrophysics Data System (ADS)
Paukert, M.; Fan, J.; Rasch, P. J.; Morrison, H.; Milbrandt, J.; Khain, A.; Shpund, J.
2017-12-01
Two-moment microphysics schemes have been commonly used for cloud simulation in models across different scales, from large-eddy simulations to global climate models. These schemes have yielded valuable insights into cloud and precipitation processes; however, the size distributions are limited to two degrees of freedom, and thus the shape parameter is typically fixed or diagnosed. We have developed a three-moment approach for the rain category in order to provide an additional degree of freedom to the size distribution and thereby improve the cloud microphysics representation for more accurate weather and climate simulations. The approach is applied to the Predicted Particle Properties (P3) scheme. In addition to the rain number and mass mixing ratios predicted in the two-moment P3, we now include a prognostic equation for the sixth moment of the size distribution (radar reflectivity), thus allowing the shape parameter to evolve freely. We employ the spectral bin microphysics (SBM) model to formulate the three-moment process rates in P3 for drop collisions and breakup. We first test the three-moment scheme on a maritime stratocumulus case from the VOCALS field campaign, and compare the cloud and precipitation properties from the new P3 scheme, the original two-moment P3 scheme, the SBM, and in-situ aircraft measurements. The improved simulation results from the new P3 scheme will be discussed and physically explained.
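The extra degree of freedom can be made concrete for a gamma size distribution N(D) = N0 D^mu exp(-lam*D): the dimensionless ratio M3^2/(M0*M6) of the three predicted moments fixes the shape parameter mu independently of N0 and lam. A hypothetical sketch of that inversion (not the P3 code itself):

```python
import math

def moment_ratio(mu):
    """G(mu) = M3^2/(M0*M6) for a gamma PSD; N0 and lam cancel.
    G is monotonically increasing in mu."""
    return math.gamma(mu + 4)**2 / (math.gamma(mu + 1)*math.gamma(mu + 7))

def shape_from_moments(m0, m3, m6, lo=-0.99, hi=60.0):
    """Recover the shape parameter mu from the three prognostic
    moments by bisection on the monotone ratio G(mu)."""
    g = m3*m3/(m0*m6)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if moment_ratio(mid) < g:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

With only two prognostic moments this ratio is unavailable, which is why two-moment schemes must fix or diagnose mu.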
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving the continual development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from the outset. Here, we introduce the heterogeneous domain decomposition approach, a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force-computation time are proposed and studied as functions of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
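The a priori rearrangement of subdomain walls can be sketched as a one-dimensional prefix-sum partition of per-cell cost estimates (a simplified illustration; the function name and the cost model are assumptions, not the paper's implementation):

```python
def place_walls(costs_per_cell, nranks):
    """Place nranks-1 subdomain walls along one axis so each rank
    receives roughly equal summed cost, by walking the prefix sum of
    per-cell cost estimates."""
    total = sum(costs_per_cell)
    target = total/nranks
    walls, acc = [], 0.0
    for i, cost in enumerate(costs_per_cell):
        acc += cost
        if acc >= target*(len(walls) + 1) and len(walls) < nranks - 1:
            walls.append(i + 1)   # wall sits after cell i
    return walls
```

For a homogeneous cost profile this reduces to equal-width slabs; for a heterogeneous profile (e.g., a high-resolution region) the walls shift toward the expensive cells, which is the point of a heterogeneity-sensitive decomposition.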
Vectorization of a particle code used in the simulation of rarefied hypersonic flow
NASA Technical Reports Server (NTRS)
Baganoff, D.
1990-01-01
A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure optimized for the vector architectures found on machines such as the Cray-2. In addition, it seeks a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
A Method for Molecular Dynamics on Curved Surfaces
Paquay, Stefan; Kusters, Remy
2016-01-01
Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focused on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to take such interactions into account by combining standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates, allowing for the reuse of many other standard tools without modifications, including parallelization through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, we obtain confined Brownian motion, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: 1) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes; and 2) the self-assembly of a coarse-grained virus capsid protein model. PMID:27028633
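A minimal sketch of the idea, combining a velocity-Verlet step with SHAKE/RATTLE-style projection onto a sphere (the simplest curved surface; the paper's method handles arbitrary surfaces through their constraint functions):

```python
import math

def project_to_sphere(pos, R):
    """Rescale a point onto the sphere of radius R (SHAKE-like step)."""
    n = math.sqrt(sum(comp*comp for comp in pos))
    return tuple(comp*R/n for comp in pos)

def constrained_verlet_step(pos, vel, force, dt, R):
    """One velocity-Verlet drift followed by position projection and
    removal of the radial velocity component (RATTLE-like step)."""
    # unconstrained drift with current force (unit mass assumed)
    new = tuple(p + dt*v + 0.5*dt*dt*f for p, v, f in zip(pos, vel, force))
    new = project_to_sphere(new, R)
    # finite-difference velocity, then strip its radial component
    vel = tuple((n - p)/dt for n, p in zip(new, pos))
    nhat = tuple(comp/R for comp in new)
    vr = sum(v*comp for v, comp in zip(vel, nhat))
    vel = tuple(v - vr*comp for v, comp in zip(vel, nhat))
    return new, vel
```

Because everything stays in Cartesian coordinates, standard force routines and domain decomposition can be reused unchanged, which is the practical advantage the abstract emphasizes.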
A Method for Molecular Dynamics on Curved Surfaces
NASA Astrophysics Data System (ADS)
Paquay, Stefan; Kusters, Remy
2016-03-01
Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focussed on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to combine standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface, in which such interactions can be taken into account. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates allowing for the reuse of many other standard tools without modifications, including parallelisation through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, confined Brownian motion is obtained, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: (i) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes and (ii) the self-assembly of a coarse-grained virus capsid protein model.
NASA Technical Reports Server (NTRS)
Ovchinnikov, Mikhail; Ackerman, Andrew S.; Avramov, Alexander; Cheng, Anning; Fan, Jiwen; Fridlind, Ann M.; Ghan, Steven; Harrington, Jerry; Hoose, Corinna; Korolev, Alexei;
2014-01-01
Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of the processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that the ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP), in agreement with earlier studies. In contrast to previous intercomparison studies, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSDs) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the shape of the ice PSD assumed in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming an exponential ice PSD underestimate ice growth by vapor deposition and overestimate the mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case. Sensitivity tests indicate that LWP and IWP are much closer to the bin model simulations when a modified shape factor, similar to that predicted by the bin model simulations, is used in the bulk scheme. These results demonstrate the importance of the representation of the ice PSD in determining the partitioning of liquid and ice and the longevity of mixed-phase clouds.
Particle-In-Cell simulations of electron beam microbunching instability in three dimensions
NASA Astrophysics Data System (ADS)
Huang, Chengkun; Zeng, Y.; Meyers, M. D.; Yi, S.; Albright, B. J.; Kwan, T. J. T.
2013-10-01
Microbunching instability due to Coherent Synchrotron Radiation (CSR) in a magnetic chicane is one of the major effects that can degrade the electron beam quality in an X-ray Free Electron Laser. Self-consistent simulation of the CSR fields of the beam and their effects on beam dynamics using the Particle-In-Cell (PIC) method has been elusive due to the excessive dispersion error on the grid. We have implemented a high-order finite-volume PIC scheme that models the propagation of the CSR fields accurately. This new scheme is characterized and optimized through a detailed dispersion analysis. The CSR fields from our improved PIC calculation are compared to the extended CSR numerical model based on the Lienard-Wiechert formula in 2D/3D. We also conduct beam dynamics simulations of the microbunching instability using our new PIC capability. Detailed self-consistent PIC simulations of the CSR fields and beam dynamics will be presented and discussed. Work supported by the U.S. Department of Energy through the LDRD program at Los Alamos National Laboratory.
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.
Simulations of the failure scenarios of the crab cavities for the nominal scheme of the LHC
NASA Astrophysics Data System (ADS)
Yee, B.; Calaga, R.; Zimmermann, F.; Lopez, R.
2012-02-01
The Crab Cavity (CC) is a possible solution to the luminosity reduction caused by the crossing angle of two colliding beams. The CC is a superconducting radio-frequency (RF) cavity that applies a transverse kick to a bunch of particles, rotating the bunch so that the collision is head-on and the luminosity is improved. For this reason, the Accelerators & Beams Physics group of the CERN Beams Department (BE-ABP) has studied the implementation of the CC scheme at the LHC. It is essential to study the failure scenarios and the damage they could produce to the lattice devices. We have performed simulations of these failures for the nominal scheme.
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Mizuno, Y.; Watson, M.; Fuerst, S.; Wu, K.; Hardee, P.; Fishman, G. J.
2007-01-01
We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated with the HLL approximate Riemann solver, and the flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous code. The simulation results show jet formation from a geometrically thin accretion disk near both nonrotating and rotating black holes. We discuss how the jet properties depend on the rotation of the black hole and on the magnetic field configuration, including issues for future research. A General Relativistic Particle-in-Cell code (GRPIC) has been developed using the Kerr-Schild metric. The code includes kinetic effects and is consistent with the GRMHD code. Since the gravitational force acting on particles is extreme near black holes, there are some difficulties in describing these processes numerically. The preliminary code consists of an accretion disk and a free-falling corona. Results indicate that particles are ejected from the black hole region, consistent with other GRMHD simulations. The GRPIC simulation results will be presented, along with some remarks and future improvements. The emission from relativistic flows in black hole systems is calculated using a fully general relativistic radiative transfer formulation, with flow structures obtained by GRMHD simulations, considering thermal free-free and thermal synchrotron emission. Bright filament-like features protrude (visually) from the accretion disk surface; these are enhancements of synchrotron emission where the magnetic field roughly aligns with the line of sight in the co-moving frame.
The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We would like to extend this research using GRPIC simulations and examine a possible new mechanism for certain X-ray quasi-periodic oscillations (QPOs) observed in blackhole X-ray binaries.
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha-particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated by thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing-down approximation. The scheme addresses various aspects of coupling Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
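The continuous slowing-down bookkeeping along a straight-line track can be sketched as follows (a schematic with a user-supplied stopping power; the production code's stopping model and mesh geometry are far more involved):

```python
def deposit_energy(E0, path_lengths, stopping_power):
    """Walk a Monte Carlo particle's straight-line path through the
    cells it crosses, depositing dE = S(E)*ds in each cell under the
    continuous slowing-down approximation, until the particle stops."""
    E = E0
    deposits = []
    for ds in path_lengths:
        dE = min(E, stopping_power(E)*ds)   # never deposit more than E
        deposits.append(dE)
        E -= dE
        if E <= 0.0:
            break
    return deposits, E
```

In the real scheme the per-cell path lengths come from ray-tracing through (possibly distorted) Lagrangian cells, and the stopping power depends on the local plasma state.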
Continuous-feed optical sorting of aerosol particles
Curry, J. J.; Levine, Zachary H.
2016-01-01
We consider the problem of sorting, by size, spherical particles of order 100 nm radius. The scheme we analyze consists of a heterogeneous stream of spherical particles flowing at an oblique angle across an optical Gaussian mode standing wave. Sorting is achieved by the combined spatial and size dependencies of the optical force. Particles of all sizes enter the flow at a point, but exit at different locations depending on size. Exiting particles may be detected optically or separated for further processing. The scheme has the advantages of accommodating a high throughput, producing a continuous stream of continuously dispersed particles, and exhibiting excellent size resolution. We performed detailed Monte Carlo simulations of particle trajectories through the optical field under the influence of convective air flow. We also developed a method for deriving effective velocities and diffusion constants from the Fokker-Planck equation that can generate equivalent results much more quickly. With an optical wavelength of 1064 nm, polystyrene particles with radii in the neighborhood of 275 nm, for which the optical force vanishes, may be sorted with a resolution below 1 nm. PMID:27410570
PAH concentrations simulated with the AURAMS-PAH chemical transport model over Canada and the USA
NASA Astrophysics Data System (ADS)
Galarneau, E.; Makar, P. A.; Zheng, Q.; Narayan, J.; Zhang, J.; Moran, M. D.; Bari, M. A.; Pathela, S.; Chen, A.; Chlumsky, R.
2014-04-01
The offline Eulerian AURAMS (A Unified Regional Air quality Modelling System) chemical transport model was adapted to simulate airborne concentrations of seven PAHs (polycyclic aromatic hydrocarbons): phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene + triphenylene, and benzo[a]pyrene. The model was then run for the year 2002 with hourly output on a grid covering southern Canada and the continental USA with 42 km horizontal grid spacing. Model predictions were compared to ~5000 24 h-average PAH measurements from 45 sites, most of which were located in urban or industrial areas. Eight of the measurement sites also provided data on particle/gas partitioning which had been modelled using two alternative schemes. This is the first known regional modelling study for PAHs over a North American domain and the first modelling study at any scale to compare alternative particle/gas partitioning schemes against paired field measurements. The goal of the study was to provide output concentration maps of use to assessing human inhalation exposure to PAHs in ambient air. Annual average modelled total (gas + particle) concentrations were statistically indistinguishable from measured values for fluoranthene, pyrene and benz[a]anthracene whereas the model underestimated concentrations of phenanthrene, anthracene and chrysene + triphenylene. Significance for benzo[a]pyrene performance was close to the statistical threshold and depended on the particle/gas partitioning scheme employed. On a day-to-day basis, the model simulated total PAH concentrations to the correct order of magnitude the majority of the time. The model showed seasonal differences in prediction quality for volatile species which suggests that a missing emission source such as air-surface exchange should be included in future versions. 
Model performance differed substantially between measurement locations and the limited available evidence suggests that the model's spatial resolution was too coarse to capture the distribution of concentrations in densely populated areas. A more detailed analysis of the factors influencing modelled particle/gas partitioning is warranted based on the findings in this study.
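Of the particle/gas partitioning schemes commonly compared in such studies, the Junge-Pankow adsorption model is the simplest to state. A minimal sketch, where the vapour pressures and aerosol surface area density are illustrative values, not those used in AURAMS:

```python
def junge_pankow_particle_fraction(p_l, theta, c=17.2):
    """Junge-Pankow adsorption scheme: particle-bound fraction
    phi = c*theta / (p_l + c*theta), with p_l the sub-cooled liquid vapour
    pressure [Pa], theta the aerosol surface area per volume of air
    [cm^2 per cm^3], and c ~ 17.2 Pa cm a commonly used constant."""
    return c * theta / (p_l + c * theta)

theta = 1.1e-5   # background-aerosol surface area density (illustrative)
# A volatile PAH like phenanthrene stays mostly in the gas phase...
print(junge_pankow_particle_fraction(2e-2, theta) < 0.05)
# ...while a low-volatility PAH like benzo[a]pyrene is mostly particle-bound.
print(junge_pankow_particle_fraction(2e-5, theta) > 0.9)
```

The strong dependence on vapour pressure is why the choice of partitioning scheme matters most for the semivolatile species near the gas/particle transition.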
NASA Astrophysics Data System (ADS)
Martelloni, Gianluca; Bagnoli, Franco; Guarino, Alessio
2017-09-01
We present a three-dimensional model of rain-induced landslides, based on cohesive spherical particles. The rainwater infiltration into the soil follows either fractional or fractal diffusion equations. We analytically solve the fractal partial differential equation (PDE) for diffusion with particular boundary conditions to simulate a rainfall event. We also developed a numerical integration scheme for the PDE, which we compared with the analytical solution. From the fractal diffusion equation we obtain the gravimetric water content, which we use as input to a triggering scheme based on the Mohr-Coulomb limit-equilibrium criterion. The triggering scheme is then complemented by a standard molecular dynamics algorithm, with an interaction force inspired by the Lennard-Jones potential, to update the positions and velocities of particles. We present our results for homogeneous and heterogeneous systems, i.e., systems composed of particles with the same or different radii, respectively. Interestingly, in the heterogeneous case, we observe segregation effects due to the different particle volumes. Finally, we analyze the parameter sensitivity of both the triggering and the propagation phases. Our simulations confirm the results of a previous two-dimensional model and thus support its applicability to real cases.
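The Mohr-Coulomb limit-equilibrium criterion used for triggering can be illustrated with an infinite-slope factor of safety; the soil parameters below are illustrative, not those of the paper:

```python
import math

def factor_of_safety(c, phi, gamma, z, theta, u):
    """Infinite-slope Mohr-Coulomb factor of safety.

    c     : effective cohesion [Pa]
    phi   : internal friction angle [rad]
    gamma : soil unit weight [N/m^3]
    z     : depth of the potential failure surface [m]
    theta : slope angle [rad]
    u     : pore-water pressure at depth z [Pa] (rises with infiltration)
    """
    normal_stress = gamma * z * math.cos(theta) ** 2
    shear_stress = gamma * z * math.sin(theta) * math.cos(theta)
    resisting = c + (normal_stress - u) * math.tan(phi)
    return resisting / shear_stress

# A dry slope is stable; infiltration raises u until failure triggers (FS < 1).
dry = factor_of_safety(c=5e3, phi=math.radians(30), gamma=1.8e4,
                       z=2.0, theta=math.radians(35), u=0.0)
wet = factor_of_safety(c=5e3, phi=math.radians(30), gamma=1.8e4,
                       z=2.0, theta=math.radians(35), u=1.5e4)
print(dry > 1.0, wet < 1.0)
```

In the model described above, the pore pressure (via the gravimetric water content) is what the fractal diffusion solution supplies.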
SWIFT: SPH With Inter-dependent Fine-grained Tasking
NASA Astrophysics Data System (ADS)
Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.
2018-05-01
SWIFT runs cosmological simulations on peta-scale machines, solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single softening length for all the particles, fixed across the particle set but variable in time. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM background expansion and solves the equations in a comoving frame; the dark-energy equation of state evolves with the scale factor. The structure of the code allows modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT, and the software allows users to add their own.
Collisionless Spectral Kinetic Simulation of Ideal Multipole Resonance Probe
NASA Astrophysics Data System (ADS)
Gong, Junbo; Wilczek, Sebastian; Szeremley, Daniel; Oberrath, Jens; Eremin, Denis; Dobrygin, Wladislaw; Schilling, Christian; Friedrichs, Michael; Brinkmann, Ralf Peter
2016-09-01
Active Plasma Resonance Spectroscopy (APRS) denotes a class of industry-compatible plasma diagnostic methods which utilize the natural ability of plasmas to resonate on or near the electron plasma frequency ωpe. One particular realization of APRS with a high degree of geometric and electric symmetry is the Multipole Resonance Probe (MRP). The Ideal MRP (IMRP) is an even more symmetric idealization which is suited for theoretical investigations. In this work, a spectral kinetic scheme is presented to investigate the behavior of the IMRP in the low-pressure regime. Owing to the large difference in velocity, electrons are treated as particles whereas ions are considered only as a stationary background. In the scheme, the particle pusher integrates the equations of motion for the studied particles, while the Poisson solver determines the electric field at each particle position. The proposed method overcomes the limitations of the cold-plasma model and covers kinetic effects such as collisionless damping.
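The particle-pusher step in such kinetic schemes is typically a symplectic kick-drift-kick (leapfrog/velocity-Verlet) integrator. A minimal sketch, with a prescribed linear restoring field standing in for the Poisson-solver output:

```python
import numpy as np

def leapfrog_push(x, v, efield, qm, dt):
    """One kick-drift-kick step: half kick by the field, full drift,
    half kick with the field evaluated at the new positions."""
    v_half = v + 0.5 * dt * qm * efield(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * qm * efield(x_new)
    return x_new, v_new

# A linear restoring field E(x) = -k*x produces a plasma-like oscillation.
k = 1.0
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = leapfrog_push(x, v, lambda pos: -k * pos, qm=1.0, dt=0.01)
energy = 0.5 * v[0]**2 + 0.5 * k * x[0]**2
print(abs(energy - 0.5) < 1e-3)   # symplectic scheme: energy stays bounded
```

The bounded energy error over long runs is the reason symplectic pushers are standard in particle simulation.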
Preferential Concentration Of Solid Particles In Turbulent Horizontal Circular Pipe Flow
NASA Astrophysics Data System (ADS)
Kim, Jaehee; Yang, Kyung-Soo
2017-11-01
In particle-laden turbulent pipe flow, turbophoresis can lead to a preferential concentration of particles near the wall. To investigate this phenomenon, one-way coupled Direct Numerical Simulation (DNS) has been performed. Fully-developed turbulent pipe flow of the carrier fluid (air) is at Reτ = 200 based on the pipe radius and the mean friction velocity, whereas the Stokes numbers of the particles (solid) are St+ = 0.1, 1, 10 based on the mean friction velocity and the kinematic viscosity of the fluid. The computational domain for particle simulation is extended along the axial direction by duplicating the domain of the fluid simulation. By doing so, particle statistics in the spatially developing region as well as in the fully-developed region can be obtained. Accumulation of particles is observed at St+ = 1 and 10, mostly in the viscous sublayer, and is more intense in the latter case. Compared with other authors' previous results, our results suggest that the drag force on the particles should be computed by using an empirical correlation and a higher-order interpolation scheme even in a low-Re regime in order to improve the accuracy of particle simulation. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (No. 2015R1A2A2A01002981).
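A widely used empirical correlation of the kind referred to is the Schiller-Naumann correction to Stokes drag, assumed here as an example (the abstract does not name its specific correlation):

```python
def drag_correction(re_p):
    """Schiller-Naumann correction to Stokes drag, commonly used for
    particle Reynolds numbers up to about 800; the drag force becomes
    F = 3*pi*mu*d*(u_fluid - u_particle) * drag_correction(re_p)."""
    return 1.0 + 0.15 * re_p**0.687

print(drag_correction(0.0))          # 1.0: reduces to pure Stokes drag
print(1.0 < drag_correction(10.0) < 2.0)
```

Even at modest particle Reynolds numbers the correction is tens of percent, which is why pure Stokes drag can degrade particle statistics.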
Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations
NASA Astrophysics Data System (ADS)
Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.
2017-12-01
Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power law relationships with constant parameters for ice particle mass, area, and terminal fallspeed relationships as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m=aDb) with co-varying "a" and "b" values constrained by observational distributions tested over a range of spatiotemporal autocorrelation scales. Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top of atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface and satellite measured longwave and shortwave radiative fluxes. 
Ensemble spreads are characterized and compared against initial condition ensemble spreads for a range of variables.
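The stochastic a-b sampling can be sketched as draws from a correlated bivariate distribution; the means, spreads, and correlation below are placeholders, not the observationally constrained MC3E values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mass_size_params(n, corr=-0.8):
    """Draw co-varying (a, b) pairs for the ice mass-size law m = a*D**b.
    Lognormal a and normal b with a negative correlation; all distribution
    parameters here are illustrative, not the observational fits."""
    mean = np.array([np.log(0.005), 2.1])          # mean of log(a) and b
    std = np.array([0.5, 0.15])
    cov = np.array([[std[0]**2, corr * std[0] * std[1]],
                    [corr * std[0] * std[1], std[1]**2]])
    log_a, b = rng.multivariate_normal(mean, cov, size=n).T
    return np.exp(log_a), b

a, b = sample_mass_size_params(1000)
D = 1e-3                       # particle maximum dimension [m]
masses = a * D**b              # one stochastic mass per sampled (a, b) pair
print((masses > 0).all())
```

Spatiotemporal autocorrelation, as tested in the study, would additionally correlate successive draws in space and time rather than sampling them independently.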
On the modeling of the 2010 Gulf of Mexico Oil Spill
NASA Astrophysics Data System (ADS)
Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.
2011-09-01
Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability to advect oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, a single state for the oil, and particle removal modeled as a Monte Carlo process parameterized by a single removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Macondo well. The initial conditions (ICs) of oil particle location for the two-dimensional surface oil trajectory forecasts are based on fusing all available information, including satellite-based analyses. The resulting oil map is digitized into a shape file within which polygon-filling software generates longitudes and latitudes with variable particle density depending on the amount of oil present in the observations for the IC. The more complex system assumes three states (light, medium, heavy) for the oil, each with a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields for predicting particle motion from data-assimilative models produced better particle trajectory distributions than a free-running model with no data assimilation.
Monte Carlo simulations of the three-dimensional oil particle trajectories were also performed; their ensembles were generated by perturbing the size of the oil particles and the fraction in a given size range released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. An average of these results indicates that after three months, about 25% of the oil remains in the water column and that most of the oil is below 800 m.
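The Monte Carlo removal process parameterized by a single rate can be sketched as per-particle survival draws; the rate below is illustrative, not the calibrated value:

```python
import numpy as np

rng = np.random.default_rng(42)

def step_particles(active, removal_rate, dt):
    """Monte Carlo weathering: each still-active particle survives the time
    step with probability exp(-removal_rate*dt); evaporation and other
    weathering processes are lumped into the single rate."""
    survive = rng.random(active.size) < np.exp(-removal_rate * dt)
    return active & survive

n = 10000
active = np.ones(n, dtype=bool)
rate = 0.05                     # removals per day (illustrative value)
for _ in range(30):             # thirty daily steps
    active = step_particles(active, rate, dt=1.0)

# Ensemble survival tracks the analytic value exp(-rate*30) ~ 0.223
print(abs(active.mean() - np.exp(-1.5)) < 0.02)
```

The three-state system described above would carry one such rate per oil state (light, medium, heavy) instead of a single global value.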
NASA Astrophysics Data System (ADS)
Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart
2017-07-01
We present a collision model for phase-resolved Direct Numerical Simulations of sediment transport that couple the fluid and particles by the Immersed Boundary Method. Typically, a contact model for these types of simulations comprises a lubrication force for particles in close proximity to another solid object, a normal contact force to prevent particles from overlapping, and a tangential contact force to account for friction. Our model extends the work of previous authors to improve upon the time integration scheme to obtain consistent results for particle-wall collisions. Furthermore, we account for polydisperse spherical particles and introduce new criteria to account for enduring contact, which occurs in many sediment transport situations. This is done without using arbitrary values for physically-defined parameters and by maintaining the full momentum balance of a particle in enduring contact. We validate our model against several test cases for binary particle-wall collisions as well as the collective motion of a sediment bed sheared by a viscous flow, yielding satisfactory agreement with experimental data by various authors.
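The implicit pairwise treatment behind such splitting schemes can be sketched in one dimension, with a stiff pairwise damping force standing in for lubrication (a toy model, not the authors' IBM-DNS implementation):

```python
import numpy as np

def lubrication_sweep(v, pairs, coeffs, m, dt, n_sweeps=50, tol=1e-10):
    """Gauss-Seidel-style implicit update of particle velocities under a
    stiff pairwise damping force F = -c*(v_i - v_j), a 1D stand-in for
    lubrication. Each pair is solved implicitly, and sweeping over all
    pairs iterates toward the coupled implicit solution."""
    v = v.copy()
    for _ in range(n_sweeps):
        max_change = 0.0
        for (i, j), c in zip(pairs, coeffs):
            w = v[i] - v[j]
            # Implicit pair solve: relative velocity damped by 1/(1 + 2*c*dt/m)
            # while the pair's total momentum is left unchanged.
            w_new = w / (1.0 + 2.0 * c * dt / m)
            dv = 0.5 * (w - w_new)
            v[i] -= dv
            v[j] += dv
            max_change = max(max_change, abs(dv))
        if max_change < tol:
            break
    return v

v0 = np.array([1.0, 0.0, -1.0])
pairs = [(0, 1), (1, 2)]
coeffs = [1e6, 1e6]          # stiffness that would cripple an explicit step
v = lubrication_sweep(v0, pairs, coeffs, m=1.0, dt=1e-3)
print(abs(v.sum() - v0.sum()) < 1e-12)   # total momentum conserved
```

The payoff is that the time step can be set by the hydrodynamics rather than by the diverging short-range forces.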
NASA Astrophysics Data System (ADS)
Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.
2017-12-01
Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.
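The stochastic collection equation that these methods solve can be illustrated with a naive explicit discretization on a linear mass grid and a constant kernel (none of BR74, J94, or B00, just the equation itself):

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete stochastic collection
    (Smoluchowski) equation with constant kernel K; n[k] is the number
    density of drops of mass k+1 mass units."""
    nb = n.size
    gain = np.zeros(nb)
    for k in range(1, nb):
        for i in range(k):   # coalescing masses (i+1) + (k-i) = k+1
            gain[k] += 0.5 * K * n[i] * n[k - 1 - i]
    loss = K * n * n.sum()
    return n + dt * (gain - loss)

n = np.zeros(100)
n[0] = 1.0                                  # monodisperse initial condition
mass0 = (np.arange(1, 101) * n).sum()
for _ in range(100):
    n = smoluchowski_step(n, K=1.0, dt=0.01)
mass = (np.arange(1, 101) * n).sum()
# Coalescence conserves mass while the total number density decreases.
print(abs(mass - mass0) / mass0 < 1e-6, n.sum() < 1.0)
```

Operational schemes use logarithmic mass grids and realistic kernels, which is exactly where the numerical-diffusion differences between BR74, J94, and B00 arise.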
Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows
NASA Astrophysics Data System (ADS)
Raman, Venkatramanan
A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effects of the micromixing model, turbulence model, and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions, as well as a reduced form with 16 species and 21 reactions, are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple steady states observed in a non-premixed feed configuration of the chlorination reactor.
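The retrieve-or-add control flow at the heart of ISAT can be sketched as a cache with a trust radius; real ISAT also stores mapping gradients and grows ellipsoids of accuracy, so this is only the skeleton:

```python
import numpy as np

class ISATLite:
    """Toy in situ tabulation: store (composition, reacted state) pairs and
    reuse the nearest stored entry when a query falls within its trust
    radius. Only the retrieve-or-add control flow is sketched here."""

    def __init__(self, func, radius):
        self.func, self.radius = func, radius
        self.keys, self.vals = [], []
        self.hits = self.misses = 0

    def query(self, phi):
        if self.keys:
            d = np.linalg.norm(np.array(self.keys) - phi, axis=1)
            i = int(d.argmin())
            if d[i] < self.radius:
                self.hits += 1
                return self.vals[i]
        self.misses += 1
        val = self.func(phi)
        self.keys.append(phi.copy())
        self.vals.append(val)
        return val

react = lambda phi: np.tanh(phi)     # stand-in for an expensive chemistry call
rng = np.random.default_rng(5)
tab = ISATLite(react, radius=0.05)
for _ in range(2000):
    tab.query(rng.uniform(0.0, 1.0, size=2))
# After a warm-up phase, most queries are retrieved rather than computed.
print(tab.hits > tab.misses)
```

Because accessed compositions cluster in a low-dimensional region of composition space, the table stays small relative to a full a priori tabulation.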
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt
2017-11-27
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
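The a priori rearrangement of subdomain walls can be sketched along one axis: walls are placed at equal-work quantiles of an estimated cost density (a sketch of the idea only, not the authors' implementation):

```python
import numpy as np

def place_walls(positions, cost, n_domains):
    """Place subdomain walls along one axis so that each slab carries an
    equal share of the estimated per-particle computational cost
    (equal-work quantiles rather than equal-volume slabs)."""
    order = np.argsort(positions)
    cum = np.cumsum(cost[order])
    walls = []
    for k in range(1, n_domains):
        idx = np.searchsorted(cum, cum[-1] * k / n_domains)
        walls.append(positions[order][min(idx, order.size - 1)])
    return walls

rng = np.random.default_rng(1)
# A dense (expensive) region near x = 0.2 embedded in a dilute background.
x = np.concatenate([rng.normal(0.2, 0.05, 8000), rng.uniform(0.0, 1.0, 2000)])
cost = np.ones(x.size)                   # uniform per-particle cost estimate
walls = place_walls(x, cost, n_domains=4)
# Walls crowd around the dense region instead of splitting space evenly.
print(walls[0] < 0.25 and walls[1] < 0.25)
```

In an adaptive resolution simulation, the per-particle cost estimate would additionally reflect the local resolution (atomistic versus coarse-grained), which is what makes the decomposition heterogeneity-sensitive.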
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady-state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy, transparent and efficient DA implementation that avoids unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling.
Moreover, this DA method proved considerably more efficient than MC methods, except when short time steps or low channel numbers were used. PMID:22629320
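The diffusion approximation for a single two-state gating variable can be sketched with an Euler-Maruyama step and channel-number-scaled noise (a simplified scalar case, not the general kinetic-scheme SDE derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def gating_sde_step(n, alpha, beta, n_ch, dt):
    """Euler-Maruyama step of the diffusion approximation for a two-state
    gating particle: deterministic kinetics plus noise whose variance
    shrinks with the number of channels n_ch."""
    drift = alpha * (1.0 - n) - beta * n
    diff = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / n_ch)
    n_new = n + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return min(max(n_new, 0.0), 1.0)   # keep the open fraction in [0, 1]

alpha, beta, n_ch, dt = 0.5, 0.5, 1000, 0.01
n, samples = 0.5, []
for i in range(100000):
    n = gating_sde_step(n, alpha, beta, n_ch, dt)
    if i >= 5000:
        samples.append(n)
samples = np.array(samples)
# Stationary mean is alpha/(alpha+beta) = 0.5; fluctuations scale as 1/sqrt(n_ch).
print(abs(samples.mean() - 0.5) < 0.01)
```

A full kinetic scheme couples several such variables through a shared noise structure, which is precisely what the generic SDE derivation addresses.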
Development of Spaceborne Radar Simulator by NICT and JAXA using JMA Cloud-resolving Model
NASA Astrophysics Data System (ADS)
Kubota, T.; Eito, H.; Aonashi, K.; Hashimoto, A.; Iguchi, T.; Hanado, H.; Shimizu, S.; Yoshida, N.; Oki, R.
2009-12-01
We are developing synthetic spaceborne radar data toward a simulation of the Dual-frequency Precipitation Radar (DPR) aboard the Global Precipitation Measurement (GPM) core satellite. Our purposes are the production of test-bed data for higher-level DPR algorithm developers, in addition to the diagnosis of a cloud-resolving model (CRM). To make the synthetic data, we utilize the CRM of the Japan Meteorological Agency (JMA-NHM) (Ikawa and Saito 1991, Saito et al. 2006, 2007) and the spaceborne radar simulation algorithm of the National Institute of Information and Communications Technology (NICT) and the Japan Aerospace Exploration Agency (JAXA), named the Integrated Satellite Observation Simulator for Radar (ISOSIM-Radar). The ISOSIM-Radar simulates received power data in a field of view of the spaceborne radar, taking the scan angle of the radar into account (Oouchi et al. 2002, Kubota et al. 2009). The received power data are computed with gaseous and hydrometeor attenuation taken into account. The backscattering and extinction coefficients are calculated assuming the Mie approximation for all species. The dielectric constants for solid particles are computed with the Maxwell-Garnett model (Bohren and Battan 1982). Drop size distributions are treated in accordance with those of the JMA-NHM. We assume a spherical sea surface, a Gaussian antenna pattern, and 49 antenna beam directions for scan angles from -17 to 17 deg. in the PR. In this study, we report the diagnosis of the JMA-NHM with reference to the TRMM Precipitation Radar (PR) and the CloudSat Cloud Profiling Radar (CPR) using the ISOSIM-Radar, focusing on comparisons among the cloud microphysics schemes of the JMA-NHM. We tested three kinds of explicit bulk microphysics schemes based on Lin et al. (1983): a three-ice 1-moment scheme, a three-ice 2-moment scheme (Eito and Aonashi 2009), and a newly developed four-ice full 2-moment scheme (Hashimoto 2008).
The hydrometeor species considered here are rain, graupel, snow, cloud water, cloud ice, and hail (4-ice scheme only). We examined a case of an intersection with the TRMM PR and the CloudSat CPR on 6 April 2008 over the sea south of Kyushu Island, Japan. In this work, observed rainfall systems are simulated with one-way double-nested domains having horizontal grid sizes of 5 km (outer) and 2 km (inner). Data used here are from the inner domain only. Comparisons with the PR indicated better performance for the 2-moment bulk schemes, suggesting that prognostic number concentrations of frozen hydrometeors are more effective at high altitudes and that constant number concentrations can lead to an overestimation of snow there. For the three-ice schemes, the simulated received power was overestimated above the freezing level relative to the observed data. In contrast, this overestimation of frozen particles was greatly reduced for the four-ice scheme.
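The two-way path-integrated attenuation that such a simulator applies to the received power can be sketched on a reflectivity profile (a toy downward-looking geometry, not ISOSIM-Radar itself):

```python
import numpy as np

def attenuated_reflectivity(z_dbz, k_db_per_km, dr_km):
    """Apply two-way path-integrated attenuation to a reflectivity profile
    seen by a downward-looking spaceborne radar: each gate is reduced by
    twice the one-way attenuation accumulated above it (gate 0 is the top)."""
    path_db = 2.0 * dr_km * (np.cumsum(k_db_per_km) - k_db_per_km)
    return z_dbz - path_db

z = np.full(8, 30.0)      # uniform 30 dBZ profile
k = np.full(8, 1.0)       # 1 dB/km one-way specific attenuation (illustrative)
z_att = attenuated_reflectivity(z, k, dr_km=0.5)
print(z_att[0], z_att[-1])   # 30.0 at the top gate, 23.0 at the lowest gate
```

In a full simulator the specific attenuation itself depends on the hydrometeor contents and the Mie extinction coefficients of each species.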
NASA Astrophysics Data System (ADS)
Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin
2018-03-01
An advanced online-coupled meteorology and chemistry model, WRF-CAM5, has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36, 12, and 4 km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance and investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations, and to examine aerosol-cloud interactions and their impacts on air quality. A comprehensive evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model predicts major meteorological variables well, such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation across different resolutions, with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions, such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP), due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolution, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations using an alternative aerosol activation scheme (the Fountoukis and Nenes (FN) series scheme) against the default (AG) scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation.
The sensitivity simulations using the FN series scheme also show large impacts on radiation, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on meteorology and air quality, but to a lesser extent, compared with the overall differences between the FN series and AG schemes. Relative to those differences, adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events, indicating its importance in modulating regional climate over East Asia.
Particle Based Simulations of Complex Systems with MP2C : Hydrodynamics and Electrostatics
NASA Astrophysics Data System (ADS)
Sutmann, Godehard; Westphal, Lidia; Bolten, Matthias
2010-09-01
Particle-based simulation methods are well-established paths to explore system behavior on microscopic to mesoscopic time and length scales. With the development of new computer architectures, it becomes more and more important to concentrate on local algorithms which do not need global data transfer or reorganisation of large arrays of data across processors. This requirement bears particularly on long-range interactions in particle systems, i.e., mainly hydrodynamic and electrostatic contributions. In this article, emphasis is given to the implementation and parallelization of the Multi-Particle Collision Dynamics method for hydrodynamic contributions and a splitting scheme based on multigrid for electrostatic contributions. Implementations are done for massively parallel architectures and are demonstrated on the IBM Blue Gene/P system JUGENE in Jülich.
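The locality of the Multi-Particle Collision (SRD) step, which is what makes it attractive for massively parallel machines, can be sketched in a minimal serial 2D version:

```python
import numpy as np

rng = np.random.default_rng(3)

def mpc_collision(vel, cell, n_cells, angle=np.pi / 2):
    """SRD collision step in 2D: in every cell, rotate particle velocities
    about the cell-mean velocity by +/- angle. The operation is purely
    local to each cell, so it needs no global data transfer; per-cell
    momentum (and kinetic energy) is conserved exactly."""
    out = vel.copy()
    for c in range(n_cells):
        mask = cell == c
        if not mask.any():
            continue
        u = vel[mask].mean(axis=0)
        s = 1.0 if rng.random() < 0.5 else -1.0
        ca, sa = np.cos(s * angle), np.sin(s * angle)
        rot = np.array([[ca, -sa], [sa, ca]])
        out[mask] = u + (vel[mask] - u) @ rot.T
    return out

n = 1000
x = rng.uniform(0.0, 10.0, size=n)
vel = rng.standard_normal((n, 2))
new_vel = mpc_collision(vel, x.astype(int), 10)
print(np.allclose(new_vel.sum(axis=0), vel.sum(axis=0)))   # momentum conserved
```

A parallel implementation assigns cells to processors, so each collision step only requires communication for particles near subdomain boundaries.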
Perturbative Particle Simulation for an Intense Ion Beam in a Periodic Quadrupole Focusing Field
NASA Astrophysics Data System (ADS)
Lee, W. W.
1996-11-01
This work, supported by DOE contract DE-AC02-76-CHO-3073, is in collaboration with Q. Qian and R. C. Davidson, PPPL. Stability and transport properties of an intense ion beam propagating through an alternating-gradient quadrupole focusing field with an initial Kapchinskij-Vladimirskij (KV) distribution [I. M. Kapchinskij and V. V. Vladimirskij, Proceedings of the International Conference on High Energy Accelerators and Instrumentation (CERN, Geneva, 1959), p. 274] are studied using newly developed perturbative particle simulation techniques. Specifically, two different schemes have been investigated: the first is based on the δf scheme originally developed for tokamak plasmas [A. Dimits and W. W. Lee, J. Comput. Phys. 107, 309 (1993); S. Parker and W. W. Lee, Phys. Fluids B 5, 77 (1993)], and the other is related to the linearized trajectory scheme [J. Byers, Proceedings of the 4th Conference on Numerical Simulation of Plasmas (NRL, Washington, D.C., 1970), p. 496]. While the former is useful for both linear and nonlinear simulations, the latter can be used for benchmarking purposes. Stability properties and associated mode structures are investigated over a wide range of beam current and focusing field strength. The new schemes are found to be highly effective in describing detailed properties of beam stability and propagation over long distances. For example, a stable KV beam can indeed propagate over hundreds of lattice periods in the simulation with negligible growth. On the other hand, in the unstable region when the beam current is sufficiently high [I. Hofmann, L. Laslett, L. Smith, and I. Haber, Particle Accelerators 13, 145 (1983)], large-amplitude density perturbations with (δn)_max/n̂_0 ~ 1 and low azimuthal harmonic numbers, concentrated near the beam surface, are observed. The corresponding mode structures are of Gaussian shape in the radial direction. The physics of nonlinear saturation and emittance growth will be discussed.
The schemes can also be applied to other choices of injected distribution function. It is also intended to use them to study issues such as halo formation, stochasticity, charge homogenization, entropy production, and collisionless dissipation. These are critical physics problems in heavy ion fusion and other related fields that rely on high-brightness, high-current ion beams to deliver high power to the target [E. P. Lee and J. Hovingh, Fusion Technology 15, 369 (1989)]. Some of these issues were the focus of a recent investigation [Q. Qian, R. C. Davidson, and C. Chen, Phys. Plasmas 1, 2674 (1995)].
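The noise-reduction argument for δf versus full-F marker methods can be illustrated with a one-dimensional density-perturbation estimate (a toy static example, not the beam simulation itself):

```python
import numpy as np

rng = np.random.default_rng(11)
eps, n_mark, trials = 0.01, 2000, 300
exact = eps / np.pi   # perturbed particle fraction in [0, pi) for f ~ 1 + eps*sin(x)

full_est, df_est = [], []
for _ in range(trials):
    # Full-F: markers sampled from f(x) = (1 + eps*sin x)/(2*pi) by rejection;
    # the O(1) counting noise sits on top of the O(eps) signal.
    cand = rng.uniform(0.0, 2.0 * np.pi, 2 * n_mark)
    keep = rng.random(cand.size) < (1.0 + eps * np.sin(cand)) / (1.0 + eps)
    x = cand[keep][:n_mark]
    full_est.append(np.mean(x < np.pi) - 0.5)
    # delta-f: markers sample the background f0; the weights carry only the
    # perturbation, so the sampling noise is O(eps) as well.
    xm = rng.uniform(0.0, 2.0 * np.pi, n_mark)
    df_est.append(np.sum(eps * np.sin(xm)[xm < np.pi]) / n_mark)

full_noise = float(np.std(full_est))
df_noise = float(np.std(df_est))
print(df_noise < full_noise / 10)   # delta-f markers are far quieter
```

This order-1/ε noise advantage in the linear stage is the same argument made for the generalized weight-based schemes described at the top of this collection.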
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude range as well as the total length of the simulation to achieve the most efficient computations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
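As a rough illustration of why truncation errors group by scheme order, here is a minimal Python sketch (not the MPTRAC code) comparing the midpoint and classical fourth-order Runge-Kutta schemes; a solid-body-rotation wind field, chosen because its exact trajectories are known circles, stands in for the ECMWF data:

```python
import numpy as np

def wind(t, x):
    # Hypothetical analytic wind field: steady solid-body rotation,
    # so the exact trajectory through (1, 0) is the unit circle.
    return np.array([-x[1], x[0]])

def step_midpoint(f, t, x, dt):
    # Second-order midpoint scheme: one extra evaluation at dt/2.
    k1 = f(t, x)
    return x + dt * f(t + 0.5 * dt, x + 0.5 * dt * k1)

def step_rk4(f, t, x, dt):
    # Classical fourth-order Runge-Kutta scheme.
    k1 = f(t, x)
    k2 = f(t + 0.5 * dt, x + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, x + 0.5 * dt * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def advect(stepper, x0, dt, n):
    x, t = np.array(x0, float), 0.0
    for _ in range(n):
        x = stepper(wind, t, x, dt)
        t += dt
    return x

# After one full rotation (t = 2*pi) the exact trajectory returns to x0,
# so the distance from x0 is the global truncation error.
x0 = np.array([1.0, 0.0])
n, dt = 1000, 2 * np.pi / 1000
err_mid = np.linalg.norm(advect(step_midpoint, x0, dt, n) - x0)
err_rk4 = np.linalg.norm(advect(step_rk4, x0, dt, n) - x0)
# err_rk4 is several orders of magnitude below err_mid at the same dt.
```

At equal time step the fourth-order scheme costs twice as many wind evaluations per step, which is why the paper weighs accuracy against computational time when recommending scheme/step combinations.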
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
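Since evaluating a BOP is local within a finite environment, the parallelization reduces to the familiar domain-decomposition pattern of short-range particle codes. A small hypothetical sketch (not BOPfox code) of the first step, assigning atoms to a 3D grid of domains, after which each domain only needs halo data from its 26 neighboring cells:

```python
import numpy as np

def assign_domains(x, box, grid):
    # Map each atom position to one cell of a 3D domain grid; with an
    # interaction cutoff no larger than a cell edge, forces on an atom
    # depend only on its own cell plus the 26 neighboring (halo) cells.
    idx = np.floor(x / box * np.array(grid)).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)      # guard boundary atoms
    return idx[:, 0] * grid[1] * grid[2] + idx[:, 1] * grid[2] + idx[:, 2]

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 20.0, (100_000, 3))           # atoms in a 20^3 box
domains = assign_domains(x, box=20.0, grid=(4, 4, 4))
counts = np.bincount(domains, minlength=64)
# Every atom belongs to exactly one of the 64 domains; for a uniform
# system the load is nearly balanced (~1562 atoms per domain).
```

In a distributed implementation each of the 64 domains would map to one MPI rank, with halo exchange of boundary atoms before each force evaluation; the sketch only shows the serial decomposition step.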
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
Coupled Tank Systems (CTS) are widely used in industrial applications, especially in chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank. The liquid level in each tank needs to be controlled, and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two variants of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulations are conducted in the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE), and overshoot (OS). It is demonstrated that PSO via the Priority-based Fitness Scheme (PFPSO) is a promising technique for controlling the desired liquid level and improves system performance compared with standard PSO.
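A minimal sketch of the standard-PSO half of the comparison (the first-order plant model, gain bounds, and PSO coefficients below are illustrative assumptions, not the paper's CTS parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def ise(gains, dt=0.05, steps=400):
    # Integral-squared-error of a discrete PID loop on a hypothetical
    # first-order tank model dh/dt = (K*u - h)/tau; K = 2.0, tau = 1.5,
    # and the unit setpoint are stand-ins for the real CTS.
    kp, ki, kd = gains
    h, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - h
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        h += dt * (2.0 * u - h) / 1.5
        if abs(h) > 1e6:                  # unstable gains: infinite cost
            return np.inf
        cost += err * err * dt
    return cost

# Minimal global-best PSO over (Kp, Ki, Kd) in [0, 5]^3.
n, dim, iters = 20, 3, 40
pos = rng.uniform(0.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([ise(p) for p in pos])
g = pbest[pcost.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.0, 5.0)
    cost = np.array([ise(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    g = pbest[pcost.argmin()].copy()
# g now holds the PSO-tuned (Kp, Ki, Kd); ise(g) is the minimized cost.
```

The PFPSO variant of the paper would replace the single `ise` cost with a prioritized evaluation of settling time, steady-state error, and overshoot; that ordering logic is not reproduced here.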
NASA Astrophysics Data System (ADS)
Vecil, Francesco; Lafitte, Pauline; Rosado Linares, Jesús
2013-10-01
We study, at the particle and kinetic levels, a collective behavior model based on three phenomena: self-propulsion, friction (Rayleigh effect), and an attractive/repulsive (Morse) potential, rescaled so that the total mass of the system remains constant independently of the number of particles N. In the first part of the paper, we introduce the particle model: the agents are numbered and described by their positions and velocities. We identify five parameters that govern the possible asymptotic states of this system (clumps, spheres, dispersion, mills, rigid-body rotation, flocks) and perform a numerical analysis in the 3D setting. In the second part of the paper, we describe the kinetic system derived as the limit of the particle model as N tends to infinity; we propose, in 1D, a numerical scheme for the simulations and perform a numerical analysis devoted to recovering, asymptotically, patterns similar to those emerging from the equivalent particle system when the particles originally evolve on a circle.
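A hedged sketch of such a self-propulsion/friction/Morse particle model (the parameter values below are illustrative, not the five-parameter sets studied in the paper):

```python
import numpy as np

def accelerations(x, v, alpha=1.5, beta=0.5, Ca=0.5, la=2.0, Cr=1.0, lr=0.5):
    # Pairwise Morse interaction U(r) = Cr*exp(-r/lr) - Ca*exp(-r/la),
    # rescaled by 1/N so the total interaction strength is independent
    # of the number of particles. Parameters here are illustrative.
    n = len(x)
    d = x[:, None, :] - x[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                    # exclude self-interaction
    dU = -Cr / lr * np.exp(-r / lr) + Ca / la * np.exp(-r / la)
    a = (-(dU / r)[:, :, None] * d).sum(axis=1) / n   # -U'(r) * unit vector
    # Self-propulsion plus Rayleigh friction: (alpha - beta*|v|^2) v.
    a += (alpha - beta * (v * v).sum(axis=1))[:, None] * v
    return a

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, (50, 3))
v = rng.uniform(-1, 1, (50, 3))
dt = 0.02
for _ in range(1500):                              # simple Euler stepping
    x, v = x + dt * v, v + dt * accelerations(x, v)
speeds = np.linalg.norm(v, axis=1)
# Friction balances propulsion near |v| = sqrt(alpha/beta) ~ 1.73.
```

Which asymptotic state (mill, flock, clump, ...) emerges depends on the ratios Cr/Ca and lr/la together with alpha and beta; the common speed scale sqrt(alpha/beta) is set by the Rayleigh friction alone.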
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Qiang; Fan, Liang-Shih, E-mail: fan.1@osu.edu
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method, and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is further improved by implementing high-order Runge–Kutta schemes in the coupled fluid–particle interaction. The major challenge in implementing high-order Runge–Kutta schemes in the LBM is that flow information such as density and velocity cannot be obtained directly at a fractional time step, since the LBM provides the flow information only at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid–particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge–Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and −0.47 times the lattice spacing, respectively. Simulations of Stokes flow through a simple cubic lattice of rotating spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the magnitude of the drag force when practical rotating speeds of the spheres are encountered.
This finding may lead to more comprehensive studies of the effect of particle rotation on fluid–solid drag laws. It is also demonstrated that, when the third-order or fourth-order Runge–Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including previous IB-LBMs as well as methods combining the IBM with a traditional incompressible Navier–Stokes solver.
On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability
NASA Astrophysics Data System (ADS)
Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.
2015-09-01
The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.
Quantum simulation of an extra dimension.
Boada, O; Celi, A; Latorre, J I; Lewenstein, M
2012-03-30
We present a general strategy to simulate a D+1-dimensional quantum system using a D-dimensional one. We analyze in detail a feasible implementation of our scheme using optical lattice technology. The simplest nontrivial realization of a fourth dimension corresponds to the creation of a bi-volume geometry. We also propose single- and many-particle experimental signatures to detect the effects of the extra dimension.
Optimizing photophoresis and asymmetric force fields for grading of Brownian particles.
Neild, Adrian; Ng, Tuck Wah; Woods, Timothy
2009-12-10
We discuss a scheme that incorporates restricted spatial input location, orthogonal sorting, and movement direction features, with particle sorting achieved by cycling an asymmetric potential on and off while movement is accomplished by photophoresis. Careful investigation shows that the odds of sorting between certain pairs of particle sizes depend solely on the radii in each phase of the process. This means that the most effective overall sorting can be achieved by maximizing the number of phases. This optimized approach is demonstrated using numerical simulation to permit grading of a range of nanometer-scale particle sizes.
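The radius-dependent transport produced by cycling an asymmetric potential on and off can be illustrated with a toy one-dimensional flashing-ratchet simulation (a simplified stand-in for the photophoresis setup; the sawtooth geometry and the Stokes-Einstein scaling D ~ 1/radius are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def flashing_ratchet(radius, L=1.0, a=0.2, t_off=0.05, cycles=400, n=2000):
    # Brownian particles in a sawtooth potential cycled on and off.
    # Off phase: free diffusion with D ~ 1/radius (Stokes-Einstein).
    # On phase: each particle settles into the minimum of its basin.
    # Minima sit at k*L and barriers at k*L + a, so the forward barrier
    # (distance a) is closer than the backward one (distance L - a):
    # the asymmetry rectifies diffusion into a size-dependent drift.
    D = 1.0 / radius                                    # arbitrary units
    x = np.zeros(n)
    for _ in range(cycles):
        x += rng.normal(0.0, np.sqrt(2.0 * D * t_off), n)   # off phase
        x = np.ceil((x - a) / L) * L                        # on phase
    return x.mean() / cycles            # net drift per on/off cycle

drift_small = flashing_ratchet(radius=1.0)
drift_large = flashing_ratchet(radius=4.0)
# Smaller (faster-diffusing) particles drift farther per cycle --
# the size selectivity that each sorting phase exploits.
```

Because the hop probability per cycle depends only on the radius through D, running many phases compounds the per-cycle separation, in line with the paper's conclusion that more phases give better overall sorting.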
Moving charged particles in lattice Boltzmann-based electrokinetics
NASA Astrophysics Data System (ADS)
Kuron, Michael; Rempfer, Georg; Schornbaum, Florian; Bauer, Martin; Godenschwager, Christian; Holm, Christian; de Graaf, Joost
2016-12-01
The motion of ionic solutes and charged particles under the influence of an electric field and the ensuing hydrodynamic flow of the underlying solvent is ubiquitous in aqueous colloidal suspensions. The physics of such systems is described by a coupled set of differential equations, along with boundary conditions, collectively referred to as the electrokinetic equations. Capuani et al. [J. Chem. Phys. 121, 973 (2004)] introduced a lattice-based method for solving this system of equations, which builds upon the lattice Boltzmann algorithm for the simulation of hydrodynamic flow and exploits computational locality. However, thus far, a description of how to incorporate moving boundary conditions into the Capuani scheme has been lacking. Moving boundary conditions are needed to simulate multiple arbitrarily moving colloids. In this paper, we detail how to introduce such a particle coupling scheme, based on an analogue to the moving boundary method for the pure lattice Boltzmann solver. The key ingredients in our method are mass and charge conservation for the solute species and a partial-volume smoothing of the solute fluxes to minimize discretization artifacts. We demonstrate our algorithm's effectiveness by simulating the electrophoresis of charged spheres in an external field; for a single sphere we compare to the equivalent electro-osmotic (co-moving) problem. Our method's efficiency and ease of implementation should prove beneficial to future simulations of the dynamics in a wide range of complex nanoscopic and colloidal systems that were previously inaccessible to lattice-based continuum algorithms.
NASA Astrophysics Data System (ADS)
Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.
2015-11-01
Dense compressible particle-laden flow, which has a complex nature, arises in various engineering applications. A shock wave impacting a particle cloud is a canonical problem for investigating this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud by the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress that capture the energy contained in vortical structures, so that volume-averaged models with point particles can be simulated accurately. However, previous work used the Euler equations, which makes the prediction of vorticity generation and propagation inaccurate. In this work, a fully resolved two-dimensional (2D) simulation using the compressible Navier-Stokes equations, with a volume penalization method to model the particles, has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model for the unclosed terms is created based upon these 2D results. The 1D model uses a two-phase simple low-dissipation AUSM scheme (TSLAU) coupled with the compressible two-phase kinetic energy equation.
Random walk, diffusion and mixing in simulations of scalar transport in fluid flows
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2008-12-01
Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
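The random-walk/diffusion equivalence underlying the scheme can be checked numerically. This sketch verifies that Lagrangian walkers with step variance 2DΔt reproduce the analytic spread σ² = 2Dt of the continuous diffusion equation (positions only; the paper's mixing model for scalar properties is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# N Lagrangian particles performing a Gaussian random walk with step
# variance 2*D*dt; after time t = steps*dt their spread must match the
# analytic solution sigma^2 = 2*D*t of the diffusion equation.
D, dt, steps, n = 0.5, 0.01, 200, 100_000
x = np.zeros(n)
for _ in range(steps):
    x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n)
t = steps * dt
sigma2 = x.var()
# Statistical error in sigma2 shrinks like 1/sqrt(n); here 2*D*t = 2.0.
```

The residual discrepancy between `sigma2` and 2Dt is a statistical error of exactly the kind the paper assesses: it decays with the particle count, not with the time step.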
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias
2016-01-20
The neutrino mechanism of core-collapse supernova is investigated via non-relativistic, two-dimensional (2D), neutrino radiation–hydrodynamic simulations. For the transport of electron flavor neutrinos, we use the interaction rates defined by Bruenn and the isotropic diffusion source approximation (IDSA) scheme, which decomposes the transported particles into trapped-particle and streaming-particle components. Heavy neutrinos are described by a leakage scheme. Unlike the “ray-by-ray” approach in some other multidimensional supernova models, we use cylindrical coordinates and solve the trapped-particle component in multiple dimensions, improving the proto-neutron star resolution and the neutrino transport in angular and temporal directions. We provide an IDSA verification by performing one-dimensional (1D) and 2D simulations with 15 and 20 M_⊙ progenitors from Woosley et al. and discuss the difference between our IDSA results and those existing in the literature. Additionally, we perform Newtonian 1D and 2D simulations from prebounce core collapse to several hundred milliseconds postbounce with 11, 15, 21, and 27 M_⊙ progenitors from Woosley et al. with the HS(DD2) equation of state. General-relativistic effects are neglected. We obtain robust explosions with diagnostic energies E_dia ≳ 0.1–0.5 B (1 B ≡ 10^51 erg) for all considered 2D models within approximately 100–300 ms after bounce and find that explosions are mostly dominated by the neutrino-driven convection, although standing accretion shock instabilities are observed as well. We also find that the level of electron deleptonization during collapse dramatically affects the postbounce evolution, e.g., the neglect of neutrino–electron scattering during collapse will lead to a stronger explosion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation
NASA Astrophysics Data System (ADS)
Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long
2018-06-01
We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computing (HPC) systems with thread-oriented programming. PHoToNs adopts a hybrid scheme to compute the gravitational force: the conventional Particle-Mesh (PM) algorithm computes the long-range force, the Tree algorithm computes the short-range force, and the direct-summation Particle-Particle (PP) algorithm computes gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Thread programming is used to flexibly manage the domain communication, PM calculation, and synchronization, as well as the Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
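The PP part of such a hybrid scheme is plain direct summation over close pairs. A minimal sketch (G = 1 units; the Plummer softening is an assumed regularization, and this is not PHoToNs code):

```python
import numpy as np

def pp_gravity(x, m, eps=1e-2):
    # O(N^2) particle-particle gravitational accelerations with Plummer
    # softening eps. In a PM+Tree+PP code this kernel is applied only
    # to very close pairs; here it is run on all pairs for simplicity.
    d = x[None, :, :] - x[:, None, :]              # d[i, j] = x_j - x_i
    r2 = (d * d).sum(-1) + eps * eps
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-force
    return (d * (m[None, :] * inv_r3)[:, :, None]).sum(axis=1)

rng = np.random.default_rng(3)
x = rng.standard_normal((100, 3))
m = np.full(100, 1.0 / 100)
a = pp_gravity(x, m)
# Newton's third law: the total momentum change sum_i m_i a_i vanishes.
```

The pairwise antisymmetry that makes the total force vanish is also what the Tree and PM layers must preserve approximately; checking it is a cheap sanity test for any gravity kernel.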
Tracking Simulation of Third-Integer Resonant Extraction for Fermilab's Mu2e Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Chong Shik; Amundson, James; Michelotti, Leo
2015-02-13
The Mu2e experiment at Fermilab requires acceleration and transport of intense proton beams in order to deliver stable, uniform particle spills to the production target. To meet the experimental requirement, particles will be extracted slowly from the Delivery Ring to the external beamline. Using Synergia2, we have performed multi-particle tracking simulations of third-integer resonant extraction in the Delivery Ring, including space charge effects, physical beamline elements, and apertures. A piecewise-linear ramp profile of the tune quadrupoles was used to maintain a constant averaged spill rate throughout extraction. To study and minimize beam losses, we implemented a number of features, including beamline element apertures and septum plane alignments. Additionally, the RF Knockout (RFKO) technique, which excites particles transversely, is employed for spill regulation; combined with a feedback system, it assists in fine-tuning spill uniformity. Simulation studies were carried out to optimize the RFKO feedback scheme, which will be helpful in designing the final spill regulation system.
Laser-driven three-stage heavy-ion acceleration from relativistic laser-plasma interaction.
Wang, H Y; Lin, C; Liu, B; Sheng, Z M; Lu, H Y; Ma, W J; Bin, J H; Schreiber, J; He, X T; Chen, J E; Zepf, M; Yan, X Q
2014-01-01
A three-stage heavy ion acceleration scheme for generation of high-energy quasimonoenergetic heavy ion beams is investigated using two-dimensional particle-in-cell simulation and analytical modeling. The scheme is based on the interaction of an intense linearly polarized laser pulse with a compound two-layer target (a front heavy ion layer + a second light ion layer). We identify that, under appropriate conditions, the heavy ions preaccelerated by a two-stage acceleration process in the front layer can be injected into the light ion shock wave in the second layer for a further third-stage acceleration. These injected heavy ions are not influenced by the screening effect from the light ions, and an isolated high-energy heavy ion beam with relatively low-energy spread is thus formed. Two-dimensional particle-in-cell simulations show that ∼100MeV/u quasimonoenergetic Fe24+ beams can be obtained by linearly polarized laser pulses at intensities of 1.1×1021W/cm2.
1988-06-30
The equation is solved using finite difference methods. The distribution function is represented by a large number of particles, whose velocities change as ... Small-angle Coulomb collisions: the FP equation describing small-angle Coulomb collisions can be solved numerically using finite difference techniques. A finite Fourier transform (FT) is made in z; then we can solve for each k using a finite difference scheme [5].
Individual bioaerosol particle discrimination by multi-photon excited fluorescence.
Kiselev, Denis; Bonacina, Luigi; Wolf, Jean-Pierre
2011-11-21
Femtosecond laser induced multi-photon excited fluorescence (MPEF) from individual airborne particles is tested for the first time for discriminating bioaerosols. The fluorescence spectra, analysed in 32 channels, exhibit a composite character originating from simultaneous two-photon and three-photon excitation at 790 nm. Simulants of bacteria aggregates (clusters of dyed polystyrene microspheres) and different pollen particles (Ragweed, Pecan, Mulberry) are clearly discriminated by their MPEF spectra. This demonstration experiment opens the way to more sophisticated spectroscopic schemes like pump-probe and coherent control. © 2011 Optical Society of America
Correcting for particle counting bias error in turbulent flow
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Baratuci, W.
1985-01-01
Even an ideal seeding device, generating particles that exactly follow the flow, still leaves a major source of error: particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement on the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by a particular flow measurement system. To check various correction mechanisms in an ideal setting, and to gain insight into how to correct with the fewest initial assumptions, a computer simulation of laser anemometer measurements in a turbulent flow was constructed. The simulator and the results of its use are discussed.
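The counting bias itself is easy to reproduce in a toy Monte-Carlo model (not the paper's simulator): if the detection rate is proportional to particle speed, the sampled mean is biased toward E[u²]/E[u], and inverse-velocity weighting, one of the classic corrections, recovers the true mean:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte-Carlo sketch of particle counting bias in laser anemometry:
# the probability that a particle transits the probe volume is
# proportional to its speed, so fast particles are over-sampled.
u_true = rng.normal(10.0, 3.0, 1_000_000)        # "true" 1D velocities
u_true = u_true[u_true > 0]                      # keep the toy model simple
w = u_true / u_true.sum()                        # arrival rate ~ velocity
sampled = rng.choice(u_true, size=200_000, p=w)  # what the counter records

biased_mean = sampled.mean()
# Inverse-velocity weighting corrects the bias:
corrected_mean = 1.0 / (1.0 / sampled).mean()
# biased_mean ~ E[u^2]/E[u] = 10.9 here, a ~9% error; corrected ~ 10.0.
```

With mean 10 and rms 3 (30% turbulence intensity) the biased mean overshoots by about 9%, the same order as the up-to-25% errors quoted above; whether the inverse-velocity assumption holds for a given instrument is exactly the kind of question the simulator is built to answer.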
Stable schemes for dissipative particle dynamics with conserved energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltz, Gabriel, E-mail: stoltz@cermics.enpc.fr
2017-07-01
This article presents a new numerical scheme for the discretization of dissipative particle dynamics with conserved energy. The key idea is to reduce elementary pairwise stochastic dynamics (either fluctuation/dissipation or thermal conduction) to effective single-variable dynamics, and to approximate the solution of these dynamics with one step of a Metropolis–Hastings algorithm. This ensures by construction that no negative internal energies are encountered during the simulation, and hence allows the admissible timesteps for integrating the dynamics to be increased, even for systems with small heat capacities. Stability is then limited only by the Hamiltonian part of the dynamics, which suggests resorting to multiple-timestep strategies in which the stochastic part is integrated less frequently than the Hamiltonian one.
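The core mechanism, one Metropolis-Hastings step on an effective single-variable energy dynamics, can be sketched as follows (the exponential target density and the parameters are illustrative assumptions, not the paper's fluctuation/dissipation dynamics):

```python
import numpy as np

rng = np.random.default_rng(5)

def metropolis_energy_step(e, dt, kappa=1.0, beta=1.0):
    # One Metropolis-Hastings step approximating an effective
    # single-variable stochastic dynamics on an internal energy e.
    # Illustrative target density ~ exp(-beta*e) on e > 0: proposals
    # below zero have zero target density and are always rejected, so
    # e can never become negative -- the by-construction guarantee
    # described in the abstract.
    prop = e + rng.normal(0.0, np.sqrt(2.0 * kappa * dt))
    if prop <= 0.0:
        return e                        # reject: keeps e positive
    if rng.random() < min(1.0, np.exp(-beta * (prop - e))):
        return prop                     # accept
    return e                            # reject

e, traj = 1.0, []
for _ in range(20_000):
    e = metropolis_energy_step(e, dt=0.05)
    traj.append(e)
traj = np.array(traj)
# Every sampled energy stays positive; the mean approaches 1/beta.
```

An explicit Euler-Maruyama step on the same dynamics could overshoot to negative energies for large dt; the Metropolis rejection removes that failure mode entirely, which is why the admissible timestep grows.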
Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients
NASA Astrophysics Data System (ADS)
Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.
2017-09-01
We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty-one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang
A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface toward the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme in the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code, with a first-order convergence rate, was updated so that it can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict the angular velocity of a particle correctly. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotating and rotating spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, at both low and moderate particle Reynolds numbers, to compare the simulated results with literature results and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from the extensive simulation results to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers, and it is essentially unchanged as the angle of the rotating axis varies.
Laser cooling of molecular anions.
Yzombard, Pauline; Hamamda, Mehdi; Gerber, Sebastian; Doser, Michael; Comparat, Daniel
2015-05-29
We propose a scheme for laser cooling of negatively charged molecules. We briefly summarize the requirements for such laser cooling and identify a number of potential candidates. A detailed computational study of C₂⁻, the most studied molecular anion, is carried out. Simulations of 3D laser cooling in the gas phase show that this molecule could be cooled down to below 1 mK in only a few tens of milliseconds, using standard lasers. Sisyphus cooling, where no photodetachment process is present, as well as Doppler laser cooling of trapped C₂⁻, are also simulated. This cooling scheme has an impact on the study of cold molecules, molecular anions, charged particle sources, and antimatter physics.
Uncertainty in aerosol hygroscopicity resulting from semi-volatile organic compounds
NASA Astrophysics Data System (ADS)
Goulden, Olivia; Crooks, Matthew; Connolly, Paul
2018-01-01
We present a novel method of exploring the effect of uncertainties in aerosol properties on cloud droplet number using existing cloud droplet activation parameterisations. Aerosol properties of a single involatile particle mode are randomly sampled within an uncertainty range, and the resulting maximum supersaturations and critical diameters are calculated using the cloud droplet activation scheme. Hygroscopicity parameters are subsequently derived, and their mean and uncertainty are found to be comparable to experimental observations. A recently proposed cloud droplet activation scheme that includes the effects of co-condensation of semi-volatile organic compounds (SVOCs) onto a single lognormal mode of involatile particles is also considered. In addition to the uncertainties associated with the involatile particles, the concentrations, volatility distributions and chemical composition of the SVOCs are randomly sampled and hygroscopicity parameters are derived using the cloud droplet activation scheme. The inclusion of SVOCs is found to have a significant effect on the hygroscopicity and contributes a large uncertainty. For non-volatile particles that are effective cloud condensation nuclei, the co-condensation of SVOCs reduces their actual hygroscopicity by approximately 25 %. A new concept of an effective hygroscopicity parameter is introduced, which can efficiently simulate the effect of SVOCs on cloud droplet number concentration without directly modelling the organic compounds. These effective hygroscopicities can be as much as a factor of 2 higher than those of the non-volatile particles onto which the volatile organic compounds condense.
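The sampling-and-inversion workflow described above can be sketched with the standard single-parameter κ-Köhler relation, inverting the critical supersaturation for κ directly in place of a full activation scheme. The uncertainty ranges, constants, and the direct inversion are illustrative assumptions, not the paper's parameterisation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Kelvin coefficient A = 4*sigma*Mw/(R*T*rho_w), SI units, ~293 K
sigma, Mw, R, T, rho_w = 0.072, 0.018, 8.314, 293.0, 1000.0
A = 4.0 * sigma * Mw / (R * T * rho_w)          # ~2.1e-9 m

n = 10_000
# hypothetical uncertainty ranges for the activation-scheme outputs
s_c = rng.uniform(0.001, 0.003, n)              # critical supersaturation (fraction)
D_d = rng.uniform(100e-9, 200e-9, n)            # dry diameter, m

# single-parameter kappa-Koehler inversion of s_c = sqrt(4 A^3 / (27 kappa D_d^3))
kappa = 4.0 * A**3 / (27.0 * D_d**3 * s_c**2)

print(f"kappa mean {kappa.mean():.3f}, relative spread {kappa.std() / kappa.mean():.2f}")
```

The spread of the derived κ values then quantifies how the assumed input uncertainty propagates into the hygroscopicity parameter.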
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Fan, Liang-Shih
2014-07-01
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with a further improvement: the implementation of high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge in implementing high-order Runge-Kutta schemes in the LBM is that flow information such as density and velocity cannot be directly obtained at a fractional time step, since the LBM only provides flow information at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and -0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotating spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the magnitude of the drag force at practical rotating speeds of the spheres.
This finding may lead to more comprehensive studies of the effect of particle rotation on fluid-solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge-Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of existing methods in the literature, including previous IB-LBMs and methods combining the IBM with traditional incompressible Navier-Stokes solvers.
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While the DSMC is well suited to parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles will change dramatically. A load balance based upon the initial distribution of particles will break down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weight information, a repartitioning based upon the principle that each processor handles an approximately equal total number of simulator particles has been achieved. The computation pauses several times to renew the total number of simulator particles in each processor and repartition the whole domain; thus, load balance across the processor array is maintained for the duration of the computation, and the parallel efficiency can be improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, for both cases, the computational time can be reduced by about 50%.
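The weighting idea is separable from METIS itself: taking the current number of simulator particles per cell as the weight, a greedy sketch balances processor loads. METIS additionally respects cell adjacency to limit communication, which this toy version ignores:

```python
import heapq

def repartition(cell_weights, n_procs):
    """Greedy weighted partition: assign each cell (weight = current
    number of simulator particles in it) to the least-loaded processor.
    Processing the heaviest cells first tightens the balance."""
    heap = [(0, p, []) for p in range(n_procs)]   # (load, proc id, cell list)
    heapq.heapify(heap)
    for cell, w in sorted(enumerate(cell_weights), key=lambda kv: -kv[1]):
        load, p, cells = heapq.heappop(heap)      # pop least-loaded processor
        cells.append(cell)
        heapq.heappush(heap, (load + w, p, cells))
    return {p: (load, cells) for load, p, cells in heap}

parts = repartition([90, 10, 40, 40, 5, 15], n_procs=2)
print(sorted(load for load, _ in parts.values()))  # [100, 100]: near-perfect balance
```

Re-running this whenever the per-cell particle counts have drifted mirrors the periodic repartitioning described in the abstract.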
Multiscale simulation of molecular processes in cellular environments.
Chiricotto, Mara; Sterpone, Fabio; Derreumaux, Philippe; Melchionna, Simone
2016-11-13
We describe recent advances in studying biological systems via multiscale simulations. Our scheme is based on a coarse-grained representation of the macromolecules and a mesoscopic description of the solvent. The dual technique handles the particles, the aqueous solvent and their mutual exchange of forces, resulting in a stable and accurate methodology that allows biosystems of unprecedented size to be simulated. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
Modeling the solute transport by particle-tracing method with variable weights
NASA Astrophysics Data System (ADS)
Jiang, J.
2016-12-01
The particle-tracing method is commonly used to simulate solute transport in fractured media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillations or yields a zero value for the concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with the weight W/Int(W/C). If the weight W is less than C, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting the weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillations and improve the accuracy by orders of magnitude.
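The splitting and survival rules quoted above map directly to code. In this minimal sketch the walker representation and helper name are illustrative; note that both branches preserve the expected weight (n copies of W/n sum to W, and survival with probability W/C at weight C has mean W):

```python
import random

def adjust_weight(walker, C, rng=random):
    """Split-and-roulette re-weighting at a site with relative
    concentration C (0 < C <= 1); walker is a dict with a 'weight' key.
    Returns the list of walkers to keep tracking."""
    W = walker["weight"]
    if W > C:                                   # split into Int(W/C) copies
        n = int(W / C)
        return [dict(walker, weight=W / n) for _ in range(n)]
    if W < C:                                   # survive with probability W/C
        if rng.random() < W / C:
            return [dict(walker, weight=C)]
        return []                               # walker terminated
    return [walker]
```

In a full simulation this adjustment would be applied at each visited site, so heavy walkers fan out in low-concentration regions while unimportant ones are culled.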
Power Allocation and Outage Probability Analysis for SDN-based Radio Access Networks
NASA Astrophysics Data System (ADS)
Zhao, Yongxu; Chen, Yueyun; Mai, Zhiyuan
2018-01-01
In this paper, the performance of an SDN (Software Defined Network)-based access network architecture is analyzed with respect to the power allocation issue. A power allocation scheme based on a PSO-PA (Particle Swarm Optimization power allocation) algorithm is proposed, subject to a constant total power constraint, with the objective of minimizing the system outage probability. The entire access network resource configuration is controlled by the SDN controller, which sends the optimized power distribution factors to the base station source node (SN) and the relay node (RN). Simulation results show that the proposed scheme reduces the system outage probability with low complexity.
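As an illustration of the optimization step only, the sketch below runs a minimal PSO over a single power-split factor a ∈ [0, 1] (fraction of the total power assigned to the SN, the remainder to the RN). The outage objective is a hypothetical surrogate, not the paper's expression:

```python
import random

def pso_minimize(f, n_particles=20, iters=100, seed=1):
    """Minimal 1-D particle swarm: minimize f over x in [0, 1]."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = list(x), [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(1.0, max(0.0, x[i] + v[i]))  # keep split factor feasible
            fi = f(x[i])
            if fi < pval[i]:
                pbest[i], pval[i] = x[i], fi
                if fi < gval:
                    gbest, gval = x[i], fi
    return gbest, gval

# hypothetical outage surrogate for split factor a: direct link + relayed link
outage = lambda a: 1.0 / (1e-9 + 10.0 * a) + 1.0 / (1e-9 + 6.0 * (1.0 - a))
a_opt, p_out = pso_minimize(outage)
```

For this surrogate the optimum sits near a = √6/(√6 + √10) ≈ 0.44, which the swarm locates in a few dozen iterations.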
Implementation of non-axisymmetric mesh system in the gyrokinetic PIC code (XGC) for Stellarators
NASA Astrophysics Data System (ADS)
Moritaka, Toseo; Hager, Robert; Cole, Micheal; Chang, Choong-Seock; Lazerson, Samuel; Ku, Seung-Hoe; Ishiguro, Seiji
2017-10-01
Gyrokinetic simulation is a powerful tool to investigate turbulent and neoclassical transport based on the first principles of plasma kinetics. The gyrokinetic PIC code XGC has been developed for integrated simulations that cover the entire region of tokamaks. Complicated field-line and boundary structures must be taken into account to demonstrate edge plasma dynamics under the influence of the X-point and vessel components. XGC employs a gyrokinetic Poisson solver on an unstructured triangular mesh to deal with this difficulty. We introduce numerical schemes newly developed for XGC simulation in non-axisymmetric stellarator geometry. Triangular meshes in each poloidal plane are defined by the PEST poloidal angle in the VMEC equilibrium, so that they have the same regular structure in the straight-field-line coordinate. The electric charge of a marker particle is distributed to the triangles specified by field-following projection onto the neighboring poloidal planes. 3D spline interpolation on a cylindrical mesh is also used to obtain the equilibrium magnetic field at the particle position. These schemes capture the anisotropic plasma dynamics and the resulting potential structure with high accuracy. The triangular meshes can connect smoothly to unstructured meshes in the edge region. We will present validation tests in the core region of the Large Helical Device and discuss future challenges toward edge simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100
2015-01-15
In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 or less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-11-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with a primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Similar differences have been observed in adaptive-mesh-refinement codes with CT and smoothed-particle hydrodynamics codes with divergence cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
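The key property of a vector-potential-based CT scheme, that B = ∇ × A is divergence-free to machine precision when the same discrete operators are used for curl and divergence, can be checked on a uniform periodic grid. AREPO's scheme lives on a moving unstructured mesh; this is only a sketch of the discrete identity div(curl A) = 0:

```python
import numpy as np

rng = np.random.default_rng(2)
N, h = 16, 1.0 / 16

def D(f, axis):
    """Forward difference with periodic wrap-around."""
    return (np.roll(f, -1, axis=axis) - f) / h

# arbitrary (random) vector potential A on the periodic grid
Ax, Ay, Az = rng.standard_normal((3, N, N, N))

# B = curl A, built with the same difference operator as the divergence
Bx = D(Az, 1) - D(Ay, 2)
By = D(Ax, 2) - D(Az, 0)
Bz = D(Ay, 0) - D(Ax, 1)

divB = D(Bx, 0) + D(By, 1) + D(Bz, 2)
print(np.abs(divB).max())          # zero up to floating-point round-off
```

The cancellation works because the discrete difference operators along different axes commute, exactly mirroring the continuum identity ∇ · (∇ × A) = 0.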
A divergence-cleaning scheme for cosmological SPMHD simulations
NASA Astrophysics Data System (ADS)
Stasyszyn, F. A.; Dolag, K.; Beck, A. M.
2013-01-01
In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from ∇ · B-related errors, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. to give good results and prevent numerical artefacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularizations give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergenceless by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical ∇ · B errors at small scales.
Lagrangian Particle Tracking Simulation for Warm-Rain Processes in Quasi-One-Dimensional Domain
NASA Astrophysics Data System (ADS)
Kunishima, Y.; Onishi, R.
2017-12-01
Conventional cloud simulations are based on the Euler method and compute each microphysics process in a stochastic way, assuming infinite numbers of particles within each numerical grid cell. They therefore cannot provide the Lagrangian statistics of individual particles in cloud microphysics (i.e., aerosol particles, cloud particles, and rain drops), nor address the statistical fluctuations due to the finite number of particles. We here simulate the entire precipitation process of warm rain while tracking individual particles. We use the Lagrangian Cloud Simulator (LCS), which is based on the Euler-Lagrangian framework: flow motion and scalar transport are computed with the Euler method, and particle motion with the Lagrangian one. The LCS tracks particle motions and collision events individually, considering the hydrodynamic interaction between approaching particles with a superposition method; that is, it can directly represent the collisional growth of cloud particles. Taking account of the hydrodynamic interaction is essential for trustworthy collision detection. In this study, we newly developed a stochastic model based on Twomey cloud condensation nuclei (CCN) activation for the Lagrangian tracking simulation and integrated it into the LCS. Coupled with the Euler computation of the water vapour and temperature fields, the initiation and condensational growth of water droplets were computed in the Lagrangian way. We applied the integrated LCS to a kinematic simulation of warm-rain processes in a vertically elongated domain of, at the largest, 0.03 × 0.03 × 3000 m³ with horizontal periodicity. Aerosol particles with a realistic number density, 5 × 10⁷ m⁻³, were evenly distributed over the domain in the initial state. A prescribed updraft at the early stage initiated the development of a precipitating cloud.
We have confirmed that the obtained bulk statistics agree fairly well with those from a conventional spectral-bin scheme for a vertical column domain. The centre of the discussion will be the Lagrangian statistics, which are collected from the individual behaviour of the tracked particles.
NASA Astrophysics Data System (ADS)
Xiao, Hui; Yin, Yan; Jin, Lianji; Chen, Qian; Chen, Jinghua
2015-08-01
The Weather Research and Forecasting (WRF) mesoscale model coupled with a detailed bin microphysics scheme is used to investigate the impact of aerosol particles serving as cloud condensation nuclei and ice nuclei on orographic clouds and precipitation. A mixed-phase orographic cloud developing under two aerosol scenarios (a typical continental background and a relatively polluted urban condition) and ice nuclei over an idealized mountain is simulated. The results show that, when the initial aerosol condition is changed from the relatively clean case to the polluted scenario, more droplets are activated, leading to a delay in precipitation, but the precipitation amount over the terrain is increased by about 10%. A detailed analysis of the microphysical processes indicates that ice-phase particles play an important role in cloud development, and their contribution to precipitation becomes more important with increasing aerosol particle concentrations. The growth of ice-phase particles through riming and the Wegener-Bergeron-Findeisen process is more effective under more polluted conditions, mainly due to the increased number of droplets with diameters of 10-30 µm. Sensitivity tests also show that a tenfold increase in the concentration of ice crystals formed from ice nucleation leads to about a 7% increase in precipitation, and that the sensitivity of the precipitation to changes in the concentration and size distribution of aerosol particles becomes less pronounced when the concentration of ice crystals is also increased.
Numerical Analysis of Dusty-Gas Flows
NASA Astrophysics Data System (ADS)
Saito, T.
2002-02-01
This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used to validate the accuracy and performance of the code. The code is then extended to simulate two-dimensional problems. Since the interactions between the gas and particle phases are calculated with an operator splitting technique, numerical schemes can be chosen independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on an SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue, and the code implementation on the Origin2000 is also described. Flow profiles of both the gas and the solid particles behind a steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions for unsteady multidimensional simulations.
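The appeal of a semi-analytical dust-phase update within operator splitting can be illustrated with a linear-drag relaxation substep that is exact for any step size. The drag law, the mass-loading parameter and the function name are assumptions for illustration, not the paper's formulation:

```python
import math

def drag_relax(u, v, eps, tau, dt):
    """Exact relaxation of gas (u) and dust (v) velocities over one
    splitting step for linear drag dv/dt = (u - v)/tau with dust mass
    loading eps: total momentum u + eps*v is conserved while the
    velocity difference decays as exp(-dt/tau_eff)."""
    tau_eff = tau / (1.0 + eps)                # coupled relaxation time
    ucm = (u + eps * v) / (1.0 + eps)          # mixture (conserved) velocity
    dv = (v - u) * math.exp(-dt / tau_eff)     # decayed velocity difference
    return ucm - eps * dv / (1.0 + eps), ucm + dv / (1.0 + eps)
```

Because the substep is exact, it stays stable even when dt greatly exceeds the drag relaxation time, which is exactly where an explicit drag update would fail.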
Direct Lagrangian tracking simulations of particles in vertically-developing atmospheric clouds
NASA Astrophysics Data System (ADS)
Onishi, Ryo; Kunishima, Yuichi
2017-11-01
We have been developing the Lagrangian Cloud Simulator (LCS), which follows the so-called Euler-Lagrangian framework, where flow motion and scalar transport (i.e., temperature and humidity) are computed with the Euler method and particle motion with the Lagrangian method. The LCS simulation considers the hydrodynamic interaction between approaching particles for robust collision detection. This leads to reliable simulations of the collisional growth of cloud droplets. Recently the activation process, in which aerosol particles become tiny liquid droplets, has been implemented in the LCS. The present LCS can therefore consider the whole warm-rain precipitation process: activation, condensation, collision and drop precipitation. In this talk, after briefly introducing the LCS, we will show kinematic simulations using the LCS for a quasi-one-dimensional domain, i.e., a vertically elongated 3D domain. They are compared with one-dimensional kinematic simulations using a spectral-bin cloud microphysics scheme, which is based on the Euler method. The comparisons show fairly good agreement with small discrepancies, the sources of which will be presented. The Lagrangian statistics, obtained for the first time for the vertical domain, will be the centre of discussion. This research was supported by MEXT as ``Exploratory Challenge on Post-K computer'' (Frontiers of Basic Science: Challenging the Limits).
NASA Astrophysics Data System (ADS)
Keslake, Tim; Chipperfield, Martyn; Mann, Graham; Flemming, Johannes; Remy, Sam; Dhomse, Sandip; Morgan, Will
2016-04-01
The C-IFS (Composition Integrated Forecast System), developed under the MACC series of projects and to be continued under the Copernicus Atmosphere Monitoring Service, provides global operational forecasts and re-analyses of atmospheric composition at high spatial resolution (T255, ~80 km). Currently there are two aerosol schemes implemented within C-IFS: a mass-based scheme with externally mixed particle types, and an aerosol microphysics scheme (GLOMAP-mode). The simpler mass-based scheme is the current operational system, also used in the existing system to assimilate satellite measurements of aerosol optical depth (AOD) for improved forecast capability. The microphysical GLOMAP scheme has now been implemented and evaluated in the latest C-IFS cycle alongside the mass-based scheme. The upgrade to the microphysical scheme provides higher-fidelity aerosol-radiation and aerosol-cloud interactions, accounting for global variations in size distribution and mixing state, and additional aerosol properties such as cloud condensation nuclei concentrations. The new scheme will also provide increased aerosol information when used as lateral boundary conditions for regional air quality models. Here we present a series of experiments highlighting the influence and accuracy of the two different aerosol schemes and the impact of MODIS AOD assimilation. In particular, we focus on the influence of biomass burning emissions on aerosol properties in the Amazon, comparing to ground-based and aircraft observations from the 2012 SAMBBA campaign. Biomass burning can affect regional air quality, human health, regional weather and the local energy budget. Tropical biomass burning generates particles primarily composed of particulate organic matter (POM) and black carbon (BC), with the local ratio of these two constituents often determining the properties and subsequent impacts of the aerosol particles.
Therefore, the model's ability to capture the concentrations of these two carbonaceous aerosol types during the tropical dry season is essential for quantifying these wide-ranging impacts. Comparisons with SAMBBA aircraft observations show that while both schemes underestimate POM and BC mass concentrations, the GLOMAP scheme provides a more accurate simulation. When satellite AOD is assimilated into the GEMS-AER scheme, the model is successfully adjusted, capturing the observed mass concentrations to a good degree of accuracy.
Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2016-11-01
We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instant as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD, with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and the formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by a National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
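The core GFD idea, recovering derivatives at a node from scattered neighbors by a weighted least-squares fit of a local Taylor expansion, can be sketched at first order. The paper uses higher-order stencils with 16-neighbor clouds; the weight choice and function name here are illustrative:

```python
import numpy as np

def gfd_gradient(xc, pts, vals):
    """First-order GFD estimate of grad f at xc from scattered neighbors:
    fit f(p) ≈ f(xc) + g·(p - xc) by weighted least squares with
    Gaussian weights scaled to the cloud radius."""
    dx = pts - xc                                   # (n, 2) neighbor offsets
    r = np.linalg.norm(dx, axis=1)
    w = np.exp(-((r / r.max()) ** 2))               # Gaussian kernel weights
    A = np.hstack([np.ones((len(pts), 1)), dx]) * w[:, None]
    b = vals * w
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]                                 # gradient components

# a linear field f = 2x + 3y is reproduced exactly by the first-order fit
rng = np.random.default_rng(3)
pts = rng.random((16, 2))
g = gfd_gradient(np.array([0.5, 0.5]), pts, 2 * pts[:, 0] + 3 * pts[:, 1])
```

Exact reproduction of linear fields regardless of the weighting is the consistency property that makes such meshfree stencils viable on irregular, moving point clouds.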
The comet Halley meteoroid stream: just one more model
NASA Astrophysics Data System (ADS)
Ryabova, G. O.
2003-05-01
The present attempt to simulate the formation and evolution of the comet Halley meteoroid stream is based on a tentative physical model of dust ejection of large particles from comet Halley. Model streams consisting of 500-5000 test particles have been constructed according to the following ejection scheme. The particles are ejected from the nucleus along the cometary orbit (r < 9 au) within the sunward 70° cone, and the rate of ejection has been taken as proportional to r⁻⁴. Two kinds of spherical particles have been considered: 1 and 0.001 g with density equal to 0.25 g cm⁻³. Ejections have been simulated for 1404 BC, 141 AD and 837 AD. The equations of motion have been numerically integrated using the Everhart procedure. As a result, a complicated fine structure of the comet Halley meteoroid stream, consisting not of filaments but of layers, has been revealed.
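The ejection scheme above lends itself to simple rejection sampling. In this sketch the perihelion distance used to normalize the r⁻⁴ rate (~0.587 au for Halley) and the reading of 70° as the cone half-angle are assumptions:

```python
import math
import random

rng = random.Random(7)
R_MIN, R_MAX = 0.587, 9.0           # perihelion and ejection cut-off, au
HALF_ANGLE = math.radians(70.0)     # sunward cone, taken as half-angle here

def sample_ejection(r):
    """One ejection trial at heliocentric distance r (au): accept with
    probability (R_MIN/r)**4, i.e. rate ∝ r**-4 normalized at perihelion,
    and return a unit direction (z toward the Sun) drawn uniformly inside
    the cone; returns None on rejection."""
    if rng.random() > (R_MIN / r) ** 4:
        return None
    cos_t = 1.0 - rng.random() * (1.0 - math.cos(HALF_ANGLE))  # uniform in cos
    sin_t = math.sqrt(1.0 - cos_t**2)
    phi = 2.0 * math.pi * rng.random()
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

directions = [d for d in (sample_ejection(1.0) for _ in range(20000)) if d]
print(len(directions) / 20000)      # acceptance ≈ (R_MIN / 1 au)**4 ≈ 0.12
```

Each accepted direction would then seed one test particle whose orbit is integrated forward, as in the stream model described above.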
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikov, Mikhail; Ackerman, Andrew; Avramov, Alex
Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of the processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that the ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP) and potential cloud dissipation, in agreement with earlier studies. By comparing simulations with the same microphysics coupled to different dynamical cores, as well as the same dynamics coupled to different microphysics schemes, it is found that the ice water path (IWP) is mainly controlled by ice microphysics, while the inter-model differences in LWP are largely driven by the physics and numerics of the dynamical cores. In contrast to previous intercomparisons, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSD) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of the ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming an exponential ice PSD underestimate ice growth by vapor deposition and overestimate the mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case.
The study of sound wave propagation in rarefied gases using unified gas-kinetic scheme
NASA Astrophysics Data System (ADS)
Wang, Rui-Jie; Xu, Kun
2012-08-01
Sound wave propagation in rarefied monatomic gases is simulated using the newly developed unified gas-kinetic scheme (UGKS). The numerical calculations are carried out for a wide range of wave oscillation frequencies. The corresponding rarefaction parameter is defined as the ratio of the sound wave frequency to the intermolecular collision frequency. The simulation covers flow regimes from the continuum to the free-molecular one. The treatment of the oscillating-wall boundary condition and the methods for evaluating the absorption coefficient and sound speed are presented in detail. The simulation results from the UGKS are compared with Navier-Stokes solutions, direct simulation Monte Carlo (DSMC) results, and experimental measurements. Good agreement with the experimental data is obtained over the whole range of flow regimes, for Knudsen numbers from 0.08 to 32. The current study clearly demonstrates the capability of the UGKS in capturing sound wave propagation and its usefulness for rarefied flow studies.
Protocol for fermionic positive-operator-valued measures
NASA Astrophysics Data System (ADS)
Arvidsson-Shukur, D. R. M.; Lepage, H. V.; Owen, E. T.; Ferrus, T.; Barnes, C. H. W.
2017-11-01
In this paper we present a protocol for the implementation of a positive-operator-valued measure (POVM) on massive fermionic qubits. We present methods for implementing nondispersive qubit transport, spin rotations, and spin polarizing beam-splitter operations. Our scheme attains linear-optics-like control of the spatial extent of the qubits by considering ground-state electrons trapped in the minima of surface acoustic waves in semiconductor heterostructures. Furthermore, we numerically simulate a high-fidelity POVM that carries out Procrustean entanglement distillation in the framework of our scheme, using experimentally realistic potentials. Our protocol can be applied not only to pure ensembles with particle pairs of known identical entanglement, but also to realistic ensembles of particle pairs with a distribution of entanglement entropies. This paper provides an experimentally realizable design for future quantum technologies.
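For reference, the defining POVM conditions (positive semidefinite elements that sum to the identity) are easy to check numerically. The sketch below is our illustration and is unrelated to the surface-acoustic-wave implementation: it builds the standard three-outcome "trine" qubit POVM and verifies both conditions.

```python
import numpy as np

def trine_povm():
    """Three-outcome 'trine' POVM for a single qubit: E_k = (2/3)|psi_k><psi_k|
    with the |psi_k> spaced 120 degrees apart on a great circle of the Bloch sphere."""
    elements = []
    for k in range(3):
        theta = 2.0 * np.pi * k / 3.0
        psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
        elements.append((2.0 / 3.0) * np.outer(psi, psi))
    return elements

def is_valid_povm(elements, tol=1e-12):
    """A POVM requires positive semidefinite elements that sum to the identity."""
    total = sum(elements)
    if not np.allclose(total, np.eye(total.shape[0]), atol=tol):
        return False
    return all(np.all(np.linalg.eigvalsh(E) >= -tol) for E in elements)
```

Outcome probabilities for any density matrix ρ are Tr(E_k ρ) and sum to one by construction.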
Modeling light scattering by mineral dust particles using spheroids
NASA Astrophysics Data System (ADS)
Merikallio, Sini; Nousiainen, Timo
Suspended dust particles have a considerable influence on light scattering in both terrestrial and planetary atmospheres and can therefore have a large effect on the interpretation of remote sensing measurements. Assuming dust particles to be spherical is known to produce inaccurate results when modeling the optical properties of real mineral dust particles, yet this approximation is widely used for its simplicity. Here, we simulate light scattering by mineral dust particles using a distribution of model spheroids. This is done by comparing scattering matrices calculated from the dust optical database of Dubovik et al. (2006) with those measured in the laboratory by Volten et al. (2001). Wavelengths of 441.6 nm and 632.8 nm and refractive indices of Re = 1.55-1.7 and Im = 0.001-0.01 were adopted in this study. Overall, spheroids are found to fit the measurements significantly better than Mie spheres. Further, we confirm that the shape-distribution parametrization developed in Nousiainen et al. (2006) significantly improves the accuracy of simulated single scattering for small mineral dust particles. The spheroid scheme should therefore yield more reliable interpretations of remote sensing data from dusty planetary atmospheres. While the spheroidal scheme is superior to spheres in remote sensing applications, its performance is far from perfect, especially for samples with large particles; additional advances are clearly possible. Further studies of the Martian atmosphere are currently under way. Dubovik et al. (2006), Application of spheroid models to account for aerosol particle nonsphericity in remote sensing of desert dust, JGR, Vol. 111, D11208. Volten et al. (2001), Scattering matrices of mineral aerosol particles at 441.6 nm and 632.8 nm, JGR, Vol. 106, No. D15, pp. 17375-17401. Nousiainen et al. (2006), Light scattering modeling of small feldspar aerosol particles using polyhedral prisms and spheroids, JQSRT 101, pp. 471-487.
Impact of Aerosol Processing on Orographic Clouds
NASA Astrophysics Data System (ADS)
Pousse-Nottelmann, Sara; Zubler, Elias M.; Lohmann, Ulrike
2010-05-01
Aerosol particles undergo significant modifications during their residence time in the atmosphere. Physical processes like coagulation, coating and water uptake, and aqueous surface chemistry alter the aerosol size distribution and composition. Clouds play a primary role here, as physical and chemical processing inside cloud droplets contributes considerably to these changes. A previous study estimates that on global average atmospheric particles are cycled three times through a cloud before being removed from the atmosphere [1]. An explicit and detailed treatment of cloud-borne particles has been implemented in the regional weather forecast and climate model COSMO-CLM. The employed model version includes a two-moment cloud microphysical scheme [2] that has been coupled to the aerosol microphysical scheme M7 [3] as described by Muhlbauer and Lohmann, 2008 [4]. So far, the formation, transfer and removal of cloud-borne aerosol number and mass were not considered in the model. Following the parameterization for cloud-borne particles developed by Hoose et al., 2008 [5], a distinction between in-droplet and in-crystal particles is made to more physically account for processes in mixed-phase clouds, such as the Wegener-Bergeron-Findeisen process and contact and immersion freezing. In our model, this approach has been extended to allow for aerosol particles in five different hydrometeors: cloud droplets, rain drops, ice crystals, snowflakes and graupel. We account for nucleation scavenging, freezing and melting processes, autoconversion, accretion, aggregation, riming and self-collection, collisions between interstitial aerosol particles and hydrometeors, ice multiplication, sedimentation, evaporation and sublimation. The new scheme allows an evaluation of the cloud cycling of aerosol particles by tracking the particles even when scavenged into hydrometeors. 
Global simulations of aerosol processing in clouds have recently been conducted by Hoose et al. [6]. Our investigation regarding the influence of aerosol processing will focus on the regional scale using a cloud-system resolving model with a much higher resolution. Emphasis will be placed on orographic mixed-phase precipitation. Different two-dimensional simulations of idealized orographic clouds will be conducted to estimate the effect of aerosol processing on orographic cloud formation and precipitation. Here, cloud lifetime, location and extent as well as the cloud type will be of particular interest. In a supplementary study, the new parameterization will be compared to observations of total and interstitial aerosol concentrations and size distribution at the remote high alpine research station Jungfraujoch in Switzerland. In addition, our simulations will be compared to recent simulations of aerosol processing in warm, mixed-phase and cold clouds, which have been carried out at the location of Jungfraujoch station [5]. References: [1] Pruppacher & Jaenicke (1995), The processing of water vapor and aerosols by atmospheric clouds, a global estimate, Atmos. Res., 38, 283-295. [2] Seifert & Beheng (2006), A two-moment microphysics parameterization for mixed-phase clouds. Part 1: Model description, Meteorol. Atmos. Phys., 92, 45-66. [3] Vignati et al. (2004), An efficient size-resolved aerosol microphysics module for large-scale transport models, J. Geophys. Res., 109, D22202. [4] Muhlbauer & Lohmann (2008), Sensitivity studies of the role of aerosols in warm-phase orographic precipitation in different flow regimes, J. Atmos. Sci., 65, 2522-2542. [5] Hoose et al. (2008), Aerosol processing in mixed-phase clouds in ECHAM5-HAM: Model description and comparison to observations, J. Geophys. Res., 113, D07210. [6] Hoose et al. (2008), Global simulations of aerosol processing in clouds, Atmos. Chem. Phys., 8, 6939-6963.
Study of premixing phase of steam explosion with JASMINE code in ALPHA program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu
The premixing phase of steam explosion has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on a multi-dimensional multi-phase thermal-hydraulics code, MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of the premixing phenomena. The first stage of code validation was performed by analyzing two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot-particle experiment by Angelini et al. (1993) (MAGICO). The code reproduced the experiments reasonably well. The effectiveness of the TVD scheme employed in the code was also demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdarnini, R., E-mail: valda@sissa.it
In this paper, we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular, we consider the Gresho-Chan vortex problem, the growth of Kelvin-Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disk problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy, we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.
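The zeroth-order inconsistency mentioned above, and how a renormalized gradient removes it, can be seen in one dimension. The sketch below is a 1D caricature of the correction idea, not the paper's integral-approach formulation: it compares the standard SPH gradient with a first-order-consistent renormalized gradient on irregularly spaced particles, where the latter reproduces the gradient of a linear field exactly.

```python
import numpy as np

def gaussian_kernel_grad(dx, h):
    """d/dx_i of a 1D Gaussian kernel W(x_i - x_j, h), evaluated at dx = x_i - x_j."""
    w = np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))
    return -2.0 * dx / h**2 * w

def sph_gradients(x, f, V, h):
    """Standard and renormalized SPH gradients in 1D. The renormalization divides
    by the estimator's response to a linear field, the 1D analogue of the
    correction-matrix idea used by integral-approach formulations."""
    std = np.empty_like(f)
    corr = np.empty_like(f)
    for i in range(len(x)):
        dW = gaussian_kernel_grad(x[i] - x, h)       # dW_ij/dx_i for all j
        num = np.sum(V * (f - f[i]) * dW)            # standard difference form
        den = np.sum(V * (x - x[i]) * dW)            # response to f(x) = x
        std[i] = num
        corr[i] = num / den
    return std, corr
```

For a linear field the correction is exact regardless of particle disorder, while the standard estimate inherits the local density fluctuations.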
An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de
We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Trapped neutrino components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme by coupling it to an axisymmetric Eulerian code and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. The neutrino treatment presented here is therefore ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
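The leakage idea can be caricatured as follows: in optically thin regions the loss rate is production-limited, in optically thick regions it is diffusion-limited, and a smooth interpolation bridges the two. The sketch below uses a generic harmonic-mean interpolation per energy bin; the ASL paper's actual interpolation is more elaborate, so this is only the limiting behavior.

```python
def effective_loss_rate(r_prod, r_diff):
    """Interpolate production-limited (optically thin) and diffusion-limited
    (optically thick) neutrino loss rates with a harmonic mean, bin by bin
    in the discretized neutrino energy."""
    return [p * d / (p + d) for p, d in zip(r_prod, r_diff)]
```

The harmonic mean recovers the production rate when diffusion is fast (thin limit) and the diffusion rate when diffusion is slow (thick limit), and never exceeds either.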
NASA Astrophysics Data System (ADS)
Arabas, S.; Jaruga, A.; Pawlowska, H.; Grabowski, W. W.
2012-12-01
Clouds may influence aerosol characteristics of their environment. The relevant processes include wet deposition (rainout or washout) and cloud condensation nuclei (CCN) recycling through evaporation of cloud droplets and drizzle drops. Recycled CCN physicochemical properties may be altered if the evaporated droplets go through collisional growth or irreversible chemical reactions (e.g. SO2 oxidation). The key challenge of representing these processes in a numerical cloud model stems from the need to track properties of activated CCN throughout the cloud lifecycle. Lack of such "memory" characterises the so-called bulk, multi-moment as well as bin representations of cloud microphysics. In this study we apply the particle-based scheme of Shima et al. 2009. Each modelled particle (aka super-droplet) is a numerical proxy for a multiplicity of real-world CCN, cloud, drizzle or rain particles of the same size, nucleus type,and position. Tracking cloud nucleus properties is an inherent feature of the particle-based frameworks, making them suitable for studying aerosol-cloud-aerosol interactions. The super-droplet scheme is furthermore characterized by linear scalability in the number of computational particles, and no numerical diffusion in the condensational and in the Monte-Carlo type collisional growth schemes. The presentation will focus on processing of aerosol by a drizzling stratocumulus deck. The simulations are carried out using a 2D kinematic framework and a VOCALS experiment inspired set-up (see http://www.rap.ucar.edu/~gthompsn/workshop2012/case1/).
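The key bookkeeping trick of the super-droplet scheme is that a coalescence event updates multiplicities rather than creating new computational particles. A minimal sketch in the spirit of Shima et al. (2009), for a single pre-selected pair (the kernel-based probabilistic pair sampling and removal of zero-multiplicity droplets are omitted):

```python
from dataclasses import dataclass

@dataclass
class SuperDroplet:
    xi: int        # multiplicity: number of real droplets represented
    volume: float  # volume of each represented droplet

def coalesce_pair(a, b):
    """Coalesce two super-droplets once: the droplet with the smaller multiplicity
    absorbs partners drawn from the larger one, so the total multiplicity-weighted
    water volume is conserved exactly and the particle count stays fixed."""
    j, k = (a, b) if a.xi > b.xi else (b, a)   # j has the larger multiplicity
    if j.xi > k.xi:
        j.xi -= k.xi
        k.volume += j.volume                   # k now represents merged droplets
    else:                                      # equal multiplicities: split evenly
        half = j.xi // 2
        j.xi, k.xi = half, j.xi - half
        j.volume = k.volume = j.volume + k.volume
```

Because each super-droplet keeps its own attributes, nucleus properties survive collisional growth, which is exactly the "memory" the abstract refers to.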
Numerical Modeling of Suspension HVOF Spray
NASA Astrophysics Data System (ADS)
Jadidi, M.; Moghtadernejad, S.; Dolatabadi, A.
2016-02-01
A three-dimensional two-way coupled Eulerian-Lagrangian scheme is used to simulate the suspension high-velocity oxy-fuel spraying process. The mass, momentum, energy, and species equations are solved together with the realizable k-ɛ turbulence model to simulate the gas phase. The suspension is assumed to be a mixture of solid particles [mullite powder (3Al2O3·2SiO2)], ethanol, and ethylene glycol. The process involves premixed combustion of oxygen-propylene, and non-premixed combustion of oxygen-ethanol and oxygen-ethylene glycol. A one-step global reaction is used for each of these reactions, together with an eddy dissipation model to compute the reaction rate. To simulate droplet breakup, the Taylor Analogy Breakup model is applied. After the completion of droplet breakup and solvent evaporation/combustion, the suspended solid particles are tracked through the domain to determine the characteristics of the coating particles. Numerical simulations are validated against the experimental results in the literature for the same operating conditions. Seven or possibly eight shock diamonds are captured outside the nozzle. In addition, good agreement between the predicted particle temperature, velocity, and diameter and the experiment is obtained. It is shown that as the standoff distance increases, the particle temperature and velocity decrease. Furthermore, a correlation is proposed to determine the spray cross-sectional diameter and estimate the particle trajectories as a function of standoff distance.
Statistical Analysis for Collision-free Boson Sampling.
Huang, He-Liang; Zhong, Han-Sen; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su
2017-11-10
Boson sampling is strongly believed to be intractable for classical computers but solvable with photons in linear optics, which has raised widespread interest in it as a rapid way to demonstrate quantum supremacy. However, because its solution is mathematically unverifiable, certifying the experimental results is a major difficulty in boson sampling experiments. Here, we develop a statistical analysis scheme to experimentally certify collision-free boson sampling. Numerical simulations are performed to show the feasibility and practicability of our scheme, and the effects of realistic experimental conditions are also considered, demonstrating that the proposed scheme is experimentally friendly. Moreover, our broad approach is expected to be generally applicable to the investigation of multi-particle coherent dynamics beyond boson sampling.
Longitudinal phase-space coating of beam in a storage ring
NASA Astrophysics Data System (ADS)
Bhat, C. M.
2014-06-01
In this Letter, I report on a novel scheme for beam stacking without any beam emittance dilution using a barrier rf system in synchrotrons. The general principle of the scheme called longitudinal phase-space coating, validation of the concept via multi-particle beam dynamics simulations applied to the Fermilab Recycler, and its experimental demonstration are presented. In addition, it has been shown and illustrated that the rf gymnastics involved in this scheme can be used in measuring the incoherent synchrotron tune spectrum of the beam in barrier buckets and in producing a clean hollow beam in longitudinal phase space. The method of beam stacking in synchrotrons presented here is the first of its kind.
NASA Technical Reports Server (NTRS)
Han, Mei; Braun, Scott A.; Olson, William S.; Persson, P. Ola G.; Bao, Jian-Wen
2009-01-01
Seen by the human eye, precipitation particles are commonly drops of rain, flakes of snow, or lumps of hail that reach the ground. Remote sensors and numerical models usually deal with information about large collections of rain, snow, and hail (or graupel, also called soft hail) in a volume of air. Therefore, the size and number of the precipitation particles, and how particles interact, evolve, and fall within the volume of air, need to be represented using physical laws and mathematical tools, which are often implemented as cloud and precipitation microphysical parameterizations in numerical models. To account for the complexity of precipitation physical processes, scientists have developed various types of such schemes in models. The accuracy of numerical weather forecasting may vary dramatically when different types of these schemes are employed. Therefore, systematic evaluations of cloud and precipitation schemes are of great importance for the improvement of weather forecasts. This study is one such endeavor; it pursues quantitative assessment of all the available cloud and precipitation microphysical schemes in a weather model (MM5) through comparison with the observations obtained by the National Aeronautics and Space Administration's (NASA's) and Japan Aerospace Exploration Agency's (JAXA's) Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and microwave imager (TMI). When satellite sensors (like PR or TMI) detect information from precipitation particles, they cannot directly observe the microphysical quantities (e.g., water species phase, density, size, and amount). Instead, they tell how much radiation is absorbed by rain, reflected away from the sensor by snow or graupel, or reflected back to the satellite. 
On the other hand, the microphysical quantities in the model are usually well represented in microphysical schemes and can be converted to radiative properties that can be directly compared to the corresponding PR and TMI observations. This study employs this method to evaluate the accuracy of the radiative properties simulated by the MM5 model with different microphysical schemes. It is found that the representations of particle density, size, and mass in the different schemes in the MM5 model determine the model's performance when predicting a winter storm over the eastern Pacific Ocean. Schemes lacking moderate-density particles (i.e. graupel), with snowflakes that are too large, or with excessive mass of snow or graupel lead to degraded prediction of the radiative properties as observed by the TRMM satellite. This study demonstrates the unique value of combining an active microwave sensor (PR) and a passive microwave sensor (TMI) onboard TRMM for assessing the accuracy of numerical weather forecasting. It improves our understanding of the physical and radiative properties of different types of precipitation particles and provides suggestions for better representation of cloud and precipitation processes in numerical models. It would, ultimately, contribute to answering questions like "Why did it not rain when the forecast said it would?"
Finite-β Split-weight Gyrokinetic Particle Simulation of Microinstabilities
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Lee, W. W.; Lewandowski, J. L. V.
2003-10-01
The finite-β split-weight gyrokinetic particle simulation scheme [1] has been implemented in two-dimensional slab geometry for the purpose of studying the effects of high temperature electrons on microinstabilities. Drift wave instabilities and ion temperature gradient modes are studied in both shearless slab and sheared slab geometries. The linear and nonlinear evolution of these modes, as well as the physics of microtearing, is compared with the results of Reynders [2] and Cummings [3]. [1] W. W. Lee, J. L. V. Lewandowski, T. S. Hahm, and Z. Lin, Phys. Plasmas 8, 4435 (2001). [2] J. V. W. Reynders, Ph.D. thesis, Princeton University (1992). [3] J. C. Cummings, Ph.D. thesis, Princeton University (1995).
An efficient (t,n) threshold quantum secret sharing without entanglement
NASA Astrophysics Data System (ADS)
Qin, Huawang; Dai, Yuewei
2016-04-01
An efficient (t,n) threshold quantum secret sharing (QSS) scheme is proposed. In our scheme, a hash function is used to check for eavesdropping, and no particles need to be published, so the particle utilization efficiency is a full 100%. No entanglement is used: the dealer uses single particles to encode the secret information, and the participants obtain the secret by measuring the single particles. Compared to existing schemes, ours is simpler and more efficient.
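The (t,n) threshold property itself is classical and is usually built on Shamir's polynomial secret sharing over a finite field. The sketch below is our illustrative classical analogue only; the proposed QSS scheme distributes shares via single quantum particles rather than classical polynomials.

```python
import random

P = 2**61 - 1  # a large prime modulus

def split_secret(secret, t, n, rng=None):
    """Shamir (t,n) sharing: the secret is the constant term of a random
    degree t-1 polynomial over GF(P); share i is the point (i, poly(i))."""
    rng = random.Random(0) if rng is None else rng
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); any t shares suffice."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares reconstruct the secret exactly; fewer than t leave it information-theoretically hidden.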
Swarming behavior of gradient-responsive Brownian particles in a porous medium.
Grančič, Peter; Štěpánek, František
2012-07-01
Active targeting by Brownian particles in a fluid-filled porous environment is investigated by computer simulation. The random motion of the particles is enhanced by diffusiophoresis with respect to concentration gradients of chemical signals released by the particles in the proximity of a target. The mathematical model, based on a combination of the Brownian dynamics method and a diffusion problem, is formulated in terms of key parameters that include the particle diffusiophoretic mobility and the signaling threshold (the distance from the target at which the particles release their chemical signals). The results demonstrate that even a relatively simple chemical signaling scheme can lead to complex collective behavior of the particles and can be a very efficient way of guiding a swarm of Brownian particles towards a target, similarly to the way colonies of living cells communicate via secondary messengers.
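A minimal 2D sketch of gradient-responsive Brownian motion follows. This is our toy setup, much simpler than the paper's model: a single fixed target at the origin with a quasi-steady signal field c(r) = 1/r, free space rather than a porous medium, and illustrative parameter values.

```python
import numpy as np

def simulate_swarm(n=200, steps=4000, dt=1e-3, D=0.05, mobility=4.0,
                   core=0.3, seed=1):
    """Overdamped Brownian particles drifting up the gradient of a quasi-steady
    signal concentration c(r) = 1/r released at a target at the origin:
        dx = mobility * grad(c) * dt + sqrt(2 D dt) * noise.
    Particles reaching the target core are held at its surface."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(1.5, 2.5, size=(n, 2))      # start away from the target
    for _ in range(steps):
        r = np.linalg.norm(pos, axis=1, keepdims=True)
        grad_c = -pos / r**3                      # gradient of c(r) = 1/r
        pos = (pos + mobility * grad_c * dt
               + np.sqrt(2.0 * D * dt) * rng.standard_normal(pos.shape))
        r = np.linalg.norm(pos, axis=1, keepdims=True)
        pos = pos * np.maximum(r, core) / r       # clamp to the core surface
    return pos
```

With the drift term switched off (mobility = 0) the swarm simply spreads diffusively; with it on, the particles collect at the target, which is the qualitative effect the paper quantifies.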
NASA Astrophysics Data System (ADS)
López-López, J. M.; Moncho-Jordá, A.; Schmitt, A.; Hidalgo-Álvarez, R.
2005-09-01
Binary diffusion-limited cluster-cluster aggregation processes are studied as a function of the relative concentration of the two species. Both short- and long-time behaviors are investigated by means of three-dimensional off-lattice Brownian dynamics simulations. At short aggregation times, the validity of the Hogg-Healy-Fuerstenau approximation is shown. At long times, a single large cluster containing all initial particles is found to form when the relative concentration of the minority particles lies above a critical value. Below that value, stable aggregates remain in the system. These stable aggregates are composed of a few minority particles that are highly covered by majority ones. Our off-lattice simulations reveal a value of approximately 0.15 for the critical relative concentration. A qualitative explanation scheme for the formation and growth of the stable aggregates is developed. The simulations also explain the phenomenon of monomer discrimination that was observed recently in single-cluster light scattering experiments.
Verification of nonlinear particle simulation of radio frequency waves in fusion plasmas
NASA Astrophysics Data System (ADS)
Kuley, Animesh; Bao, Jian; Lin, Zhihong
2015-11-01
A nonlinear global particle simulation model has been developed in GTC to study the nonlinear interactions of radio frequency (RF) waves with plasmas in tokamaks. In this model, ions are treated as fully kinetic particles using the Vlasov equation, and electrons are treated as guiding centers using the drift kinetic equation. A Boris push scheme for the ion motion has been implemented in toroidal geometry using magnetic coordinates and successfully verified for ion cyclotron, ion Bernstein and lower hybrid waves. The nonlinear GTC simulation of the lower hybrid wave shows that the amplitude of the electrostatic potential is oscillatory due to the trapping of resonant electrons by the electric field of the wave. Nonresonant parametric decay into an ion Bernstein wave (IBW) sideband and an ion cyclotron quasimode (ICQM) is observed. The ICQM induces ion perpendicular heating, with a heating rate proportional to the pump wave intensity. This work is supported by PPPL subcontract number S013849-F and US Department of Energy (DOE) SciDAC GSEP Program.
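The Boris push referred to above is standard and compact: a half electric kick, a norm-preserving magnetic rotation, and a second half kick. Below is a minimal nonrelativistic sketch in Cartesian coordinates (our illustration; GTC implements the scheme in magnetic coordinates in toroidal geometry).

```python
import numpy as np

def boris_push(x, v, E, B, qm, dt):
    """One Boris step: half electric kick, exact magnetic rotation of the
    velocity, half electric kick, then a position update (leapfrog staggering
    of x and v is implied). qm is the charge-to-mass ratio q/m."""
    v_minus = v + 0.5 * qm * E * dt
    t = 0.5 * qm * B * dt                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # |v_plus| == |v_minus| exactly
    v_new = v_plus + 0.5 * qm * E * dt
    return x + v_new * dt, v_new
```

With E = 0 the rotation conserves kinetic energy to round-off over arbitrarily many gyroperiods, which is the property that makes the scheme attractive for long RF-wave simulations.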
Qin, Nan; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B.; Parodi, Katia; Jia, Xun
2017-01-01
Monte Carlo (MC) simulation is considered the most accurate method for calculating absorbed dose and the fundamental physics quantities related to biological effects in carbon ion therapy. To improve its computational efficiency, we have developed a GPU-oriented fast MC package for carbon therapy named goCMC. goCMC simulates particle transport in voxelized geometry with kinetic energy up to 450 MeV/u. A Class II condensed-history simulation scheme with a continuous-slowing-down approximation was employed. Energy straggling and multiple scattering were modeled. δ-electrons were terminated with their energy locally deposited. Four types of nuclear interactions were implemented in goCMC, i.e., carbon-hydrogen, carbon-carbon, carbon-oxygen and carbon-calcium inelastic collisions. Total cross section data from Geant4 were used. Secondary particles produced in these interactions were sampled according to particle yield, with energy and directional distribution data derived from Geant4 simulation results. Secondary charged particles were transported following the condensed-history scheme, whereas secondary neutral particles were ignored. goCMC was developed under the OpenCL framework and is executable on different platforms, e.g. GPU and multi-core CPU. We have validated goCMC against Geant4 in cases with different beam energies and phantoms, including four homogeneous phantoms, one heterogeneous half-slab phantom, and one patient case. For each case 3 × 10⁷ carbon ions were simulated, such that in the region with dose greater than 10% of the maximum dose, the mean relative statistical uncertainty was less than 1%. Good agreement in dose distributions and range estimates between goCMC and Geant4 was observed. 3D gamma passing rates with a 1%/1 mm criterion were over 90% within the 10% isodose line except in two extreme cases, and those with a 2%/1 mm criterion were all over 96%. Efficiency and code portability were tested with different GPUs and CPUs. 
Depending on the beam energy and voxel size, the computation time to simulate 10⁷ carbons was 9.9-125 s, 2.5-50 s and 60-612 s on an AMD Radeon GPU card, an NVidia GeForce GTX 1080 GPU card and an Intel Xeon E5-2640 CPU, respectively. The combined accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon ion therapy. PMID:28140352
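The gamma test quoted above combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. A 1D sketch of the metric follows; it is our simplified globally normalized version, not the evaluation code used in the paper.

```python
import numpy as np

def gamma_pass_rate(ref, ev, dx, dose_tol=0.01, dist_tol=1.0, threshold=0.1):
    """1D gamma analysis. For each reference point above threshold*max(ref),
    gamma = min over evaluated points of
        sqrt((dose diff / (dose_tol*max(ref)))^2 + (distance / dist_tol)^2),
    and the point passes if gamma <= 1. Returns the passing fraction.
    dx is the sample spacing in mm; dist_tol is the DTA in mm."""
    ref = np.asarray(ref, float)
    ev = np.asarray(ev, float)
    d_norm = dose_tol * ref.max()           # global dose normalization
    x = np.arange(len(ref)) * dx
    mask = ref >= threshold * ref.max()     # analyze only the high-dose region
    passed = 0
    for i in np.flatnonzero(mask):
        dd = (ev - ref[i]) / d_norm
        dr = (x - x[i]) / dist_tol
        if np.min(np.hypot(dd, dr)) <= 1.0:
            passed += 1
    return passed / mask.sum()
```

A sub-millimeter spatial shift passes a 1%/1 mm test, while a uniform dose error of a few percent fails it, which is why the metric is sensitive to both range and dose accuracy.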
Qin, Nan; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun
2017-05-07
Monte Carlo (MC) simulation is considered as the most accurate method for calculation of absorbed dose and fundamental physics quantities related to biological effects in carbon ion therapy. To improve its computational efficiency, we have developed a GPU-oriented fast MC package named goCMC, for carbon therapy. goCMC simulates particle transport in voxelized geometry with kinetic energy up to 450 MeV u -1 . Class II condensed history simulation scheme with a continuous slowing down approximation was employed. Energy straggling and multiple scattering were modeled. δ-electrons were terminated with their energy locally deposited. Four types of nuclear interactions were implemented in goCMC, i.e. carbon-hydrogen, carbon-carbon, carbon-oxygen and carbon-calcium inelastic collisions. Total cross section data from Geant4 were used. Secondary particles produced in these interactions were sampled according to particle yield with energy and directional distribution data derived from Geant4 simulation results. Secondary charged particles were transported following the condensed history scheme, whereas secondary neutral particles were ignored. goCMC was developed under OpenCL framework and is executable on different platforms, e.g. GPU and multi-core CPU. We have validated goCMC with Geant4 in cases with different beam energy and phantoms including four homogeneous phantoms, one heterogeneous half-slab phantom, and one patient case. For each case [Formula: see text] carbon ions were simulated, such that in the region with dose greater than 10% of maximum dose, the mean relative statistical uncertainty was less than 1%. Good agreements for dose distributions and range estimations between goCMC and Geant4 were observed. 3D gamma passing rates with 1%/1 mm criterion were over 90% within 10% isodose line except in two extreme cases, and those with 2%/1 mm criterion were all over 96%. Efficiency and code portability were tested with different GPUs and CPUs. 
Depending on the beam energy and voxel size, the computation time to simulate [Formula: see text] carbons was 9.9-125 s, 2.5-50 s and 60-612 s on an AMD Radeon GPU card, an NVidia GeForce GTX 1080 GPU card and an Intel Xeon E5-2640 CPU, respectively. The combined accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon ion therapy.
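The continuous-slowing-down loop at the heart of such a condensed-history scheme can be sketched in a few lines (a toy stopping power and invented numbers, not goCMC's physics data):

```python
def transport_ion(e0, step_cm, stopping_power, cutoff=0.1):
    """March an ion through equal path-length steps, depositing S(E)*dx
    per step (continuous slowing down) until E falls below a cutoff."""
    deposits = []
    e = e0
    while e > cutoff:
        de = min(stopping_power(e) * step_cm, e)  # never deposit more than E
        deposits.append(de)
        e -= de
    return deposits

# Toy stopping power that rises as the ion slows (qualitatively Bethe-like).
dose = transport_ion(10.0, 0.1, lambda e: 5.0 / e)
```

The per-step deposit grows as the ion slows, so most of the energy lands near the end of range, the qualitative origin of the Bragg peak exploited in carbon therapy.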
Fluctuations, noise, and numerical methods in gyrokinetic particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas Grant
In this thesis, the role of the "marker weight" (or "particle weight") used in gyrokinetic particle-in-cell (PIC) simulations is explored. Following a review of the foundations and major developments of gyrokinetic theory, key concepts of the Monte Carlo methods which form the basis for PIC simulations are set forth. Consistent with these methods, a Klimontovich representation for the set of simulation markers is developed in the extended phase space {R, v∥, v⊥, W, P} (with the additional coordinates representing weight fields); clear distinctions are consequently established between the marker distribution function and various physical distribution functions (arising from diverse moments of the marker distribution). Equations describing transport in the simulation are shown to be easily derivable using the formalism. The necessity of a two-weight model for nonequilibrium simulations is demonstrated, and a simple method for calculating the second (background-related) weight is presented. Procedures for arbitrary marker loading schemes in gyrokinetic PIC simulations are outlined; various initialization methods for simulations are compared. Possible effects of inadequate velocity-space resolution in gyrokinetic continuum simulations are explored. The "partial-f" simulation method is developed and its limitations indicated. A quasilinear treatment of electrostatic drift waves is shown to correctly predict nonlinear saturation amplitudes, and the relevance of the gyrokinetic fluctuation-dissipation theorem in assessing the effects of discrete-marker-induced statistical noise on the resulting marginally stable states is demonstrated.
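The noise-reduction argument behind such weight-based methods, namely that δf weights scale the statistical error with the perturbation rather than with the full distribution, can be illustrated with a toy 1D estimate (all distributions and numbers are invented for demonstration):

```python
import math
import random

def deltaf_amplitude(n, eps, rng):
    """delta-f estimate of the cos-mode amplitude of F = (1 + eps*cos x)/(2*pi):
    markers are loaded uniformly and carry weights w = delta f / g."""
    s = 0.0
    for _ in range(n):
        x = rng.uniform(0.0, 2.0 * math.pi)
        s += (eps * math.cos(x)) * math.cos(x)   # weight projected on the mode
    return 2.0 * s / n

def fullf_amplitude(n, eps, rng):
    """full-F estimate: markers sampled from F itself (rejection sampling);
    the mode amplitude is read off the marker positions alone."""
    s, got = 0.0, 0
    while got < n:
        x = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, 1.0 + abs(eps)) < 1.0 + eps * math.cos(x):
            s += math.cos(x)
            got += 1
    return 2.0 * s / n

rng = random.Random(42)
a_df = deltaf_amplitude(10_000, 0.01, rng)   # statistical error ~ eps/sqrt(N)
a_ff = fullf_amplitude(10_000, 0.01, rng)    # statistical error ~ 1/sqrt(N)
```

For a 1% perturbation the full-F statistical error is comparable to the signal itself, while the δf error is two orders of magnitude smaller, the motivation for using δf in the linear stage.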
Aberration correction for charged particle lithography
NASA Astrophysics Data System (ADS)
Munro, Eric; Zhu, Xieqing; Rouse, John A.; Liu, Haoning
2001-12-01
At present, the throughput of projection-type charged particle lithography systems, such as PREVAIL and SCALPEL, is limited primarily by the combined effects of field curvature in the projection lenses and Coulomb interaction in the particle beam. These are fundamental physical limitations, inherent in charged particle optics, so there seems little scope for significantly improving the design of such systems using conventional rotationally symmetric electron lenses. This paper explores the possibility of overcoming the field aberrations of round electron lenses by using a novel aberration corrector, proposed by Professor H. Rose of the University of Darmstadt, called a hexapole planator. In this scheme, a set of round lenses is first used to simultaneously correct distortion and coma. The hexapole planator is then used to correct the field curvature and astigmatism, and to create a negative spherical aberration. The size of the transfer lenses around the planator can then be adjusted to zero the residual spherical aberration. In this way, an electron optical projection system is obtained that is free of all primary geometrical aberrations. In this paper, the feasibility of this concept has been studied by computer simulation. The simulations verify that this scheme can indeed work, for both electrostatic and magnetic projection systems. Two design studies have been carried out. The first is for an electrostatic system that could be used for ion beam lithography, and the second is for a magnetic projection system for electron beam lithography. In both cases, designs have been achieved in which all primary third-order geometrical aberrations are totally eliminated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soria, José, E-mail: jose.soria@probien.gob.ar; Gauthier, Daniel; Flamant, Gilles
2015-09-15
Highlights: • A CFD two-scale model is formulated to simulate heavy metal vaporization from waste incineration in fluidized beds. • MSW particle is modelled with the macroscopic particle model. • Influence of bed dynamics on HM vaporization is included. • CFD predicted results agree well with experimental data reported in literature. • This approach may be helpful for fluidized bed reactor modelling purposes. - Abstract: Municipal Solid Waste Incineration (MSWI) in fluidized bed is a very interesting technology, mainly due to its high combustion efficiency, great flexibility for treating several types of waste fuels, and reduction in pollutants emitted with the flue gas. However, there is great concern with respect to the fate of heavy metals (HM) contained in MSW and their environmental impact. In this study, a coupled two-scale CFD model was developed for MSWI in a bubbling fluidized bed. It presents an original scheme that combines a single-particle model and a global fluidized bed model in order to represent HM vaporization during MSW combustion. Two of the most representative HM (Cd and Pb) and bed temperatures ranging between 923 and 1073 K have been considered. This new approach uses ANSYS FLUENT 14.0 as the modelling platform for the simulations, along with a complete set of self-developed user-defined functions (UDFs). The simulation results are compared to the experimental data obtained previously by the research group in a lab-scale fluid bed incinerator. The comparison indicates that the proposed CFD model predicts well the evolution of the HM release for the bed temperatures analyzed. It shows that both bed temperature and bed dynamics influence the HM vaporization rate. It can be concluded that CFD is a rigorous tool that provides valuable information about HM vaporization and that the original two-scale simulation scheme adopted allows a better representation of the actual particle behavior in a fluid bed incinerator.
NASA Astrophysics Data System (ADS)
Cholakian, Arineh; Beekmann, Matthias; Colette, Augustin; Coll, Isabelle; Siour, Guillaume; Sciare, Jean; Marchand, Nicolas; Couvidat, Florian; Pey, Jorge; Gros, Valerie; Sauvage, Stéphane; Michoud, Vincent; Sellegri, Karine; Colomb, Aurélie; Sartelet, Karine; Langley DeWitt, Helen; Elser, Miriam; Prévot, André S. H.; Szidat, Sonke; Dulac, François
2018-05-01
The simulation of fine organic aerosols with CTMs (chemistry-transport models) in the western Mediterranean basin has not been studied until recently. The ChArMEx (the Chemistry-Aerosol Mediterranean Experiment) SOP 1b (Special Observation Period 1b) intensive field campaign in summer of 2013 gathered a large and comprehensive data set of observations, allowing the study of different aspects of the Mediterranean atmosphere including the formation of organic aerosols (OAs) in 3-D models. In this study, we used the CHIMERE CTM to perform simulations for the duration of the SAFMED (Secondary Aerosol Formation in the MEDiterranean) period (July to August 2013) of this campaign. In particular, we evaluated four schemes for the simulation of OA, including the CHIMERE standard scheme, the VBS (volatility basis set) standard scheme with two parameterizations including aging of biogenic secondary OA, and a modified version of the VBS scheme which includes fragmentation and formation of nonvolatile OA. The results from these four schemes are compared to observations at two stations in the western Mediterranean basin, located on Ersa, Cap Corse (Corsica, France), and at Cap Es Pinar (Mallorca, Spain). These observations include OA mass concentration, PMF (positive matrix factorization) results of different OA fractions, and 14C observations showing the fossil or nonfossil origins of carbonaceous particles. Because of the complex orography of the Ersa site, an original method for calculating an orographic representativeness error (ORE) has been developed. It is concluded that the modified VBS scheme is close to observations in all three aspects mentioned above; the standard VBS scheme without BSOA (biogenic secondary organic aerosol) aging also has a satisfactory performance in simulating the mass concentration of OA, but not for the source origin analysis comparisons. In addition, the OA sources over the western Mediterranean basin are explored. 
OA shows a major biogenic origin, especially at several hundred meters above the surface; however, over the Gulf of Genoa near the surface the anthropogenic origin is of similar importance. A general assessment of other species was performed to evaluate the robustness of the simulations for this particular domain before evaluating the OA simulation schemes. It is also shown that the Cap Corse site presents important orographic complexity, which makes comparison between model simulations and observations difficult. A method was designed to estimate an orographic representativeness error for species measured at Ersa; it yields an uncertainty of between 50 and 85% for primary pollutants and around 2-10% for secondary species.
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but higher order methods are found to be computationally superior due to faster convergence to the exact result as a function of the charge support.
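The operator-adjusted source idea can be demonstrated in one dimension: apply the discrete Laplacian to the analytically known potential to build the source, and the finite-difference solve then returns that potential to round-off (a minimal sketch with an invented smooth potential, not the paper's 3D Green's function machinery):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main and super-diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Grid on [-4, 4]; "known" regularized potential phi(x) = exp(-x^2).
n, h = 81, 8.0 / 80
xs = [-4.0 + i * h for i in range(n)]
phi = [math.exp(-x * x) for x in xs]

# Operator-adjusted source: apply the *discrete* Laplacian to the exact
# potential, so the same finite-difference operator reproduces phi exactly.
rhs = [(phi[i - 1] - 2 * phi[i] + phi[i + 1]) / h**2 for i in range(1, n - 1)]
rhs[0] -= phi[0] / h**2        # Dirichlet boundary values folded into the RHS
rhs[-1] -= phi[-1] / h**2

m = n - 2
sol = thomas([1 / h**2] * m, [-2 / h**2] * m, [1 / h**2] * m, rhs)
err = max(abs(s - p) for s, p in zip(sol, phi[1:-1]))
```

Because the same discrete operator builds the source and performs the solve, the discretization error cancels, which is the mechanism behind the "exact up to numerical precision" claim for well-supported charge distributions.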
An experimental and theoretical investigation on torrefaction of a large wet wood particle.
Basu, Prabir; Sadhukhan, Anup Kumar; Gupta, Parthapratim; Rao, Shailendra; Dhungana, Alok; Acharya, Bishnu
2014-05-01
A competitive kinetic scheme representing primary and secondary reactions is proposed for torrefaction of large wet wood particles. Drying and the diffusive, convective and radiative modes of heat transfer are considered, including particle shrinkage during torrefaction. The model prediction compares well with the experimental results for both mass fraction residue and temperature profiles for biomass particles. The effects of temperature, residence time and particle size on torrefaction of cylindrical wood particles are investigated through model simulations. For large biomass particles heat transfer is identified as one of the controlling factors for torrefaction. The optimum torrefaction temperature, residence time and particle size are identified. The model may thus be integrated with CFD analysis to estimate the performance of an existing torrefier for a given feedstock. The performance analysis may also provide useful insight for the design and development of an efficient torrefier.
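A minimal competitive scheme of this kind, two parallel Arrhenius pathways competing for the same biomass, can be sketched as follows (all kinetic parameters are invented for illustration; the paper's model adds secondary reactions, drying and heat transfer):

```python
import math

def torrefy(temp_k, t_end, dt=0.1):
    """Toy competitive kinetics: biomass -> volatiles (k1) competes with
    biomass -> solid residue (k2); explicit Euler integration in time."""
    k1 = 1.0e4 * math.exp(-8.0e4 / (8.314 * temp_k))   # made-up Arrhenius pair
    k2 = 5.0e3 * math.exp(-7.5e4 / (8.314 * temp_k))
    b, v, ch = 1.0, 0.0, 0.0        # biomass, volatiles, char (mass fractions)
    t = 0.0
    while t < t_end:
        v += k1 * b * dt
        ch += k2 * b * dt
        b -= (k1 + k2) * b * dt
        t += dt
    return b, v, ch

b, v, ch = torrefy(550.0, 3600.0)   # one hour at 550 K (illustrative)
```

The ratio k1/k2 shifts with temperature because the two pathways have different activation energies, which is how a competitive scheme captures the temperature dependence of solid yield.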
Lagrangian particles with mixing. I. Simulating scalar transport
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2009-06-01
The physical similarity and mathematical equivalence of continuous diffusion and particle random walk forms one of the cornerstones of modern physics and the theory of stochastic processes. The randomly walking particles do not need to possess any properties other than location in physical space. However, particles used in many models dealing with simulating turbulent transport and turbulent combustion do possess a set of scalar properties, and mixing between particle properties is performed to reflect the dissipative nature of the diffusion processes. We show that continuous scalar transport and diffusion can be accurately specified by means of localized mixing between randomly walking Lagrangian particles with scalar properties, and we assess the errors associated with this scheme. Particles with scalar properties and localized mixing represent an alternative formulation for the process selected to represent the continuous diffusion. Simulating diffusion by Lagrangian particles with mixing involves three main competing requirements: minimizing stochastic uncertainty, minimizing bias introduced by numerical diffusion, and preserving independence of particles. These requirements are analyzed for two limiting cases: mixing between two particles and mixing between a large number of particles. The problem of possible dependences between particles is the most complicated. This problem is analyzed using a coupled chain of equations that has similarities with the Bogoliubov-Born-Green-Kirkwood-Yvon chain in statistical physics. Dependences between particles can be significant when particles are in close proximity, resulting in a reduced rate of mixing. This work develops further ideas introduced in the previously published letter [Phys. Fluids 19, 031702 (2007)]. Paper I of this work is followed by Paper II [Phys. Fluids 19, 065102 (2009)], where modeling of turbulent reacting flows by Lagrangian particles with localized mixing is specifically considered.
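The scheme's two ingredients, random walk plus localized mixing, can be sketched as follows (a minimal Curl-type pairwise relaxation between position-sorted neighbours; all parameters are illustrative):

```python
import random

def step(particles, dt, diff, mix_rate, rng):
    """One scalar-transport step: Gaussian random walk in x, then localized
    mixing in which each particle relaxes its scalar toward the mean of its
    nearest-neighbour pair. Pairwise relaxation conserves the scalar exactly."""
    s = (2.0 * diff * dt) ** 0.5
    for p in particles:
        p[0] += rng.gauss(0.0, s)
    particles.sort(key=lambda p: p[0])          # localize: mix adjacent pairs
    for i in range(0, len(particles) - 1, 2):
        a, b = particles[i], particles[i + 1]
        mean = 0.5 * (a[1] + b[1])
        a[1] += mix_rate * dt * (mean - a[1])
        b[1] += mix_rate * dt * (mean - b[1])

rng = random.Random(0)
# Step profile: scalar 0 on the left half, 1 on the right half.
parts = [[x / 100.0, 1.0 if x >= 50 else 0.0] for x in range(100)]
total0 = sum(p[1] for p in parts)
for _ in range(50):
    step(parts, 0.01, 0.5, 5.0, rng)
```

The mean scalar is conserved to round-off while the scalar variance decays, the dissipative behavior that the mixing step is meant to mimic.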
NASA Technical Reports Server (NTRS)
Gao, Chloe Y.; Tsigaridis, Kostas; Bauer, Susanne E.
2017-01-01
The gas-particle partitioning and chemical aging of semi-volatile organic aerosol are presented in a newly developed box model scheme, where their effect on the growth, composition, and mixing state of particles is examined. The volatility-basis set (VBS) framework is implemented into the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations in multiple mixing-state classes. The new scheme, MATRIX-VBS, has the potential to significantly advance the representation of organic aerosols in Earth system models by improving upon the conventional representation as non-volatile particulate organic matter, often also with an assumed fixed size distribution. We present results from idealized cases representing Beijing, Mexico City, a Finnish forest, and a southeastern US forest, and investigate the evolution of mass concentrations and volatility distributions for organic species across the gas and particle phases, as well as assessing their mixing state among aerosol populations. Emitted semi-volatile primary organic aerosols evaporate almost completely in the intermediate-volatility range, while they remain in the particle phase in the low-volatility range. Their volatility distribution at any point in time depends on the applied emission factors, oxidation by OH radicals, and temperature. We also compare against parallel simulations with the original scheme, which represented only the particulate and non-volatile component of the organic aerosol, examining how differently the condensed-phase organic matter is distributed across the mixing states in the model. The results demonstrate the importance of representing organic aerosol as semi-volatile, and of explicitly calculating the partitioning of organic species between the gas and particulate phases.
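The absorptive partitioning at the core of any VBS implementation solves a small fixed-point problem: the particle fraction of each volatility bin depends on the total condensed organic mass C_OA, which itself sums those fractions. A hedged sketch (illustrative C* bins and loadings in µg m⁻³, not the MATRIX-VBS configuration):

```python
def partition(c_total, c_star, c_seed=0.0, iters=100):
    """Pankow-type absorptive partitioning across volatility bins:
    the particle fraction of bin i is 1/(1 + C*_i / C_OA), with the
    condensed mass C_OA solved self-consistently by fixed-point iteration."""
    c_oa = 0.5 * sum(c_total) + c_seed        # initial guess
    for _ in range(iters):
        c_oa_new = c_seed + sum(ct / (1.0 + cs / c_oa)
                                for ct, cs in zip(c_total, c_star))
        if abs(c_oa_new - c_oa) < 1e-12:
            break
        c_oa = c_oa_new
    fp = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
    return c_oa, fp

# Four bins spanning six decades of saturation concentration C*.
c_oa, fp = partition([1.0, 1.0, 1.0, 1.0], [0.01, 1.0, 100.0, 1.0e4])
```

The low-volatility bin stays almost entirely in the particle phase and the high-volatility bin almost entirely in the gas phase, reproducing the qualitative behavior described for the emitted semi-volatile material above.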
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a correction of the kernel gradient and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, three test cases, namely the impacting drop, the injection molding of a C-shaped cavity, and the extrudate swell, are simulated. The numerical results obtained are compared with those produced by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress) for removing the tensile instability is further performed. All numerical results agree well with the available data.
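The generic particle-shifting idea, moving particles a small distance down the gradient of particle concentration so that clustered (tension-prone) distributions even out, can be sketched in 1D (illustrative kernel and coefficient; the paper's optimized technique adds further corrections):

```python
import math

def shifted(xs, h, coef=0.1):
    """Illustrative 1D particle shifting: estimate the concentration gradient
    from a Gaussian-kernel sum over neighbours and shift each particle a
    small distance down that gradient."""
    out = []
    for i, xi in enumerate(xs):
        grad = 0.0
        for j, xj in enumerate(xs):
            if i == j:
                continue
            q = (xi - xj) / h
            grad += -2.0 * q * math.exp(-q * q) / h   # d/dx of Gaussian kernel
        out.append(xi - coef * h * h * grad)
    return out

pair = shifted([0.0, 0.1], 1.0)   # two clustered particles move apart
```

Two particles much closer than the smoothing length repel slightly under the shift while their midpoint is preserved, which is exactly the regularizing behavior that suppresses particle clumping.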
ZENO: N-body and SPH Simulation Codes
NASA Astrophysics Data System (ADS)
Barnes, Joshua E.
2011-02-01
The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: Structured data file utilities facilitate basic operations on binary data, including import/export of ZENO data to other systems. Snapshot generation routines create particle distributions with various properties. Systems with user-specified density profiles can be realized in collisionless or gaseous form; multiple spherical and disk components may be set up in mutual equilibrium. Snapshot manipulation routines permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle. Simulation codes include both pure N-body and combined N-body/SPH programs: pure N-body codes are available in both uniprocessor and parallel versions, and SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models. Snapshot analysis programs calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions. Visualization programs generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.
NASA Astrophysics Data System (ADS)
Wang, Dan; Yan, Lixin; Du, YingChao; Huang, Wenhui; Gai, Wei; Tang, Chuanxiang
2018-02-01
Premodulated comblike electron bunch trains are used in a wide range of research fields, such as wakefield-based particle acceleration and tunable radiation sources. We propose an optimized compression scheme for bunch trains in which a traveling wave accelerator tube and a downstream drift segment are together used as a compressor. When the phase at which the bunch train is injected into the accelerator tube is set well below -100°, velocity bunching occurs in a deep overcompression mode, which reverses the phase space and maintains a velocity difference within the injected beam, thereby giving rise to a compressed comblike electron bunch train after a few-meter-long drift segment; we call this the deep overcompression scheme. The main benefits of this scheme are the relatively large phase acceptance and the uniformity of compression for the bunch train. The comblike bunch train generated via this scheme is widely tunable: for the two-bunch case, the energy and time spacings can be continuously adjusted from +1 to -1 MeV and from 13 to 3 ps, respectively, by varying the injected phase of the bunch train from -220° to -140°. Both theoretical analysis and beam dynamics simulations are presented to study the properties of the deep overcompression scheme.
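The kinematics behind the scheme, in which the trailing bunch leaves the accelerator with more energy and therefore catches up in the drift, can be checked with a short relativistic calculation (the energies, drift length and initial spacing here are illustrative, not the experiment's parameters):

```python
import math

C = 2.99792458e8       # speed of light, m/s
MEC2 = 0.510998950     # electron rest energy, MeV

def drift_time(e_kin_mev, length_m):
    """Time for an electron of given kinetic energy to traverse a drift."""
    gamma = 1.0 + e_kin_mev / MEC2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return length_m / (beta * C)

# Overcompression gives the trailing bunch the higher energy, so it
# catches up with (and here overtakes) the leading bunch in the drift.
spacing0 = 13e-12                                       # initial spacing, s
catch_up = drift_time(5.0, 10.0) - drift_time(5.5, 10.0)
spacing = spacing0 - catch_up                           # negative: overtaken
```

Even at 5 MeV, where both bunches are highly relativistic, a 0.5 MeV energy difference over a 10 m drift closes tens of picoseconds, which is why a few-meter drift suffices to compress, or overcompress, a picosecond-scale train.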
Re-formulation and Validation of Cloud Microphysics Schemes
NASA Astrophysics Data System (ADS)
Wang, J.; Georgakakos, K. P.
2007-12-01
The research focuses on improving quantitative precipitation forecasts by removing significant uncertainties in the cloud microphysics schemes embedded in models such as WRF and MM5 and in cloud-resolving models such as GCE. Reformulation of several production terms in these microphysics schemes was found necessary. When estimating the four graupel production terms involved in the accretion between rain, snow and graupel, current microphysics schemes assume that all raindrops and snow particles fall at their mass-weighted mean terminal velocities, so that analytic solutions can be found for these production terms. Initial analysis and tests showed that these approximate analytic solutions give significant and systematic overestimates of these terms and thus become one of the major error sources of graupel overproduction and the associated extreme radar reflectivity in simulations. These results are corroborated by several reports. For example, the analytic solution overestimates graupel production by collisions between raindrops and snow by up to 230%. The treatment of "pure" snow (not rimed) and "pure" graupel (completely rimed) in current microphysics schemes excludes intermediate forms between the two and thus becomes a significant cause of graupel overproduction in hydrometeor simulations. In addition, the generation of the same-density graupel by both the freezing of supercooled water and the riming of snow may cause underestimation of graupel production by freezing. A parameterization scheme for the riming degree of snow is proposed, and dynamic fallspeed-diameter and density-diameter relationships of rimed snow are then assigned to graupel based on the diagnosed riming degree. To test whether these new treatments can improve quantitative precipitation forecasts, Hurricane Katrina and a severe winter snowfall event in the Sierra Nevada Range are selected as case studies.
A series of control simulations and sensitivity tests was conducted for these two cases. Two statistical methods are used to compare the radar reflectivity simulated by the model with that detected by ground-based and airborne radar at different height levels. It was found that the changes made to the current microphysical schemes improve QPF and microphysics simulation significantly.
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory for time step sizes much larger than allowed by typical CFL restrictions.
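The essence of an unconditionally stable time integrator, boundedness for any time step with no CFL-type limit, can be illustrated with a standard Crank-Nicolson step for a harmonic oscillator (a generic stand-in only; the paper's wave-equation solver is a different, more sophisticated method):

```python
def cn_step(x, v, w, dt):
    """One Crank-Nicolson step for x'' = -w^2 x, with the 2x2 implicit
    system solved in closed form. The scheme is unconditionally stable and
    exactly conserves the quadratic invariant w^2 x^2 + v^2."""
    a2 = (w * dt / 2.0) ** 2
    xn = ((1.0 - a2) * x + dt * v) / (1.0 + a2)
    vn = ((1.0 - a2) * v - w * w * dt * x) / (1.0 + a2)
    return xn, vn

x, v, w, dt = 1.0, 0.0, 1.0, 10.0     # time step 10x the oscillation scale
for _ in range(1000):
    x, v = cn_step(x, v, w, dt)
energy = w * w * x * x + v * v        # stays at its initial value of 1
```

An explicit scheme at w*dt = 10 would diverge within a few steps; the implicit update remains bounded for any dt, which is the property that frees the PIC time step from the spatial resolution.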
Combining electromagnetic gyro-kinetic particle-in-cell simulations with collisions
NASA Astrophysics Data System (ADS)
Slaby, Christoph; Kleiber, Ralf; Könies, Axel
2017-09-01
It has been an open question whether for electromagnetic gyro-kinetic particle-in-cell (PIC) simulations pitch-angle collisions and the recently introduced pullback transformation scheme (Mishchenko et al., 2014; Kleiber et al., 2016) are consistent. This question is positively answered by comparing the PIC code EUTERPE with an approach based on an expansion of the perturbed distribution function in eigenfunctions of the pitch-angle collision operator (Legendre polynomials) to solve the electromagnetic drift-kinetic equation with collisions in slab geometry. It is shown how both approaches yield the same results for the frequency and damping rate of a kinetic Alfvén wave and how the perturbed distribution function is substantially changed by the presence of pitch-angle collisions.
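That Legendre polynomials are eigenfunctions of the pitch-angle collision operator, the property the benchmark expansion relies on, is easy to verify numerically: C[P_l] = -l(l+1)/2 P_l for the Lorentz operator C[f] = (1/2) d/dξ[(1-ξ²) df/dξ] (a finite-difference sketch, unrelated to the EUTERPE implementation):

```python
def legendre_p2(x):
    """Second Legendre polynomial P_2(x) = (3x^2 - 1)/2."""
    return 0.5 * (3.0 * x * x - 1.0)

def lorentz(f, x, h=1e-4):
    """Pitch-angle (Lorentz) collision operator
    C[f] = (1/2) d/dxi[(1 - xi^2) df/dxi], by nested central differences."""
    g = lambda t: (1.0 - t * t) * (f(t + h) - f(t - h)) / (2.0 * h)
    return 0.5 * (g(x + h) - g(x - h)) / (2.0 * h)

eig = lorentz(legendre_p2, 0.3)   # should equal -2*(2+1)/2 * P_2(0.3) = -3*P_2(0.3)
```

Because each P_l is simply scaled by -l(l+1)/2, expanding the perturbed distribution in Legendre polynomials diagonalizes the collision term, which is what makes the eigenfunction approach a clean benchmark for the PIC code.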
NASA Astrophysics Data System (ADS)
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
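An ice nucleation active site (INAS) density parameterization of this kind reduces to a two-line calculation (the exponential fit form is standard for desert dust; the constants and surface area below are illustrative assumptions, not the AIDA-derived values):

```python
import math

def frozen_fraction(temp_c, area_m2, a=-0.517, b=8.934):
    """Singular description of immersion freezing: an active site density
    n_s(T) = exp(a*T + b) per m^2 of immersed surface (illustrative fit
    constants) gives the frozen droplet fraction 1 - exp(-n_s * A) for
    droplets that each immerse dust surface area A."""
    ns = math.exp(a * temp_c + b)
    return 1.0 - math.exp(-ns * area_m2)
```

Because n_s depends only on temperature (no nucleation-rate time dependence), cooling a population of droplets maps directly onto a frozen fraction, which is what makes the scheme cheap enough for cloud-resolving models.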
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*c=0.053, ρ*c=0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver which implements the PM scheme while enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the COmoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, an Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to an Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than the 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
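The role of the modified drift factor can be seen in a one-dimensional toy model: choosing the drift coefficient as (D(a1)-D(a0))/D'(ar) instead of the naive v*(a1-a0) makes Zel'dovich (1LPT) trajectories exact for arbitrarily large steps (toy growth factor for illustration, not a real cosmology):

```python
def modified_drift(x, v, D, dDda, a0, a1, ar):
    """FASTPM-style drift: the factor (D(a1)-D(a0))/D'(ar) replaces the
    naive v*(a1-a0), so a particle carrying the Zel'dovich velocity
    v = D'(ar)*s lands exactly on the 1LPT trajectory x = q + D(a1)*s."""
    return x + v * (D(a1) - D(a0)) / dDda(ar)

# Toy growth factor D(a) = a^2 (purely illustrative).
D = lambda a: a * a
dDda = lambda a: 2.0 * a

s = 0.5                          # Zel'dovich displacement of one particle (q = 0)
a0, a1, ar = 0.1, 0.5, 0.2       # one big step; velocity defined at a_r
x0 = D(a0) * s                   # on the 1LPT trajectory at a0
v = dDda(ar) * s                 # Zel'dovich velocity at a_r
x_mod = modified_drift(x0, v, D, dDda, a0, a1, ar)   # lands on D(a1)*s
x_std = x0 + v * (a1 - a0)                           # naive drift misses it
```

Enforcing exact 1LPT evolution at the integrator level is what lets FASTPM take very few, very large time steps without the large-scale growth errors a standard leapfrog would accumulate.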
A numerical framework for the direct simulation of dense particulate flow under explosive dispersal
NASA Astrophysics Data System (ADS)
Mo, H.; Lien, F.-S.; Zhang, F.; Cronin, D. S.
2018-05-01
In this paper, we present a Cartesian grid-based numerical framework for the direct simulation of dense particulate flow under explosive dispersal. This numerical framework is established through the integration of the following numerical techniques: (1) operator splitting for partitioned fluid-solid interaction in the time domain, (2) the second-order SSP Runge-Kutta method and third-order WENO scheme for temporal and spatial discretization of the governing equations, (3) the front-tracking method for evolving phase interfaces, (4) a field function proposed for low-memory-cost multimaterial mesh generation and fast collision detection, (5) an immersed boundary method developed for treating arbitrarily irregular and changing boundaries, and (6) a deterministic multibody contact and collision model. Employing the developed framework, this paper further studies particle jet formation under explosive dispersal by considering the effects of particle properties, particulate payload morphologies, and burster pressures. By simulating the dispersal of dense particle systems driven by pressurized gas, in which the driver pressure reaches 1.01325 × 10^{10} Pa (10^5 times the ambient pressure) and particles are impulsively accelerated from rest to speeds of more than 12000 m/s within 15 μs, it is demonstrated that the presented framework can effectively resolve coupled shock-shock, shock-particle, and particle-particle interactions in complex fluid-solid systems with shocked flow conditions, arbitrarily irregular particle shapes, and realistic multibody collisions.
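Technique (2), the second-order SSP Runge-Kutta method, is compact enough to state in full; in Shu-Osher form it is a convex combination of forward-Euler stages (a generic sketch with a trivial decay equation as the test problem):

```python
def ssp_rk2(f, u, dt):
    """Second-order strong-stability-preserving Runge-Kutta (Shu-Osher form):
    two forward-Euler stages combined convexly, so any nonlinear stability
    bound of forward Euler carries over, the property WENO schemes rely on."""
    u1 = [ui + dt * fi for ui, fi in zip(u, f(u))]
    return [0.5 * ui + 0.5 * (v + dt * fv)
            for ui, v, fv in zip(u, u1, f(u1))]

decay = lambda u: [-x for x in u]     # du/dt = -u, exact solution u*exp(-dt)
u = ssp_rk2(decay, [1.0], 0.1)        # one step: 1 - 0.1 + 0.1**2/2
```

The convexity is the point: each stage is a forward-Euler step, so total-variation or positivity bounds proved for forward Euler under a CFL condition hold for the combined scheme as well.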
Optimizing Scheme for Remote Preparation of Four-particle Cluster-like Entangled States
NASA Astrophysics Data System (ADS)
Wang, Dong; Ye, Liu
2011-09-01
Recently, Ma et al. (Opt. Commun. 283:2640, 2010) proposed a novel scheme for preparing a class of cluster-like entangled states based on a four-particle projective measurement. In this paper, we put forward a new and optimal scheme to realize the remote preparation of this class of cluster-like states with the aid of two bipartite partially entangled channels. Different from the previous scheme, we employ a two-particle projective measurement instead of a four-particle projective measurement during the preparation. Besides, the resource consumptions of our scheme, including the classical communication cost and the quantum resource consumption, are computed. Moreover, we discuss the features of our scheme and compare the resource consumption and operation complexity of the previous scheme and ours. The results show that our scheme is more economical and feasible than the previous one.
Accelerating NBODY6 with graphics processing units
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Aarseth, Sverre J.
2012-07-01
We describe the use of graphics processing units (GPUs) for speeding up the code NBODY6, which is widely used for direct N-body simulations. Over the years, the N^2 nature of the direct force calculation has proved a barrier to extending the particle number. Following an early introduction of force polynomials and individual time steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers, which sped up the force calculation further, we are now in the era of GPUs, where relatively small hardware systems are highly cost effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force, which typically involves some 99 per cent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures in which each interaction term is calculated using mainly single precision. We also discuss further strategies connected with the coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters, which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10^4-2 × 10^5 for a dual-GPU system attached to a standard PC.
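The O(N^2) direct force sum at the heart of such codes can be sketched as follows; this is a generic softened direct-summation loop (with G = 1 and a hypothetical softening parameter `eps`), purely illustrative and not the actual NBODY6 or GPU kernel code:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-sum gravitational accelerations (G = 1) -- the O(N^2) pair
    interaction that NBODY6 offloads to the GPU for the distant 'regular'
    force.  `eps` is a Plummer-type softening to avoid singularities."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                    # vectors from body i to all bodies
        r2 = (dr ** 2).sum(axis=1) + eps ** 2
        r2[i] = 1.0                          # dummy value to avoid divide-by-zero
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                      # exclude self-interaction
        acc[i] = (mass[:, None] * dr * inv_r3[:, None]).sum(axis=0)
    return acc
```

For two unit-mass bodies a unit distance apart, each acceleration has magnitude close to 1 and the pair forces are equal and opposite.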
Self-organizing plasma behavior in multiple grid IEC fusion devices for propulsion
NASA Astrophysics Data System (ADS)
McGuire, Thomas; Dietrich, Carl; Sedwick, Raymond
2004-11-01
Inertial Electrostatic Confinement (IEC) of charged particles for the purpose of producing fusion energy is a low-mass alternative to more traditional magnetic and inertial confinement fusion schemes. Experimental fusion production and energy efficiency in IEC devices to date have been hindered by confinement limitations. Analysis of the major loss mechanisms suggests that the low-pressure beam-beam interaction regime holds the most promise for improved-efficiency operation. Numerical simulation of multiple grid schemes shows greatly increased confinement times over contemporary single-grid designs through electrostatic focusing of the ion beams. An analytical model of this focusing is presented. With the increased confinement, beams self-organize from a uniform condition into bunches that oscillate at the bounce frequency. The bunches from neighboring beams are then observed to synchronize with each other. Analysis of the anisotropic collisional dynamics responsible for the synchronization is presented. The importance of focusing and density to the beam dynamics is examined. Further, this synchronization appears to modify the particle distribution so as to maintain the non-Maxwellian, beam-like energy profile within a bunch. The ability of synchronization to modify and counteract the thermalization process is examined analytically at the 2-body interaction level and as a conglomeration of particles via numerical simulation. A detailed description of the experiment under development at MIT to investigate the synchronization phenomenon is presented.
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general-purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, launches CUDA kernels, and writes simulation outputs to disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving the hybrid model equations, are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an existing, well-tested hybrid model of plasma that runs in parallel on multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for the computation of Faraday's equation, resulting in an explicit-implicit scheme for the hybrid model equations. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
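The particle-moving step in PIC and hybrid codes is commonly the Boris pusher (half electric kick, magnetic rotation, half electric kick). The abstract does not specify AMITIS's mover, so the sketch below is purely illustrative of the standard algorithm:

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris velocity update for charge-to-mass ratio qm and step dt.
    The magnetic rotation is energy-conserving by construction, which is
    why this pusher is the de facto standard in PIC/hybrid codes."""
    v_minus = v + 0.5 * qm * dt * E            # first half electric kick
    t = 0.5 * qm * dt * B                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)   # auxiliary rotation
    v_plus = v_minus + np.cross(v_prime, s)    # completed rotation
    return v_plus + 0.5 * qm * dt * E          # second half electric kick
```

With E = 0 the update is a pure rotation about B, so the particle speed is conserved to machine precision.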
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacón, Luis; CoCoMans Team
2014-10-01
For decades, the Vlasov-Darwin model has been recognized as attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.
Duy, Pham K; Chun, Seulah; Chung, Hoeil
2017-11-21
We have systematically characterized Raman scattering in solid samples with different particle sizes and investigated the resulting trends of particle-size-induced intensity variations. For this purpose, both lactose powders and pellets composed of five different particle sizes were prepared. Uniquely in this study, three spectral acquisition schemes with different sizes of laser illumination and detection windows were employed for the evaluation, since it was expected that the experimental configuration would be another factor potentially influencing the intensity of the lactose peak, along with the particle size itself. In both samples, the distribution of Raman photons became broader with increasing particle size, as the mean free path of laser photons, the average photon travel distance between consecutive scattering locations, became longer. When the particle size was the same, the Raman photon distribution was narrower in the pellets, since the individual particles were more densely packed in a given volume (a shorter mean free path). When the size of the detection window was small, the number of photons reaching the detector decreased as the photon distribution became broader. Meanwhile, a large-window detector was able to collect the widely distributed Raman photons more effectively; therefore, the trends of intensity change with particle size differed depending on the employed spectral acquisition scheme. Overall, Monte Carlo simulation was effective at probing the photon distribution inside the samples and helped to support the experimental observations.
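The mean-free-path argument can be illustrated with a toy Monte Carlo random walk (not the actual simulation used in the study): exponentially distributed free paths with a longer mean produce a broader lateral photon distribution. All parameters below are arbitrary illustrative choices:

```python
import math
import random

def lateral_spread(mfp, n_photons=2000, n_steps=20, seed=1):
    """Toy 2-D isotropic photon random walk.  Each photon takes steps with
    exponentially distributed lengths of mean `mfp` (the mean free path);
    returns the RMS lateral distance from the launch point."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_photons):
        x = y = 0.0
        for _ in range(n_steps):
            step = rng.expovariate(1.0 / mfp)        # exponential free path
            theta = rng.uniform(0.0, 2.0 * math.pi)  # isotropic scattering
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total += x * x + y * y
    return math.sqrt(total / n_photons)
```

For a random walk the RMS spread scales linearly with the mean free path, so a longer `mfp` (larger particles) yields a broader photon distribution, consistent with the trend described above.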
NASA Astrophysics Data System (ADS)
Sieron, Scott B.; Zhang, Fuqing; Clothiaux, Eugene E.; Zhang, Lily N.; Lu, Yinghui
2018-04-01
Cloud microwave scattering properties for the Community Radiative Transfer Model (CRTM) have previously been created to be consistent with the particle size distributions specified by the WSM6 single-moment microphysics scheme. Here, substitution of soft-sphere scattering properties with nonspherical particle scattering properties is explored in studies of Hurricane Karl (2010). A nonsphere replaces a sphere of the same maximum dimension, and the number of particles of a given size is scaled by the ratio of the sphere to nonsphere mass to keep the total mass of a given particle size unchanged. The replacement of homogeneous soft-sphere snow particles is necessary to resolve a highly evident issue in CRTM simulations: precipitation-affected brightness temperatures are generally warmer at 183 GHz than at 91.7 GHz, whereas the reverse is seen in observations. Using sector snowflakes resolves this issue better than using columns/plates, bullet rosettes, or dendrites. With sector snowflakes, both of these high frequencies have low simulated brightness temperatures compared to observations, providing a clear and consistent suggestion that snow is being overproduced in the examined simulation using WSM6 microphysics. Graupel causes cold biases at lower frequencies, which can be reduced by either reducing graupel water contents or replacing the microphysics-consistent spherical graupel particles with sector snowflakes. However, soft spheres are likely the better physical representation of graupel particles. The hypotheses that snow and graupel are overproduced in simulations using WSM6 microphysics will be examined more systematically in future studies through additional cases and ensemble data assimilation of all-sky microwave radiance observations.
An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.
1997-06-01
A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to an Eulerian grid and then mapping the computed stress tensors back to particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of particle types, sizes, and densities, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.
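The central MP-PIC idea of mapping particle properties to an Eulerian grid and interpolating grid fields back can be sketched in 1D with linear (cloud-in-cell) weights; this is a schematic illustration on a periodic grid, not the published method's actual discretization:

```python
import numpy as np

def scatter_to_grid(xp, wp, nx, dx):
    """Particle -> grid: deposit particle weights wp at positions xp onto
    nx grid nodes of spacing dx using linear (cloud-in-cell) weights."""
    grid = np.zeros(nx)
    for x, w in zip(xp, wp):
        i = int(x / dx)                    # left node index
        f = x / dx - i                     # fractional position in the cell
        grid[i] += w * (1.0 - f)
        grid[(i + 1) % nx] += w * f        # periodic wrap for this sketch
    return grid

def gather_to_particles(grid, xp, dx):
    """Grid -> particle: interpolate a grid field (e.g. a stress gradient)
    back to particle positions with the same linear weights."""
    nx = len(grid)
    out = []
    for x in xp:
        i = int(x / dx)
        f = x / dx - i
        out.append((1.0 - f) * grid[i] + f * grid[(i + 1) % nx])
    return out
```

Using the same weights for scatter and gather conserves the deposited total exactly and reproduces linear grid fields at particle positions.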
NASA Astrophysics Data System (ADS)
Astitha, M.; Lelieveld, J.; Abdel Kader, M.; Pozzer, A.; de Meij, A.
2012-11-01
Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70-75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.
A New Proxy Electronic Voting Scheme Achieved by Six-Particle Entangled States
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Ding, Li-Yuan; Jiang, Xiu-Li; Li, Peng-Fei
2018-03-01
In this paper, we use a quantum proxy signature to construct a new secret electronic voting scheme. In our scheme, six-particle entangled states function as quantum channels. The voter Alice, the Vote Management Center Bob, and the scrutineer Charlie only perform two-particle measurements in the Bell basis to realize the electronic voting process, so the scheme reduces the technical difficulty and increases operational efficiency. We use quantum key distribution and the one-time pad to guarantee its unconditional security. The significant advantage of our scheme is that the transmitted information capacity is twice that of other schemes.
Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu
2017-01-01
In cognitive radio (CR) systems, reasonable power allocation can increase the transmission rate of CR users, or secondary users (SUs), as much as possible while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. This scheme is based on an improved artificial fish swarm (IAFS) algorithm, which combines the advantages of the conventional artificial fish swarm (AFS) algorithm and particle swarm optimisation (PSO). Simulations comparing the IAFS algorithm with other intelligent algorithms illustrate its superiority, which results in better performance of our proposed scheme than that of the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, our proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other mentioned works.
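As background for one of the two ingredients the IAFS algorithm combines, a minimal particle swarm optimisation loop looks like the following; the coefficients are common textbook choices, not those of the cited scheme, and the sketch minimises a generic objective rather than the power-allocation problem itself:

```python
import random

def pso_minimize(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best and
    the swarm's global best, with inertia w and attraction weights c1, c2."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # common textbook coefficients
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:      # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth convex objective such as the sphere function, this loop converges to the minimum within a modest number of iterations.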
The Impact of Microphysical Schemes on Hurricane Intensity and Track
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Shi, Jainn Jong; Chen, Shuyi S.; Lang, Stephen; Lin, Pay-Liam; Hong, Song-You; Peters-Lidard, Christa; Hou, Arthur
2011-01-01
During the past decade, both research and operational numerical weather prediction models [e.g., the Weather Research and Forecasting Model (WRF)] have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. WRF is a next-generation mesoscale forecast model and assimilation system. It incorporates a modern software framework, advanced dynamics, numerics and data assimilation techniques, a multiple moveable nesting capability, and improved physics packages. WRF can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options. At NASA Goddard, four different cloud microphysics options have been implemented into WRF. The performance of these schemes is compared to that of the other microphysics schemes available in WRF for an Atlantic hurricane case (Katrina). In addition, a brief review of previous modeling studies on the impact of microphysics schemes and processes on the intensity and track of hurricanes is presented and compared against the current Katrina study. In general, all of the studies show that microphysics schemes do not have a major impact on track forecasts but do have more of an effect on the simulated intensity. Also, nearly all of the previous studies found that simulated hurricanes deepened or intensified most strongly when using only warm rain physics. This is because all of the simulated precipitating hydrometeors are large raindrops that quickly fall out near the eye-wall region, which would hydrostatically produce the lowest pressure.
In addition, these studies suggested that intensities become unrealistically strong when evaporative cooling from cloud droplets and melting from ice particles are removed as this results in much weaker downdrafts in the simulated storms. However, there are many differences between the different modeling studies, which are identified and discussed.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shi, J.; Chen, S. S.
2007-01-01
Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models. Over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into a state-of-the-art next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) category with very fast fall speeds (over 10 m/s). For an Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).
ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations
NASA Astrophysics Data System (ADS)
Freitag, Marc Dewi
2013-02-01
ME(SSY)**2 stands for “Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring the most important physical processes, allows million-particle simulations, spanning a Hubble time, in a few CPU days on standard personal computers and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).
Electromagnetic gyrokinetic simulation in GTS
NASA Astrophysics Data System (ADS)
Ma, Chenhao; Wang, Weixing; Startsev, Edward; Lee, W. W.; Ethier, Stephane
2017-10-01
We report recent developments in electromagnetic simulations for general toroidal geometry based on the particle-in-cell gyrokinetic code GTS. Because of the cancellation problem, EM gyrokinetic simulation has numerical difficulties in the MHD limit, where k⊥ρ_i → 0 and/or β > m_e/m_i. Recently, several approaches have been developed to circumvent this problem: (1) a p∥ formulation with an analytical skin term iteratively approximated by simulation particles (Yang Chen); (2) a modified p∥ formulation with ∫ dt E∥ used in place of A∥ (Mishchenko); (3) a conservative scheme in which the electron density perturbation for the Poisson equation is calculated from an electron continuity equation (Bao); (4) a double-split-weight scheme with two weights, one for the Poisson equation and one for the time derivative of Ampere's law, each with different splits designed to remove large terms from the Vlasov equation (Startsev). These algorithms are being implemented into the GTS framework for general toroidal geometry. The performance of these different algorithms will be compared for various EM modes.
Zonal methods for the parallel execution of range-limited N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.
2007-01-20
Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and we introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
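The serial analogue of evaluating range-limited interactions is the cell-list method: bin particles into cells whose side equals the cutoff, so only the 27 neighbouring cells of each occupied cell need searching. The sketch below is illustrative of that basic idea only, not of the parallel zonal decompositions the paper introduces:

```python
from collections import defaultdict
from itertools import product

def pairs_within_cutoff(points, rc):
    """All index pairs (i, j), i < j, of 3-D points closer than rc,
    found with a cell list instead of an all-pairs O(N^2) scan."""
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        cells[tuple(int(c // rc) for c in p)].append(idx)  # bin into rc-cubes
    pairs = set()
    for cell, members in cells.items():
        for off in product((-1, 0, 1), repeat=3):          # 27 neighbour cells
            nbr = tuple(c + o for c, o in zip(cell, off))
            for i in members:
                for j in cells.get(nbr, ()):
                    if i < j:
                        d2 = sum((a - b) ** 2
                                 for a, b in zip(points[i], points[j]))
                        if d2 < rc * rc:
                            pairs.add((i, j))
    return pairs
```

For roughly uniform particle densities this reduces the pair search from O(N^2) to O(N), which is the starting point for reasoning about how much data each processor must import in the parallel setting.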
Nyx: Adaptive mesh, massively-parallel, cosmological simulation code
NASA Astrophysics Data System (ADS)
Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun
2017-12-01
The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using the Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
NASA Astrophysics Data System (ADS)
Mann, G. W.; Carslaw, K. S.; Spracklen, D. V.; Ridley, D. A.; Manktelow, P. T.; Chipperfield, M. P.; Pickering, S. J.; Johnson, C. E.
2010-10-01
A new version of the Global Model of Aerosol Processes (GLOMAP) is described, which uses a two-moment pseudo-modal aerosol dynamics approach rather than the original two-moment bin scheme. GLOMAP-mode simulates the multi-component global aerosol, resolving sulfate, sea-salt, dust, black carbon (BC) and particulate organic matter (POM), the latter including primary and biogenic secondary POM. Aerosol processes are simulated in a size-resolved manner, including primary emissions, secondary particle formation by binary homogeneous nucleation of sulfuric acid and water, particle growth by coagulation, condensation and cloud-processing, and removal by dry deposition and in-cloud and below-cloud scavenging. A series of benchmark observational datasets are assembled against which the skill of the model is assessed in terms of normalised mean bias (b) and correlation coefficient (R). Overall, the model performs well against the datasets in simulating concentrations of aerosol precursor gases, chemically speciated particle mass, condensation nuclei (CN) and cloud condensation nuclei (CCN). Surface sulfate, sea-salt and dust mass concentrations are all captured well, while BC and POM are biased low (but correlate well). Surface CN concentrations compare reasonably well at free-troposphere and marine sites, but are underestimated at continental and coastal sites, related to underestimation of either primary particle emissions or new particle formation. The model compares well against a compilation of CCN observations covering a range of environments and against vertical profiles of size-resolved particle concentrations over Europe. The simulated global burden, lifetime and wet removal of each of the simulated aerosol components are also examined, and each lies close to the multi-model medians from the AEROCOM model intercomparison exercise.
NASA Astrophysics Data System (ADS)
Jensen, A. A.; Harrington, J. Y.; Morrison, H.
2017-12-01
A quasi-idealized 3D squall line (based on a June 2007 Oklahoma case) is simulated using a novel bulk microphysics scheme called the Ice-Spheroids Habit Model with Aspect-ratio Evolution (ISHMAEL). In ISHMAEL, ice particle properties, such as mass, shape, maximum diameter, density, and fall speed, are tracked as they evolve through vapor growth, sublimation, riming, and melting. Thus, ice properties evolve through various microphysical processes without the need for separate unrimed and rimed ice categories. Simulation results show that ISHMAEL produces both a squall-line transition zone and an enhanced stratiform precipitation region. The ice particle properties produced in this simulation are analyzed and compared to observations to determine the characteristics of ice that lead to the development of these squall-line features. It is shown that rimed particles advected rearward from the convective region produce the enhanced stratiform precipitation region. The development of the transition zone results from hydrometeor sorting: the evolution of ice particle properties in the convective region produces specific fall speeds that favor significant ice advecting rearward of the transition zone before reaching the melting level, causing a local minimum in precipitation rate and reflectivity there. Microphysical sensitivity studies that change ice particle properties (for example, turning rime splintering off) reveal that the fall speed of ice particles largely determines both the location of the enhanced stratiform precipitation region and whether or not a transition zone forms.
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2016-03-01
In industry, particle-laden fluids, such as particle-functionalized inks, are constructed by adding fine-scale particles to a liquid solution in order to achieve desired overall properties in both the liquid and (cured) solid states. However, undesirable particulate agglomerations often arise due to some form of mutual attraction stemming from near-field forces, stray electrostatic charges, process ionization and mechanical adhesion. For proper operation of industrial processes involving particle-laden fluids, it is important to carefully break up and disperse these agglomerations. One approach is to target high-frequency acoustical pressure pulses to break up such agglomerations. The objective of this paper is to develop a computational model and corresponding solution algorithm to enable rapid simulation of the effect of acoustical pulses on an agglomeration composed of a collection of discrete particles. Because of the complex agglomeration microstructure, containing gaps and interfaces, this type of system is extremely difficult to mesh and simulate using continuum-based methods, such as the finite-difference time-domain method or the finite element method. Accordingly, a computationally amenable discrete element/discrete ray model is developed which captures the primary physical events in this process, such as the reflection and absorption of acoustical energy and the induced forces on the particulate microstructure. The approach utilizes a staggered, iterative solution scheme to calculate the power transfer from the acoustical pulse to the particles and the subsequent changes (breakup) of the pulse due to the particles. Three-dimensional examples are provided to illustrate the approach.
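A minimal discrete-element step with a linear-spring contact force conveys the particle side of such a model; this toy sketch (unit masses, hypothetical stiffness `k`) stands in for, but does not reproduce, the paper's discrete element/discrete ray model:

```python
import numpy as np

def dem_step(pos, vel, dt, radius, k=100.0):
    """One explicit step of a toy discrete-element model: overlapping
    equal-radius, unit-mass particles repel along their line of centers
    with a linear spring of stiffness k."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2.0 * radius - dist
            if overlap > 0.0 and dist > 0.0:
                fn = k * overlap * d / dist   # normal repulsive contact force
                force[i] -= fn
                force[j] += fn
    vel = vel + dt * force                    # unit mass: a = F
    pos = pos + dt * vel                      # symplectic-Euler update
    return pos, vel
```

Starting two overlapping particles at rest, one step pushes them apart along the line of centers, which is the elementary mechanism by which an agglomeration breaks up under an applied disturbance.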
NASA Astrophysics Data System (ADS)
Guo, Li M.; Shu, T.; Li, Zhi Q.; Ju, Jin C.
2017-12-01
The compactness and miniaturization of high-power-microwave (HPM) systems are drawing more and more attention. Based on this demand, HPM generators without a guiding magnetic field are being developed. This paper presents an X-band Cherenkov-type HPM oscillator without a guiding magnetic field. In particle-in-cell simulations, this oscillator achieves an efficiency of 40%. When the diode voltage and current are 620 kV and 9.0 kA, respectively, a TEM-mode microwave is generated with a power of 2.2 GW and a frequency of 9.1 GHz. In this oscillator, electrons are modulated in both the longitudinal and radial directions, and the radial modulation has a significant effect on the energy conversion efficiency. As analyzed in this paper, the different radial modulation effects depend on the phase-matching differences between the microwave and the electrons. The modified simulation scheme achieves a structure with efficient longitudinal beam-wave interaction and optimized radial modulation.
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allow increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
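The sinc kernel family mentioned in the abstract can be written down directly; the unnormalised form below uses the usual compact support [0, 2] in the scaled distance q = r/h (normalisation constants, which depend on the exponent and dimension, are omitted):

```python
import math

def sinc_kernel(q, n):
    """Unnormalised sinc-family SPH kernel S_n(q) = sinc(pi*q/2)**n on
    0 <= q < 2, zero outside.  Raising the single exponent n sharpens the
    kernel, which is the one-parameter knob the equalization scheme turns."""
    if q >= 2.0:
        return 0.0       # compact support
    if q == 0.0:
        return 1.0       # sinc(0) = 1
    x = math.pi * q / 2.0
    return (math.sin(x) / x) ** n
```

The kernel peaks at q = 0, vanishes smoothly at q = 2, and decreases pointwise (for 0 < q < 2) as n grows, so varying n alone trades support weight between near and far neighbours.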
NASA Astrophysics Data System (ADS)
Chen, Y. H.; Kuo, C. P.; Huang, X.; Yang, P.
2017-12-01
Clouds play an important role in the Earth's radiation budget, and thus realistic and comprehensive treatments of cloud optical properties and cloudy-sky radiative transfer are crucial for simulating weather and climate. However, most GCMs neglect longwave (LW) scattering effects by clouds and tend to use inconsistent cloud shortwave (SW) and LW optical parameterizations. Recently, co-authors of this study developed a new LW optical properties parameterization for ice clouds, based on ice cloud particle statistics from MODIS measurements and state-of-the-art scattering calculations. A two-stream multiple-scattering scheme has also been implemented in RRTMG_LW, a longwave radiation scheme widely used by climate modeling centers. This study integrates both the new LW cloud-radiation scheme for ice clouds and the modified RRTMG_LW with scattering capability into the NCAR CESM to improve the cloud longwave radiation treatment. A number of single-column model (SCM) simulations using observations from the ARM SGP site from July 18 to August 4, 1995 are carried out to assess the impact of the new LW optical properties of clouds and the scattering-enabled radiation scheme on the simulated radiation budget and cloud radiative effect (CRE). The SCM simulation allows the cloud and radiation schemes to interact with the other parameterizations, while the large-scale forcing is prescribed or nudged. Compared to the results from the SCM of the standard CESM, the new ice cloud optical properties alone lead to an increase of the LW CRE by 26.85 W m-2 on average, as well as an increase of the downward LW flux at the surface by 6.48 W m-2. Enabling LW cloud scattering further increases the LW CRE by another 3.57 W m-2 and the downward LW flux at the surface by 0.2 W m-2. The change in LW CRE is mainly due to an increase of cloud top height. A long-term simulation of CESM will be carried out to further understand the impact of such changes on simulated climates.
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." - Albert Einstein. The field of parallel scientific computing has concentrated on the parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory, and cache utilization. Our test-bed, a KSR1, is a distributed-memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks have been included for both this scheme and the traditional replicated-grids approach. The latter approach maintains load balance with respect to particles. However, our results demonstrate that it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies become significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load-balancing schemes for non-uniform particle distributions. Our dual-pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid points within the same cache line by reordering the grid indexing. This alignment produces a 25% savings in cache hits for a 4-by-4 cache.
A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters that lead to predictable phenomena, including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
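The idea of reordering the grid indexing so that spatial neighbors share cache lines can be sketched with a Morton (Z-order) curve, a standard choice for such hierarchical layouts. The thesis does not specify this particular curve, so treat it as an illustrative assumption:

```python
def part1by1(n):
    """Spread the low 16 bits of n so they occupy the even bit positions."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton(ix, iy):
    """Morton (Z-order) index of grid point (ix, iy): interleave the bits
    of the two coordinates so nearby points get nearby array indices."""
    return part1by1(ix) | (part1by1(iy) << 1)
```

With this indexing, the four points of any aligned 2-by-2 block occupy four consecutive array slots, so a single cache line can serve a particle's nearest-neighbor stencil instead of two widely separated grid rows.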
3D Lagrangian VPM: simulations of the near-wake of an actuator disc and horizontal axis wind turbine
NASA Astrophysics Data System (ADS)
Berdowski, T.; Ferreira, C.; Walther, J.
2016-09-01
The application of a three-dimensional Lagrangian vortex particle method has been assessed for modelling the near-wake of an axisymmetric actuator disc and a 3-bladed horizontal axis wind turbine with prescribed circulation from the MEXICO (Model EXperiments In COntrolled conditions) experiment. The method was developed in the framework of the open-source Parallel Particle-Mesh library for handling efficient data-parallelism on a CPU (Central Processing Unit) cluster, and utilized an O(N log N)-type fast multipole method for computational acceleration. Simulations with the actuator disc resulted in a wake expansion, velocity deficit profile, and induction factor that showed close agreement with theoretical, numerical, and experimental results from the literature. The shear-layer expansion was also present; the Kelvin-Helmholtz instability in the shear layer was triggered by the round-off limitations of the numerical method, but this instability was delayed to beyond one diameter downstream by the particle smoothing. Simulations with the 3-bladed turbine demonstrated that a purely three-dimensional flow representation is challenging to model with particles. The manifestation of local complex flow structures of highly stretched vortices made the simulation unstable, but this was successfully counteracted by the application of a particle strength exchange scheme. The axial and radial velocity profiles over the near wake have been compared to those of the original MEXICO experiment, showing close agreement between results.
Particle separation by phase modulated surface acoustic waves.
Simon, Gergely; Andrade, Marco A B; Reboud, Julien; Marques-Hueso, Jose; Desmulliez, Marc P Y; Cooper, Jonathan M; Riehle, Mathis O; Bernassau, Anne L
2017-09-01
High-efficiency isolation of cells or particles from a heterogeneous mixture is a critical processing step in lab-on-a-chip devices. Acoustic techniques offer contactless and label-free manipulation, preserve the viability of biological cells, and provide versatility, as the applied electrical signal can be adapted to various scenarios. Conventional acoustic separation methods use time-of-flight and achieve separation over distances of up to a quarter wavelength, with limited separation power due to slow gradients in the force. The method proposed here allows separation by half of the wavelength, can be extended by repeating the modulation pattern, and ensures maximum force acting on the particles. In this work, we propose an optimised phase modulation scheme for particle separation in a surface acoustic wave microfluidic device. An expression for the acoustic radiation force arising from the interaction between acoustic waves in the fluid was derived. We demonstrated, for the first time, that the expression for the acoustic radiation force differs between surface acoustic wave and bulk devices, due to the presence of a geometric scaling factor. Two phase modulation schemes are investigated theoretically and experimentally. Theoretical findings were experimentally validated for different mixtures of polystyrene particles, confirming that the method offers high selectivity. A Monte Carlo simulation enabled us to assess performance in realistic situations, including the effects of particle size variation and a non-uniform acoustic field on sorting efficiency and purity, validating the ability to separate particles with high purity and high resolution.
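As a toy illustration of why a standing-wave radiation force collects particles at fixed points, and why shifting the phase relocates those points, consider overdamped motion in a force F ~ sin(2kx - phase). The unit force amplitude, unit mobility, and function name are our own illustrative choices, not quantities from the paper:

```python
import math

def relax_to_trap(x0, phase=0.0, wavelength=1.0, steps=20000, dt=1e-3):
    """Overdamped particle in a standing-wave radiation force
    F ~ sin(2*k*x - phase), integrated with forward Euler."""
    k = 2.0 * math.pi / wavelength
    x = x0
    for _ in range(steps):
        x += dt * math.sin(2.0 * k * x - phase)  # dx/dt = mobility * F, mobility = 1
    return x
```

With zero phase the stable equilibrium sits at a quarter wavelength; a static phase shift of pi relocates it to the origin. In the paper's scheme the phase is switched in time, stepping the trapping positions across the channel so that particles of different sizes can be separated.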
Numerical simulation of the hydrodynamic instabilities of Richtmyer-Meshkov and Rayleigh-Taylor
NASA Astrophysics Data System (ADS)
Fortova, S. V.; Shepelev, V. V.; Troshkin, O. V.; Kozlov, S. A.
2017-09-01
The paper presents the results of numerical simulation of the development of the Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities encountered in experiments [1-3]. For the numerical solution, the TPS (Turbulence Problem Solver) software package was used, which implements a generalized approach to constructing computer programs for a wide range of hydrodynamics problems described by systems of equations of hyperbolic type. The numerical methods used are the large-particle method and a second-order ENO scheme with a Roe solver for the approximate solution of the Riemann problem.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.
2011-01-01
Increases in computing resources have allowed for the utilization of high-resolution weather forecast models capable of resolving cloud microphysical and precipitation processes among varying numbers of hydrometeor categories. Several microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, ranging from single-moment predictions of precipitation content to double-moment predictions that also include particle number concentrations. Each scheme incorporates several assumptions related to the size distribution, shape, and fall speed relationships of ice crystals in order to simulate cold-cloud processes and the resulting precipitation. Field campaign data offer a means of evaluating the assumptions present within each scheme. The Canadian CloudSat/CALIPSO Validation Project (C3VP) represented a collaboration among the CloudSat, CALIPSO, and NASA Global Precipitation Measurement mission communities to observe cold-season precipitation processes relevant to forecast model evaluation and the eventual development of satellite retrievals of cloud properties and precipitation rates. During the C3VP campaign, widespread snowfall occurred on 22 January 2007, sampled by aircraft and surface instrumentation that provided particle size distributions, ice water content, and fall speed estimates along with traditional surface measurements of temperature and precipitation. In this study, four single-moment and two double-moment microphysics schemes were utilized to generate WRF forecasts of the event, with C3VP data used to evaluate their varying assumptions. Schemes that incorporate flexibility in size distribution parameters and density assumptions are shown to be preferable to those with fixed constants, and a double-moment representation of the snow category may be beneficial when representing the effects of aggregation.
These results may guide forecast centers toward optimal configurations of their forecast models for winter weather and identify best practices present within these various schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gatsonis, Nikolaos A.; Spirkin, Anton
2009-06-01
The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the Delaunay-Voronoi duality in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zeroth and first order are formulated based on the theory of long-range constraints. Electric potentials and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles per cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low-transparency micro-grid.
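On tetrahedral cells, a natural first-order charge-assignment weight for each vertex is the barycentric coordinate of the particle with respect to that vertex. A minimal sketch (our own helper, not code from the paper):

```python
import numpy as np

def barycentric_weights(p, tet):
    """First-order (linear) charge-assignment weights of point p inside a
    tetrahedron with vertices tet (4 points): its barycentric coordinates."""
    tet = np.asarray(tet, float)
    # Solve p - v0 = T @ lam for the last three barycentric coordinates.
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    lam = np.linalg.solve(T, np.asarray(p, float) - tet[0])
    return np.concatenate([[1.0 - lam.sum()], lam])
```

The weights sum to one, so depositing charge with them conserves total charge; the same weights interpolate the cell's vertex fields back to the particle position.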
Photodetachment and Doppler laser cooling of anionic molecules
NASA Astrophysics Data System (ADS)
Gerber, Sebastian; Fesel, Julian; Doser, Michael; Comparat, Daniel
2018-02-01
We propose to extend laser-cooling techniques, so far only achieved for neutral molecules, to molecular anions. A detailed computational study is performed for C2- molecules stored in Penning traps using GPU-based Monte Carlo simulations. Two cooling schemes, Doppler laser cooling and photodetachment cooling, are investigated. The sympathetic cooling of antiprotons is studied for the Doppler cooling scheme, where it is shown that cooling of antiprotons to sub-Kelvin temperatures could become feasible, with impact on the field of antimatter physics. The presented cooling schemes also have applications for the generation of cold, negatively charged particle sources and for the sympathetic cooling of other molecular anions.
Realistic dust and water cycles in the MarsWRF GCM using coupled two-moment microphysics
NASA Astrophysics Data System (ADS)
Lee, Christopher; Richardson, Mark Ian; Mischna, Michael A.; Newman, Claire E.
2017-10-01
Dust and water ice aerosols significantly complicate the Martian climate system because the evolution of the two aerosol fields is coupled through microphysics and because both aerosols strongly interact with visible and thermal radiation. The combination of strong forcing feedback and coupling has led to various problems in the understanding and modeling of the Martian climate: in reconciling cloud abundances at different locations in the atmosphere, in generating a stable dust cycle, and in preventing numerical instability within models. Using a new microphysics model inside the MarsWRF GCM, we show that fully coupled simulations produce a more realistic simulation of the Martian climate system than dry, dust-only simulations. In the coupled simulations, interannual and intra-annual variability are increased, strong 'solstitial pause' features are produced in both winter high-latitude regions, and dust storm seasons are more varied, with early southern summer (Ls 180) dust storms and/or more than one storm occurring in some seasons. A new microphysics scheme was developed as a part of this work and has been included in the MarsWRF model. The scheme uses split spectral/spatial size distribution numerics with adaptive bin sizes to track particle size evolution. Significantly, this scheme is highly accurate, numerically stable, and capable of running with time steps commensurate with those of the parent atmospheric model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gettelman, A.; Liu, Xiaohong; Ghan, Steven J.
2010-09-28
A process-based treatment of ice supersaturation and ice nucleation is implemented in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). The new scheme is designed to allow (1) supersaturation with respect to ice, (2) ice nucleation by aerosol particles, and (3) ice cloud cover consistent with ice microphysics. The scheme is implemented with a 4-class, 2-moment microphysics code and is used to evaluate ice cloud nucleation mechanisms and supersaturation in CAM. The new model is able to reproduce field observations of ice mass and mixed-phase cloud occurrence better than previous versions of the model. Simulations indicate that heterogeneous freezing and contact nucleation on dust are both potentially important over remote areas of the Arctic. Cloud forcing, and hence climate, is sensitive to different formulations of the ice microphysics. Arctic radiative fluxes are sensitive to the parameterization of ice clouds. These results indicate that ice clouds are potentially an important part of understanding cloud forcing and potential cloud feedbacks, particularly in the Arctic.
Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes
NASA Technical Reports Server (NTRS)
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-01-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach to climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to rigorously evaluate and improve microphysics schemes are demonstrated here. Four one-moment microphysical schemes are implemented in the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fraction, net radiation, and total cloud radiative forcing than the earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
Generation of high-field narrowband terahertz radiation by counterpropagating plasma wakefields
NASA Astrophysics Data System (ADS)
Timofeev, I. V.; Annenkov, V. V.; Volchok, E. P.
2017-10-01
It is found that the nonlinear interaction of plasma wakefields driven by counterpropagating laser or particle beams can efficiently generate high-power electromagnetic radiation at the second harmonic of the plasma frequency. Using a simple analytical theory and particle-in-cell simulations, we show that this phenomenon is attractive for producing high-field (˜10 MV/cm) tunable terahertz radiation with a narrow linewidth. For laser drivers produced by existing petawatt-class systems, this nonlinear process opens the way to the generation of gigawatt, multi-millijoule terahertz pulses that are not presently available from any other generation scheme.
Cell-veto Monte Carlo algorithm for long-range systems.
Kapfer, Sebastian C; Krauth, Werner
2016-09-01
We present a rigorous, efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take periodic boundary conditions into account. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
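The factorized Metropolis filter underlying this scheme accepts a move only if every pairwise factor accepts it independently, which is what makes a per-factor (cell) veto possible. A minimal sketch, with our own function name and interface:

```python
import math
import random

def factorized_metropolis_accept(dE_pairs, beta=1.0, rng=random.random):
    """Factorized Metropolis filter: a proposed move is accepted only if
    each pairwise energy change dE is accepted independently, each with
    probability min(1, exp(-beta * dE))."""
    return all(rng() < math.exp(-beta * max(0.0, dE)) for dE in dE_pairs)
```

Because the decision factorizes, a veto from any single pair rejects the whole move, so the algorithm only ever needs to locate the first vetoing pair; the cell-veto construction bounds that search with a fixed number of operations per move.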
Methods for High-Order Multi-Scale and Stochastic Problems: Analysis, Algorithms, and Applications
2016-10-17
finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large-scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks
An Investigation into Solution Verification for CFD-DEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fullmer, William D.; Musser, Jordan
This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically the National Energy Technology Laboratory's (NETL) open-source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, this study focused on determining whether the numerical method has a solution in the high-resolution limit, where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than that encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on top of globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty.
By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be of the same order of magnitude as the ensemble or time-averaging uncertainties. By testing different drag laws, almost all cases studied showed that the model form uncertainty in this one very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly held constant at six particle diameters. A few exploratory tests were performed to show that similar convergence behavior was observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail, because determining an appropriate filter size is really a validation question that must be settled by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
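The regression-based extrapolation described above can be sketched as follows: assume the response behaves as phi(h) ~ phi0 + a*h^p, scan candidate orders p, and solve the remaining linear fit by ordinary least squares. The function name, order grid, and interface are our own; the report does not specify its exact regression form:

```python
def grid_free_estimate(h, phi, orders=None):
    """Least-squares extrapolation of grid-dependent results phi at grid
    sizes h to an approximate grid-free value phi0, assuming
    phi(h) = phi0 + a * h**p. Returns (phi0, observed order p)."""
    if orders is None:
        orders = [0.5 + 0.01 * i for i in range(351)]  # scan p in [0.5, 4.0]
    n = len(h)
    best_sse, best_phi0, best_p = float("inf"), None, None
    for p in orders:
        g = [hi ** p for hi in h]          # regressor for this candidate order
        gbar, pbar = sum(g) / n, sum(phi) / n
        sgg = sum((gi - gbar) ** 2 for gi in g)
        sgp = sum((gi - gbar) * (fi - pbar) for gi, fi in zip(g, phi))
        b = sgp / sgg                      # slope a of the linear subproblem
        a = pbar - b * gbar                # intercept = grid-free estimate
        sse = sum((fi - (a + b * gi)) ** 2 for gi, fi in zip(g, phi))
        if sse < best_sse:
            best_sse, best_phi0, best_p = sse, a, p
    return best_phi0, best_p
```

With four or more resolutions the fit is over-determined, so the residual also indicates whether the assumed asymptotic model is credible, which matters given the oscillatory convergence reported above.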
The Impact of Microphysical Schemes on Intensity and Track of Hurricane
NASA Technical Reports Server (NTRS)
Tao, W. K.; Shi, J. J.; Chen, S. S.; Lang, S.; Lin, P.; Hong, S. Y.; Peters-Lidard, C.; Hou, A.
2010-01-01
During the past decade, both research and operational numerical weather prediction models [e.g., the Weather Research and Forecasting model (WRF)] have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. The WRF is a next-generation mesoscale forecast model and assimilation system that has incorporated a modern software framework, advanced dynamics, numerics and data assimilation techniques, a multiple moveable nesting capability, and improved physics packages. The WRF model can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options. At Goddard, four different cloud microphysics schemes (warm rain only, two-class ice, and two three-class ice schemes with either graupel or hail) have been implemented in the WRF. The performance of these schemes has been compared to that of other WRF microphysics options for an Atlantic hurricane case. In addition, a brief review and comparison of previous modeling studies on the impact of microphysics schemes and microphysical processes on hurricane intensity and track is presented. Generally, almost all modeling studies found that the microphysics schemes did not have a major impact on the track forecast, but had more effect on the intensity. All modeling studies found that the simulated hurricane undergoes rapid deepening and/or intensification in the warm rain-only case. This is because all hydrometeors are very large raindrops that fall out quickly at and near the eye-wall region, which hydrostatically produces the lowest pressure. In addition, these modeling studies suggested that the simulated hurricane becomes unrealistically strong when the evaporative cooling of cloud droplets and the melting of ice particles are removed.
This is due to the much weaker simulated downdrafts. However, there are many differences among the modeling studies, and these differences are identified and discussed.
On-chip particle trapping and manipulation
NASA Astrophysics Data System (ADS)
Leake, Kaelyn Danielle
The ability to control and manipulate the world around us is human nature. Humans and our ancestors have used tools for millions of years, but only in recent years have we been able to control objects at such small scales. In order to understand the world around us, it is frequently necessary to interact with the biological world. Optical trapping and manipulation offer a non-invasive way to move, sort, and interact with particles and cells to see how they react to their surroundings. Optical tweezers are ideal in their abilities, but they require large, non-portable, and expensive setups, limiting how and where we can use them. A cheap, portable platform is required for optical manipulation to reach its full potential. On-chip technology offers a great solution to this challenge. We focused on the Liquid-Core Anti-Resonant Reflecting Optical Waveguide (liquid-core ARROW) for our work. The ARROW is an ideal platform: its anti-resonant layers allow light to be guided in liquids, so that particles can easily be manipulated. It is manufactured using standard silicon manufacturing techniques, making it easy to produce, and its planar design makes it easy to integrate with other technologies. Initially, I worked to improve the ARROW chip by reducing the intersection losses and by reducing the fluorescence and background on the ARROW chip. The ARROW chip has already been used to trap and push particles along its channel, but here I introduce several new methods of particle trapping and manipulation on the ARROW chip. Traditional two-beam traps use two counter-propagating beams. A trapping scheme is introduced that uses two orthogonal beams which, counter to first instinct, allow trapping at their intersection. This scheme is thoroughly predicted and analyzed under realistic conditions. Simulations of this method were done using a program that accounts for both the fluidic and optical sources to model complex situations.
These simulations were also used to model and predict a sorting method that combines fluid flow with a single optical source to automatically sort dielectric particles by size in waveguide networks. These simulations were shown to be accurate when repeated on-chip. Lastly, I introduce a particle trapping technique that uses Multimode Interference (MMI) patterns to trap multiple particles at once. The locations of the traps can be adjusted, as can the number of trapping locations, by changing the input wavelength. By switching the wavelength back and forth between two values, the MMI can be used to pass a particle down the channel like a conveyor belt.
Santa Barbara Cluster Comparison Test with DISPH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saitoh, Takayuki R.; Makino, Junichiro, E-mail: saitoh@elsi.jp
2016-06-01
The Santa Barbara cluster comparison project revealed that there is a systematic difference between the entropy profiles of clusters of galaxies obtained with Eulerian mesh codes and with Lagrangian smoothed particle hydrodynamics (SPH) codes: mesh codes gave a core with a constant entropy, whereas SPH codes did not. One possible reason for this difference is that mesh codes are not Galilean invariant. Another possible reason is a problem of the SPH method, which might give too much “protection” to cold clumps because of the unphysical surface tension induced at contact discontinuities. In this paper, we apply the density-independent formulation of SPH (DISPH), which can handle contact discontinuities accurately, to simulations of a cluster of galaxies and compare the results with those of standard SPH. We obtained the entropy core when we adopted DISPH. The size of the core is, however, significantly smaller than those obtained with mesh simulations, and is comparable to those obtained with quasi-Lagrangian schemes such as “moving mesh” and “mesh free” schemes. We conclude that both standard SPH without artificial conductivity and Eulerian mesh codes have serious problems even with such an idealized simulation, while DISPH, SPH with artificial conductivity, and quasi-Lagrangian schemes have sufficient capability to deal with it.
Simulation study on beam loss in the alpha bucket regime during SIS-100 proton operation
NASA Astrophysics Data System (ADS)
Sorge, S.
2018-02-01
Crossing the transition energy γt in synchrotrons with high-intensity proton beams requires well-tuned jump schemes and is usually accompanied by longitudinal emittance growth. To avoid γt crossing during proton operation in the projected SIS-100 synchrotron, special high-γt lattice settings have been developed that keep γt above the beam extraction energy. A further advantage of this scheme is the formation of alpha buckets, which naturally lead to short proton bunches, as required for the foreseen production and storage of antiprotons for the FAIR facility. Special attention is paid to the imperfections of the superconducting SIS-100 magnets because, together with the high-γt lattice settings, they could potentially lead to enhanced beam loss. The aim of the present work is to estimate this beam loss by means of particle tracking simulations.
A Particle Module for the PLUTO Code. I. An Implementation of the MHD–PIC Equations
NASA Astrophysics Data System (ADS)
Mignone, A.; Bodo, G.; Vaidya, B.; Mattia, G.
2018-05-01
We describe an implementation of a particle physics module available for the PLUTO code appropriate for the dynamical evolution of a plasma consisting of a thermal fluid and a nonthermal component represented by relativistic charged particles or cosmic rays (CRs). While the fluid is approached using standard numerical schemes for magnetohydrodynamics, CR particles are treated kinetically using conventional Particle-In-Cell (PIC) techniques. The module can be used either to describe test-particle motion in the fluid electromagnetic field or to solve the fully coupled magnetohydrodynamics (MHD)–PIC system of equations with particle backreaction on the fluid as originally introduced by Bai et al. Particle backreaction on the fluid is included in the form of momentum–energy feedback and by introducing the CR-induced Hall term in Ohm’s law. The hybrid MHD–PIC module can be employed to study CR kinetic effects on scales larger than the (ion) skin depth provided that the Larmor gyration scale is properly resolved. When applicable, this formulation avoids resolving microscopic scales, offering substantial computational savings with respect to PIC simulations. We present a fully conservative formulation that is second-order accurate in time and space, and extends to either the Runge–Kutta (RK) or the corner transport upwind time-stepping schemes (for the fluid), while a standard Boris integrator is employed for the particles. For highly energetic relativistic CRs and in order to overcome the time-step restriction, a novel subcycling strategy that retains second-order accuracy in time is presented. Numerical benchmarks and applications including Bell instability, diffusive shock acceleration, and test-particle acceleration in reconnecting layers are discussed.
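The Boris integrator mentioned above is a standard, well-documented particle pusher. A minimal non-relativistic sketch in Python (an illustration only, not PLUTO's actual implementation, which must also handle relativistic CRs and code units) might look like:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One step of the (non-relativistic) Boris particle push.
    x, v : 3-vectors (position, velocity); E, B : fields at the particle;
    q_m : charge-to-mass ratio; dt : time step."""
    # Half acceleration by the electric field
    v_minus = v + 0.5 * q_m * dt * E
    # Rotation about the magnetic field
    t = 0.5 * q_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration, then position drift
    v_new = v_plus + 0.5 * q_m * dt * E
    x_new = x + v_new * dt
    return x_new, v_new
```

With E = 0, the rotation step conserves |v| exactly, which is the property that makes the Boris scheme attractive for long, gyration-resolving integrations.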
Gas stripping and mixing in galaxy clusters: a numerical comparison study
NASA Astrophysics Data System (ADS)
Heß, Steffen; Springel, Volker
2012-11-01
The ambient hot intrahalo gas in clusters of galaxies is constantly fed and stirred by infalling galaxies, a process that can be studied in detail with cosmological hydrodynamical simulations. However, different numerical methods yield discrepant predictions for crucial hydrodynamical processes, leading for example to different entropy profiles in clusters of galaxies. In particular, the widely used Lagrangian smoothed particle hydrodynamics (SPH) scheme is suspected to strongly damp fluid instabilities and turbulence, both of which are crucial for establishing the thermodynamic structure of clusters. In this study, we test to what extent our recently developed Voronoi particle hydrodynamics (VPH) scheme yields different results for the stripping of gas out of infalling galaxies and for the bulk gas properties of clusters. We consider the evolution of isolated galaxy models that are exposed to a stream of intracluster medium or are dropped into cluster models, as well as non-radiative cosmological simulations of cluster formation. We also compare our particle-based method with results obtained with a fundamentally different discretization approach as implemented in the moving-mesh code AREPO. We find that VPH leads to noticeably faster stripping of gas out of galaxies than SPH, in better agreement with the mesh code than with SPH. Although VPH in its present form is not as accurate as the moving-mesh code in the cases we investigated, its improved accuracy of gradient estimates makes VPH an attractive alternative to SPH.
NASA Astrophysics Data System (ADS)
Zhang, W. L.; Qiao, B.; Shen, X. F.; You, W. Y.; Huang, T. W.; Yan, X. Q.; Wu, S. Z.; Zhou, C. T.; He, X. T.
2016-09-01
Laser-driven ion acceleration potentially offers a compact, cost-effective alternative to conventional accelerators for scientific, technological, and health-care applications. A novel scheme for heavy ion acceleration in near-critical plasmas via staged shock waves driven by intense laser pulses is proposed, in which a light ion layer placed in front of the heavy ion target is used to launch a high-speed electrostatic shock wave. This shock is enhanced at the interface before it is transmitted into the heavy ion plasma. A monoenergetic heavy ion beam with much higher energy can be generated by the transmitted shock, compared to shock wave acceleration in a pure heavy ion target. Two-dimensional particle-in-cell simulations show that quasi-monoenergetic C6+ ion beams with a peak energy of 168 MeV and a considerable particle number of 2.1 × 10^11 are obtained with laser pulses at an intensity of 1.66 × 10^20 W cm^-2 in this staged shock wave acceleration scheme. Similarly, a high-quality Al10+ ion beam with a well-defined peak at 250 MeV and an energy spread of δE/E0 = 30% can also be obtained in this scheme.
Can Condensing Organic Aerosols Lead to Less Cloud Particles?
NASA Astrophysics Data System (ADS)
Gao, C. Y.; Tsigaridis, K.; Bauer, S.
2017-12-01
We examined the impact of condensing organic aerosols on the activated cloud number concentration in a new aerosol microphysics box model, MATRIX-VBS. The model includes the volatility basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. Preliminary results show that by including the condensation of organic aerosols, the new model (MATRIX-VBS) produces fewer activated particles than the original model (MATRIX), which treats organic aerosols as non-volatile. Parameters that affect the activated cloud number concentration, such as aerosol chemical composition, mass and number concentrations, and particle sizes, are thoroughly evaluated via a suite of Monte Carlo simulations. The Monte Carlo simulations also provide information on which climate-relevant parameters play a critical role in aerosol evolution in the atmosphere. This study also helps simplify the newly developed box model, which will soon be implemented as a module in the global model GISS ModelE.
GPU accelerated particle visualization with Splotch
NASA Astrophysics Data System (ADS)
Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.
2014-07-01
Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are the production of high-quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in redesigning Splotch to exploit emerging HPC architectures, which are increasingly populated with GPUs. A performance model is introduced to guide our refactoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work, including possibilities for further optimizations and the exploitation of hybrid systems and emerging accelerators.
Impact of anthropogenic aerosols on regional climate change in Beijing, China
NASA Astrophysics Data System (ADS)
Zhao, B.; Liou, K. N.; He, C.; Lee, W. L.; Gu, Y.; Li, Q.; Leung, L. R.
2015-12-01
Anthropogenic aerosols affect regional climate significantly through radiative (direct and semi-direct) and indirect effects, but the magnitude of these effects over megacities is subject to large uncertainty. In this study, we evaluated the effects of anthropogenic aerosols on regional climate change in Beijing, China, using the online-coupled Weather Research and Forecasting/Chemistry model (WRF/Chem) with the Fu-Liou-Gu radiation scheme and a spatial resolution of 4 km. We further updated this radiation scheme with a geometric-optics surface-wave (GOS) approach for the computation of light absorption and scattering by black carbon (BC) particles, in which aggregation shape and internal mixing properties are accounted for. In addition, we incorporated in WRF/Chem a 3D radiative transfer parameterization, in conjunction with high-resolution digital data for city buildings and landscape, to improve the simulation of the boundary layer, surface solar fluxes, and the associated sensible/latent heat fluxes. Preliminary simulated meteorological parameters, fine particles (PM2.5), and their chemical components agree well with observational data in terms of both magnitude and spatio-temporal variations. The effects of anthropogenic aerosols, including BC, on radiative forcing, surface temperature, wind speed, humidity, cloud water path, and precipitation are quantified on the basis of the simulation results. With several preliminary sensitivity runs, we found that meteorological parameters and aerosol radiative effects simulated with the improved BC absorption and 3D radiation parameterizations deviate substantially from results obtained using the conventional homogeneous/core-shell configuration for BC and the plane-parallel model for radiative transfer. Understanding the aerosol effects on regional climate change over megacities therefore requires consideration of the complex shape and mixing state of aerosol aggregates and of 3D radiative transfer effects over the city landscape.
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. The method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position as the one that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
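The MLE idea can be illustrated with a deliberately simplified, hypothetical measurement model: each detection gate's expected photon count is a Gaussian function of the axial position z, and observed counts are Poisson distributed (the real TSUNAMI point-spread functions and temporal multiplexing are more involved). A grid-search estimator then reads:

```python
import numpy as np

def mle_z(counts, centers, sigma, amp, grid):
    """Grid-search MLE for axial position z (toy sketch).
    counts[i]: photons observed in gate i; the expected rate of gate i
    is a Gaussian of z centered at centers[i] (hypothetical model)."""
    best, best_ll = None, -np.inf
    for z in grid:
        lam = amp * np.exp(-0.5 * ((z - centers) / sigma) ** 2) + 1e-12
        # Poisson log-likelihood (constant terms dropped)
        ll = np.sum(counts * np.log(lam) - lam)
        if ll > best_ll:
            best, best_ll = z, ll
    return best
```

Feeding the estimator the expected counts at a true position recovers that position to within the grid resolution; with noisy Poisson draws it returns the position most consistent with the observations.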
Martinez-Pedrero, Fernando; Massana-Cid, Helena; Ziegler, Till; Johansen, Tom H; Straube, Arthur V; Tierno, Pietro
2016-09-29
We demonstrate a size sensitive experimental scheme which enables bidirectional transport and fractionation of paramagnetic colloids in a fluid medium. It is shown that two types of magnetic colloidal particles with different sizes can be simultaneously transported in opposite directions, when deposited above a stripe-patterned ferrite garnet film subjected to a square-wave magnetic modulation. Due to their different sizes, the particles are located at distinct elevations above the surface, and they experience two different energy landscapes, generated by the modulated magnetic substrate. By combining theoretical arguments and numerical simulations, we reveal such energy landscapes, which fully explain the bidirectional transport mechanism. The proposed technique does not require pre-imposed channel geometries such as in conventional microfluidics or lab-on-a-chip systems, and permits remote control over the particle motion, speed and trajectory, by using relatively low intense magnetic fields.
Information criteria for quantifying loss of reversibility in parallelized KMC
NASA Astrophysics Data System (ADS)
Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc
2017-01-01
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
Charged-particle motion in multidimensional magnetic-field turbulence
NASA Technical Reports Server (NTRS)
Giacalone, J.; Jokipii, J. R.
1994-01-01
We present a new analysis of the fundamental physics of charged-particle motion in a turbulent magnetic field using a numerical simulation. The magnetic field fluctuations are taken to be static and to have a power spectrum which is Kolmogorov. The charged particles are treated as test particles. It is shown that when the field turbulence is independent of one coordinate (i.e., k lies in a plane), the motion of these particles across the magnetic field is essentially zero, as required by theory. Consequently, the only motion across the average magnetic field direction that is allowed is that due to field-line random walk. On the other hand, when a fully three-dimensional realization of the turbulence is considered, the particles readily cross the field. Transport coefficients both along and across the ambient magnetic field are computed. This scheme provides a direct computation of the Fokker-Planck coefficients based on the motions of individual particles, and allows for comparison with analytic theory.
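The transport coefficients referred to here are conventionally obtained from the growth of the mean-square displacement of the test particles. A minimal sketch of that diagnostic (assuming the standard running definition κ(t) = ⟨Δx²⟩/2t; this is the textbook estimator, not the authors' code) is:

```python
import numpy as np

def running_kappa(positions, t):
    """Running diffusion coefficient kappa(t) = <dx^2> / (2 t).
    positions : array (n_particles, n_times) of one coordinate along
    each trajectory; t : array (n_times,) of sample times, t[0] = 0."""
    # Mean-square displacement relative to each particle's start
    dx2 = np.mean((positions - positions[:, :1]) ** 2, axis=0)
    # Skip t = 0 to avoid division by zero
    return dx2[1:] / (2.0 * t[1:])
```

Applied to a large ensemble, κ(t) plateaus at the diffusion coefficient once the motion becomes diffusive; the same estimator applied parallel and perpendicular to the mean field gives the two coefficients discussed in the abstract.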
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impact of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field, which is affected by the feedback between the physics and the dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical from microphysical impacts is difficult. This presentation describes a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus is on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach include simulations concerning the hypothesized invigoration of deep convection in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
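Schematically, piggybacking amounts to evaluating two schemes on the identical resolved state while letting only one of them feed back on the dynamics. A toy sketch (the scalar "dynamics" and all names are illustrative placeholders, not the actual model):

```python
def advance_dynamics(state, tendency, dt):
    # Toy stand-in for the flow solver: explicit Euler on one scalar.
    return state + dt * tendency

def piggyback_step(state, drive_scheme, piggy_scheme, dt):
    """One piggybacking step: both schemes see the identical resolved
    state, but only the driving scheme feeds back on the dynamics, so
    scheme-to-scheme differences are cleanly separated from differences
    in the realized flow."""
    t_drive = drive_scheme(state)   # drives the simulation
    t_piggy = piggy_scheme(state)   # piggybacks: diagnosed, never applied
    return advance_dynamics(state, t_drive, dt), t_drive, t_piggy
```

Comparing the recorded driving and piggybacking tendencies along the single realized flow isolates the parameterization difference from the flow-realization variability that plagues parallel-simulation comparisons.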
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Nakajima, Teruyuki; Khain, Alexander P.; Saito, Kazuo; Takemura, Toshihiko; Okamoto, Hajime; Nishizawa, Tomoaki; Tao, Wei-Kuo
2012-01-01
Numerical weather prediction (NWP) simulations using the Japan Meteorological Agency Nonhydrostatic Model (JMA-NHM) are conducted for three precipitation events observed by shipborne or spaceborne W-band cloud radars. Spectral bin and single-moment bulk cloud microphysics schemes are employed separately for an intercomparative study. A radar product simulator that is compatible with both microphysics schemes is developed to enable a direct comparison between simulation and observation with respect to the equivalent radar reflectivity factor Ze, Doppler velocity (DV), and path-integrated attenuation (PIA). In general, the bin model simulation shows better agreement with the observed data than the bulk model simulation. The correction of the terminal fall velocities of snowflakes using those of hail further improves the result of the bin model simulation. The results indicate that there are substantial uncertainties in the mass-size and size-terminal fall velocity relations of snowflakes, or in the calculation of the terminal fall velocity of snow aloft. For the bulk microphysics, an overestimation of Ze is observed as a result of a significant predominance of snow over cloud ice due to substantial deposition growth directly to snow. The DV comparison shows that a correction to the fall velocity of hydrometeors that accounts for changes in particle size should be introduced even in single-moment bulk cloud microphysics.
NASA Astrophysics Data System (ADS)
Dalichaouch, Thamine; Davidson, Asher; Xu, Xinlu; Yu, Peicheng; Tsung, Frank; Mori, Warren; Li, Fei; Zhang, Chaojie; Lu, Wei; Vieira, Jorge; Fonseca, Ricardo
2016-10-01
In the past few decades, there has been much progress in theory, simulation, and experiment towards using laser wakefield acceleration (LWFA) as the basis for designing and building compact X-ray free-electron lasers (XFELs) as well as a next-generation linear collider. Recently, ionization injection and density downramp injection have been proposed and demonstrated as controllable injection schemes for creating higher-quality and ultra-bright relativistic electron beams using LWFA. However, full-3D simulations of plasma-based accelerators are computationally intensive, sometimes taking 100 million core-hours on today's computers. A more efficient quasi-3D algorithm was developed and implemented in OSIRIS using a particle-in-cell description with a charge-conserving current deposition scheme in r-z and a gridless Fourier expansion in ϕ. Due to the azimuthal symmetry in LWFA, quasi-3D simulations are computationally more efficient than 3D Cartesian simulations, since only the first few harmonics in ϕ are needed to capture the 3D physics of LWFA. Using the quasi-3D approach, we present preliminary results of ionization- and downramp-triggered injection and compare the results against 3D LWFA simulations. This work was supported by DOE and NSF.
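The azimuthal Fourier truncation can be illustrated with a toy NumPy calculation (not an OSIRIS computation): a field whose azimuthal content lives entirely in the m = 0 and m = 1 harmonics, roughly the situation for a linearly polarized laser driver, is recovered exactly after truncation to those two modes.

```python
import numpy as np

nphi = 64
phi = np.linspace(0, 2 * np.pi, nphi, endpoint=False)
# Toy field with azimuthal content only in modes m = 0 and m = 1
field = 2.0 + np.cos(phi)

modes = np.fft.rfft(field) / nphi      # complex amplitude per harmonic m
truncated = np.zeros_like(modes)
truncated[:2] = modes[:2]              # keep only m = 0 and m = 1
recon = np.fft.irfft(truncated * nphi, n=nphi)
```

Because the higher harmonics carry no power here, `recon` matches `field` to machine precision; in quasi-3D PIC this is why a handful of modes suffices while the r-z grid carries the rest of the physics.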
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, only a very limited number of test cases had been run with it. This study was therefore also intended to increase the number of cases simulated in order to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacon, Luis
2015-11-01
We discuss a new, conservative, fully implicit 2D3V Vlasov-Darwin particle-in-cell algorithm in curvilinear geometry for non-radiative, electromagnetic kinetic plasma simulations. Unlike standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. Here, we extend these algorithms to curvilinear geometry; the algorithm retains its exact conservation properties on curvilinear grids. The nonlinear iteration is effectively accelerated with a fluid preconditioner for weakly to modestly magnetized plasmas, which allows efficient use of large time steps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. In this presentation, we introduce the main algorithmic components of the approach and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D (slow shock) and 2D (island coalescence).
Particle-In-Cell (PIC) simulation of long-anode magnetron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verma, Rajendra Kumar, E-mail: rajendra.verma89@gmail.com; Maurya, Shivendra; Singh, Vindhyavasini Prasad
The long anode magnetron (LAM) is a design scheme adopted to attain greater thermal stability and higher power levels than conventional magnetrons. A LAM for a 5 MW power level at 2.858 GHz was therefore "virtually prototyped" using admittance matching field theory (AMT), and then a PIC study of the beam-wave interaction was conducted using CST Particle Studio (CST-PS), as explained in this paper. The results thus obtained were a hot resonant frequency of 2.834 GHz and an output power of 5 MW at a beam voltage of 58 kV and an applied magnetic field of 2200 Gauss, with an overall efficiency of 45%. The simulated parameter values were in good agreement with those of the E2V LAM tube (M5028), which validates the feasibility of the design approach.
Multi-phase SPH modelling of violent hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.
2015-11-01
This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
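Two ingredients discussed above, separate per-phase particle lists and the acoustic time-step limit set by the stiffest phase, can be sketched in a few lines. This is a CPU-side NumPy illustration with an assumed Courant factor, not the DualSPHysics GPU code:

```python
import numpy as np

def phase_lists(phase_ids):
    """Build one particle-index list per phase; keeping separate
    per-phase lists is the layout the runtime results above favor
    (plain NumPy sketch, not the GPU cell/neighbour-list machinery)."""
    return {p: np.flatnonzero(phase_ids == p) for p in np.unique(phase_ids)}

def cfl_dt(h, sound_speeds, courant=0.3):
    """Acoustic CFL time step for smoothing length h: the phase with
    the largest speed of sound governs the global explicit step
    (Courant factor is an assumed typical value)."""
    return courant * h / max(sound_speeds)
```

The second function makes the cost of multi-phase weakly compressible SPH explicit: a dense phase with a sound speed an order of magnitude higher than the light phase shrinks the time step for every particle in the simulation.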
Scalable Metropolis Monte Carlo for simulation of hard shapes
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.
2016-07-01
We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
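The core Metropolis move for hard particles is simple to state: propose a random displacement and accept it iff it creates no overlap. A minimal serial hard-disk sketch follows (O(N²) overlap checks for brevity; HPMC's parallel checkerboard proposals and BVH acceleration are far more elaborate):

```python
import numpy as np

def hpmc_sweep(pos, radius, box, max_step, rng):
    """One Metropolis sweep for hard disks in a periodic square box:
    each trial displacement is accepted iff no overlap is created.
    pos : (N, 2) positions (modified in place); box : box edge length."""
    n = len(pos)
    for i in rng.permutation(n):
        trial = (pos[i] + rng.uniform(-max_step, max_step, 2)) % box
        d = pos - trial
        d -= box * np.round(d / box)        # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        r2[i] = np.inf                      # ignore self-distance
        if np.all(r2 > (2 * radius) ** 2):  # no overlap: accept the move
            pos[i] = trial
    return pos
```

Because the acceptance rule only ever preserves the no-overlap constraint, an initially valid configuration remains valid for arbitrarily many sweeps, which is the invariant the parallel checkerboard scheme must also maintain.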
A point particle model of lightly bound skyrmions
NASA Astrophysics Data System (ADS)
Gillard, Mike; Harland, Derek; Kirk, Elliot; Maybee, Ben; Speight, Martin
2017-04-01
A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1 ≤ B ≤ 8 obtained by numerical simulation of the full field theory. For 9 ≤ B ≤ 23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein-Rubinstein constraints, is devised.
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Simulation of bipolar charge transport in nanocomposite polymer films
NASA Astrophysics Data System (ADS)
Lean, Meng H.; Chu, Wei-Ping L.
2015-03-01
This paper describes 3D particle-in-cell simulation of bipolar charge injection and transport through a nanocomposite film composed of ferroelectric ceramic nanofillers in an amorphous polymer matrix. The classical electrical double layer (EDL) model for a monopolar core is extended (eEDL) to represent the nanofiller by replacing it with a dipolar core. Charge injection at the electrodes assumes metal-polymer Schottky emission at low to moderate fields and Fowler-Nordheim tunneling at high fields. Injected particles migrate via field-dependent Poole-Frenkel mobility and recombine with Monte Carlo selection. The simulation algorithm uses a boundary integral equation method for the solution of the Poisson equation coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit, assuring robust and rapid convergence. The model is capable of simulating a wide dynamic range spanning leakage current to pre-breakdown. Simulation results for BaTiO3 nanofillers in an amorphous polymer matrix indicate that charge transport behavior depends on nanoparticle polarization, with the anti-parallel orientation showing the highest leakage conduction and therefore the lowest level of charge trapping in the interaction zone. Charge recombination is also highest, at the cost of reduced leakage conduction charge. The eEDL model predicts the meandering pathways of charged-particle trajectories.
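As a numerical aside (not the authors' code), the Schottky injection mechanism named above can be sketched with the standard field-lowered thermionic emission formula; the barrier height, field, temperature, and relative permittivity in the example are illustrative assumptions.

```python
import math

Q = 1.602176634e-19       # elementary charge [C]
KB = 1.380649e-23         # Boltzmann constant [J/K]
EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
A_RICHARDSON = 1.20173e6  # Richardson constant [A m^-2 K^-2]

def schottky_current_density(phi_b_eV, e_field, temp, eps_r):
    """J = A T^2 exp(-(phi_B - dphi)/kT); the image-force barrier lowering
    is dphi = sqrt(q E / (4 pi eps)), expressed here in eV."""
    dphi_eV = math.sqrt(Q * e_field / (4.0 * math.pi * eps_r * EPS0))
    barrier_J = (phi_b_eV - dphi_eV) * Q
    return A_RICHARDSON * temp ** 2 * math.exp(-barrier_J / (KB * temp))
```

The exponential sensitivity to the barrier-lowering term is what makes injection switch over to Fowler-Nordheim tunneling at high fields.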
NASA Technical Reports Server (NTRS)
Mann, G. W.; Carslaw, K. S.; Reddington, C. L.; Pringle, K. J.; Schulz, M.; Asmi, A.; Spracklen, D. V.; Ridley, D. A.; Woodhouse, M. T.; Lee, L. A.;
2014-01-01
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the multi-model mean agrees quite well with the annual mean observations at many sites, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe.
Overall, the multimodel-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
Testing hydrodynamics schemes in galaxy disc simulations
NASA Astrophysics Data System (ADS)
Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.
2016-08-01
We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests that differences can also arise that are not intrinsic to the particular method but are rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
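The Jeans-length refinement criterion discussed above can be illustrated with a short, hedged sketch; the sound speed, density, and the cell-count threshold `n_min` are illustrative assumptions, not the paper's settings.

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def jeans_length(c_s, rho):
    """Jeans length for an isothermal gas: lambda_J = c_s * sqrt(pi / (G rho))."""
    return c_s * math.sqrt(math.pi / (G * rho))

def needs_refinement(c_s, rho, dx, n_min=4):
    """Truelove-style criterion: refine when fewer than n_min grid cells
    span the local Jeans length (n_min=4 is a common floor; matching the
    Lagrangian codes above required resolving it with more cells)."""
    return jeans_length(c_s, rho) / dx < n_min
```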
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Guozhang; Xiang, Nong; Huang, Yueheng
2016-01-15
The propagation and mode conversion of lower hybrid waves in an inhomogeneous plasma are investigated by using the nonlinear δf algorithm in a two-dimensional particle-in-cell simulation code based on the gyrokinetic electron and fully kinetic ion (GeFi) scheme [Lin et al., Plasma Phys. Controlled Fusion 47, 657 (2005)]. The characteristics of the simulated waves, such as wavelength, frequency, phase, and group velocities, agree well with the linear theoretical analysis. It is shown that a significant reflection component emerges in the conversion process between the slow mode and the fast mode when the scale length of the density variation is comparable to the local wavelength. The dependences of the reflection coefficient on the scale length of the density variation are compared with the results based on the linear full wave model for cold plasmas. It is indicated that the mode conversion for the waves with a frequency of 2.45 GHz (ω ∼ 3ω_LH, where ω_LH represents the lower hybrid resonance) and within tokamak-relevant amplitudes can be well described in the linear scheme. As the frequency decreases, the modification due to the nonlinear term becomes important. For the low-frequency waves (ω ∼ 1.3ω_LH), the generations of the high harmonic modes and sidebands through nonlinear mode-mode coupling provide new power channels and thus could reduce the reflection significantly.
COMPARISON OF NUMERICAL SCHEMES FOR SOLVING A SPHERICAL PARTICLE DIFFUSION EQUATION
A new robust iterative numerical scheme was developed for a nonlinear diffusive model that described sorption dynamics in spherical particle suspensions. The numerical scheme was applied to finite difference and finite element models that showed rapid convergence and stability...
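As a hedged illustration of the kind of spherical-particle diffusion problem being solved (this is a plain explicit finite-difference sketch, not the iterative scheme developed in the work), the radial diffusion equation can be integrated using the substitution u = r*c, which reduces it to a 1D heat equation; all parameter values are illustrative.

```python
import numpy as np

def sphere_diffusion(D, R, c_surf, nr=51, t_end=1.0):
    """Explicit FD solution of dc/dt = D (c_rr + (2/r) c_r) in a sphere of
    radius R, via u = r*c so that du/dt = D u_rr. Symmetry at r=0 gives
    u(0)=0; the surface holds a fixed concentration c_surf."""
    r = np.linspace(0.0, R, nr)
    dr = r[1] - r[0]
    dt = 0.4 * dr**2 / D            # explicit stability: D*dt/dr^2 <= 1/2
    u = np.zeros(nr)                # initially c = 0 inside the particle
    u[-1] = R * c_surf              # Dirichlet surface condition
    t = 0.0
    while t < t_end:
        u[1:-1] += D * dt / dr**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        t += dt
    c = np.empty(nr)
    c[1:] = u[1:] / r[1:]
    c[0] = c[1]                     # regularity at the centre
    return r, c
```

The severe step-size restriction of this explicit scheme is exactly the kind of cost that motivates the implicit/iterative schemes compared in the work.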
SPH simulation of free surface flow over a sharp-crested weir
NASA Astrophysics Data System (ADS)
Ferrari, Angela
2010-03-01
In this paper the numerical simulation of a free surface flow over a sharp-crested weir is presented. Since in this case the usual shallow water assumptions are not satisfied, we propose to solve the problem using the full weakly compressible Navier-Stokes equations with the Tait equation of state for water. The numerical method used consists of the new meshless Smooth Particle Hydrodynamics (SPH) formulation proposed by Ferrari et al. (2009) [8], which accurately tracks the free surface profile and provides monotone pressure fields. Thus, the unsteady evolution of the complex moving material interface (free surface) can be properly resolved. The simulations, involving about half a million fluid particles, have been run in parallel on two of the most powerful High Performance Computing (HPC) facilities in Europe. The validation of the results has been carried out by analysing the pressure field and comparing the free surface profiles obtained with the SPH scheme with experimental measurements available in the literature [18]. A very good quantitative agreement has been obtained.
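The Tait equation of state mentioned above has a standard weakly-compressible-SPH form; a minimal sketch follows (the reference density and sound speed are illustrative; in WCSPH practice c0 is usually an artificial sound speed chosen around ten times the maximum flow speed rather than the physical value used here).

```python
def tait_pressure(rho, rho0=1000.0, c0=1480.0, gamma=7.0):
    """Tait equation of state: p = B ((rho/rho0)^gamma - 1),
    with stiffness B = rho0 * c0^2 / gamma."""
    b = rho0 * c0 ** 2 / gamma
    return b * ((rho / rho0) ** gamma - 1.0)
```

With gamma = 7, a density fluctuation of only ~1% already produces a large pressure response, which is what keeps the flow nearly incompressible.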
Gyrokinetic simulation of ITG modes in a three-mode coupling model
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Lee, W. W.
2004-11-01
A three-mode coupling model of ITG modes with adiabatic electrons is studied both analytically and numerically in 2-dimensional slab geometry using the gyrokinetic formalism. It can be shown analytically that the (quasilinear) saturation amplitude of the waves in the system should be enhanced by the inclusion of the parallel velocity nonlinearity in the governing gyrokinetic equation. The effect of this (frequently neglected) nonlinearity on the steady-state transport properties of the plasma is studied numerically using standard gyrokinetic particle simulation techniques. The balance [1] between various steady-state transport properties of the model (particle and heat flux, entropy production, and collisional dissipation) is examined. Effects resulting from the inclusion of nonadiabatic electrons in the model are also considered numerically, making use of the gyrokinetic split-weight scheme [2] in the simulations. [1] W. W. Lee and W. M. Tang, Phys. Fluids 31, 612 (1988). [2] I. Manuilskiy and W. W. Lee, Phys. Plasmas 7, 1381 (2000).
NASA Astrophysics Data System (ADS)
Luo, Liping; Xue, Ming; Zhu, Kefeng; Zhou, Bowen
2017-07-01
In the late afternoon of 19 March 2014, a severe hailstorm swept through eastern central Zhejiang province, China. The storm produced golf ball-sized hail, strong winds, and lightning, lasting approximately 1 h over the coastal city of Taizhou. The Advanced Regional Prediction System is used to simulate the hailstorm using different configurations of the Milbrandt-Yau microphysics scheme that predict one, two, or three moments of the hydrometeor particle size distribution. Simulated fields, including accumulated precipitation and maximum estimated hail size (MESH), are verified against rain gauge observations and radar-derived MESH, respectively. For the case of the 19 March 2014 storms, the general evolution is better predicted with multimoment microphysics schemes than with the one-moment scheme; the three-moment scheme produces the best forecast. Predictions from the three-moment scheme qualitatively agree with observations in terms of size and amount of hail reaching the surface. The life cycle of the hailstorm is analyzed, using the most skillful, three-moment forecast. Based upon the tendency of surface hail mass flux, the hailstorm life cycle can be divided into three stages: developing, mature, and dissipating. Microphysical budget analyses are used to examine microphysical processes and characteristics during these three stages. The vertical structures within the storm and their link to environmental shear conditions are discussed; together with the rapid fall of hailstones, these structures and conditions appear to dictate this pulse storm's short life span. Finally, a conceptual model for the life cycle of pulse hailstorms is proposed.
Lattice-Assisted Spectroscopy: A Generalized Scanning Tunneling Microscope for Ultracold Atoms.
Kantian, A; Schollwöck, U; Giamarchi, T
2015-10-16
We propose a scheme to measure the frequency-resolved local particle and hole spectra of any optical lattice-confined system of correlated ultracold atoms that offers single-site addressing and imaging, which is now an experimental reality. Combining perturbation theory and time-dependent density matrix renormalization group simulations, we quantitatively test and validate this approach of lattice-assisted spectroscopy on several one-dimensional example systems, such as the superfluid and Mott insulator, with and without a parabolic trap, and finally on edge states of the bosonic Su-Schrieffer-Heeger model. We highlight extensions of our basic scheme to obtain an even wider variety of interesting and important frequency resolved spectra.
Stochastic Rotation Dynamics simulations of wetting multi-phase flows
NASA Astrophysics Data System (ADS)
Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin
2016-06-01
Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle-based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to the virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to, e.g., immiscible two-phase flow with viscosity contrast, we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact with the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.
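For readers unfamiliar with SRD, the basic single-color collision step can be sketched as follows; this is a generic textbook 2D version (not the SRDmc/SRD+mc implementation above): particles are binned into unit cells, and within each cell the velocities relative to the cell mean are rotated by a fixed angle of random sign.

```python
import numpy as np

def srd_collision_step(pos, vel, n_cells, alpha, rng):
    """One SRD collision step in 2D with periodic cells of unit size.
    Rotating relative velocities conserves momentum and kinetic energy
    cell by cell."""
    cells = np.floor(pos).astype(int) % n_cells
    keys = cells[:, 0] * n_cells + cells[:, 1]       # flatten cell index
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        if idx.size < 2:
            continue
        vcm = vel[idx].mean(axis=0)                  # cell mean velocity
        a = alpha if rng.random() < 0.5 else -alpha  # random rotation sense
        c, s = np.cos(a), np.sin(a)
        dv = vel[idx] - vcm
        vel[idx, 0] = vcm[0] + c * dv[:, 0] - s * dv[:, 1]
        vel[idx, 1] = vcm[1] + s * dv[:, 0] + c * dv[:, 1]
    return vel
```

The multi-color extension described in the abstract additionally carries a color label per particle (including the virtual wall particles) through this step.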
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and the diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic estimation based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
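As a hedged sketch of the Kalman-filtering idea behind the noise-reduction step (the paper's hybrid filter is more elaborate), the textbook scalar filter for a slowly varying signal in white measurement noise looks like this; the noise variances are illustrative assumptions.

```python
def kalman_denoise(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state model (process variance q),
    white measurement noise (variance r). Returns the filtered series."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                 # predict: state uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return out
```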
Model-independent particle accelerator tuning
Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry
2013-10-21
We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) It has the ability to handle unknown, time-varying systems, 2) It gives known bounds on parameter update rates, 3) We give an analytic proof of its convergence and its stability, and 4) It has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme's ability to simultaneously adaptively adjust the set points of twenty-two quadrupole magnets and two RF buncher cavities in the Los Alamos Neutron Science Center Linear Accelerator's transport region, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
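A minimal sketch of dither-based (extremum-seeking) tuning in the spirit of the scheme described above: each parameter is dithered at its own frequency, and modulating the dither phase by the measured cost makes the average motion descend the cost without any model of the system. The update law, gains, and frequencies here are illustrative assumptions, not the authors' exact algorithm.

```python
import math

def extremum_seek(cost, theta, n_steps=4000, dt=1e-3, amp=0.8, gain=5.0):
    """Simultaneously tune all entries of theta to minimize cost(theta),
    using only scalar cost readings (no gradients, no model)."""
    omegas = [43.0 + 20.0 * i for i in range(len(theta))]  # distinct dithers
    t = 0.0
    for _ in range(n_steps):
        c = cost(theta)
        for i, w in enumerate(omegas):
            theta[i] += dt * amp * math.sqrt(w) * math.cos(w * t + gain * c)
        t += dt
    return theta
```

On average this drives each parameter with an effective gradient-descent term proportional to amp^2 * gain / 2, while the parameters retain a small residual dither.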
Coulomb interactions in charged fluids.
Vernizzi, Graziano; Guerrero-García, Guillermo Iván; de la Cruz, Monica Olvera
2011-07-01
The use of Ewald summation schemes for calculating long-range Coulomb interactions, originally applied to ionic crystalline solids, is at present a very common practice in molecular simulations of charged fluids. Such a choice imposes an artificial periodicity which is generally absent in the liquid state. In this paper we propose a simple analytical O(N²) method which is based on Gauss's law for computing exactly the Coulomb interaction between charged particles in a simulation box, when it is averaged over all possible orientations of a surrounding infinite lattice. This method mitigates the periodicity typical of crystalline systems and is suitable for numerical studies of ionic liquids, charged molecular fluids, and colloidal systems with Monte Carlo and molecular dynamics simulations.
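For contrast with lattice-summation methods, the plain O(N²) pairwise loop that such pairwise schemes share can be sketched as follows (minimum-image convention, reduced units with 1/(4*pi*eps0) = 1); this shows only the loop structure, not the paper's orientation-averaged lattice interaction.

```python
import math

def coulomb_energy(positions, charges, box_length):
    """Bare O(N^2) minimum-image Coulomb energy in a cubic periodic box,
    in reduced units. positions: list of [x, y, z]; charges: list of q_i."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for k in range(3):
                d = positions[i][k] - positions[j][k]
                d -= box_length * round(d / box_length)  # minimum image
                d2 += d * d
            energy += charges[i] * charges[j] / math.sqrt(d2)
    return energy
```

The method proposed in the abstract replaces the bare 1/r in the inner loop with an analytically orientation-averaged lattice interaction, keeping the same O(N²) structure.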
Particle-in-cell simulations on graphic processing units
NASA Astrophysics Data System (ADS)
Ren, C.; Zhou, X.; Li, J.; Huang, M. C.; Zhao, Y.
2014-10-01
We will show our recent progress in using GPUs to accelerate the PIC code OSIRIS [Fonseca et al., LNCS 2331, 342 (2002)]. The OSIRIS parallel structure is retained and the computation-intensive kernels are shipped to the GPUs. Algorithms for the kernels are adapted for the GPU, including high-order charge-conserving current deposition schemes with little branching and parallel particle sorting [Kong et al., JCP 230, 1676 (2011)]. These algorithms make efficient use of the GPU shared memory. This work was supported by U.S. Department of Energy under Grant No. DE-FC02-04ER54789 and by NSF under Grant No. PHY-1314734.
Build Your Own Particle Smasher: The Royal Society Partnership Grants Scheme
ERIC Educational Resources Information Center
Education in Science, 2012
2012-01-01
This article features the project, "Build Your Own Particle Smasher" and shares how to build a particle smasher project. A-level and AS-level students from Trinity Catholic School have built their own particle smashers, in collaboration with Nottingham Trent University, as part of The Royal Society's Partnership Grants Scheme. The…
Size distributions of secondary and primary aerosols in Asia: A 3-D modeling
NASA Astrophysics Data System (ADS)
Yu, F.; Luo, G.; Wang, Z.
2009-12-01
Asian aerosols have received increasing attention because of their potential health and climate effects and the rapid increase of Asian emissions associated with accelerating economic expansion. Aerosol particles appear in the atmosphere due to either in-situ nucleation (i.e., secondary particles) or direct emissions (i.e., primary particles), and their environmental impacts depend strongly on their concentrations, sizes, compositions, and mixing states. A size-resolved (sectional) particle microphysics model with a number of computationally efficient schemes has been incorporated into a global chemistry transport model (GEOS-Chem) to simulate the number size distributions of secondary and primary particles in the troposphere (Yu and Luo, Atmos. Chem. Phys. Discuss., 9, 10597-10645, 2009). The growth of nucleated particles through the condensation of sulfuric acid vapor and equilibrium uptake of nitrate, ammonium, and secondary organic aerosol is explicitly simulated, along with the coating of primary particles (dust, black carbon, organic carbon, and sea salt) by volatile components via condensation and coagulation with secondary particles. Here we look into the spatiotemporal variations of the size distributions of secondary and primary aerosols in Asia. The annual mean number concentration of the accumulation mode particles (dry diameter > ~100 nm) in the lower troposphere over Asia (especially China) is very high and is dominated (~70-90%) by carbonaceous primary particles (with coated condensable species). Coagulation and condensation turn the primary particles into mixed particles and on average increase the dry sizes of primary particles by a factor of ~2-2.5. Despite the high condensation sink, the sulfuric acid vapor concentration in many parts of the Asian lower troposphere is very high (annual mean values above 10^7 cm^-3) and significant new particle formation still occurs.
Secondary particles generally dominate the particles smaller than 100 nm, and the equilibrium uptake of nitrate, ammonium, and secondary organic aerosol contributes significantly to the growth of these particles. The vertical profiles of particle number size distributions at representative locations show significant spatial variations (both horizontally and vertically). Our simulations also indicate substantial seasonal variations of particle size distributions.
Blind Quantum Signature with Controlled Four-Particle Cluster States
NASA Astrophysics Data System (ADS)
Li, Wei; Shi, Jinjing; Shi, Ronghua; Guo, Ying
2017-08-01
A novel blind quantum signature scheme based on cluster states is introduced. Cluster states are a type of multi-qubit entangled state and are more immune to decoherence than other entangled states. The controlled four-particle cluster states are created by applying a controlled-Z gate to the particles of four-particle cluster states. The presented scheme utilizes the above entangled states and simplifies the measurement basis to generate and verify the signature. Security analysis demonstrates that the scheme is unconditionally secure. It can be employed in E-commerce systems in the quantum scenario.
Stochastic dynamics of virus capsid formation: direct versus hierarchical self-assembly
2012-01-01
Background In order to replicate within their cellular host, many viruses have developed self-assembly strategies for their capsids which are sufficiently robust as to be reconstituted in vitro. Mathematical models for virus self-assembly usually assume that the bonds leading to cluster formation have constant reactivity over the time course of assembly (direct assembly). In some cases, however, binding sites between the capsomers have been reported to be activated during the self-assembly process (hierarchical assembly). Results In order to study possible advantages of such hierarchical schemes for icosahedral virus capsid assembly, we use Brownian dynamics simulations of a patchy particle model that allows us to switch binding sites on and off during assembly. For T1 viruses, we implement a hierarchical assembly scheme where inter-capsomer bonds become active only if a complete pentamer has been assembled. We find direct assembly to be favorable for reversible bonds allowing for repeated structural reorganizations, while hierarchical assembly is favorable for strong bonds with small dissociation rate, as this situation is less prone to kinetic trapping. However, at the same time it is more vulnerable to monomer starvation during the final phase. Increasing the number of initial monomers does have only a weak effect on these general features. The differences between the two assembly schemes become more pronounced for more complex virus geometries, as shown here for T3 viruses, which assemble through homogeneous pentamers and heterogeneous hexamers in the hierarchical scheme. In order to complement the simulations for this more complicated case, we introduce a master equation approach that agrees well with the simulation results. Conclusions Our analysis shows for which molecular parameters hierarchical assembly schemes can outperform direct ones and suggests that viruses with high bond stability might prefer hierarchical assembly schemes. 
These insights increase our physical understanding of an essential biological process, with many interesting potential applications in medicine and materials science. PMID:23244740
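The master-equation viewpoint mentioned in the abstract can be illustrated with a deliberately minimal sequential-growth model (monomer addition and removal only; the rates, cluster sizes, and time step here are illustrative, and the paper's model for T1/T3 geometries is far richer).

```python
def assemble(n_max=12, m0=1000.0, k_on=1e-3, k_off=0.1, dt=0.01, steps=20000):
    """Euler integration of a sequential-growth master equation.
    n[s] is the number of clusters of size s; each growth step s -> s+1
    also consumes one monomer, so total mass sum_s s*n[s] is conserved."""
    n = [0.0] * (n_max + 1)
    n[1] = m0
    for _ in range(steps):
        flux = [0.0] * (n_max + 1)
        for s in range(1, n_max):
            f = k_on * n[1] * n[s] - k_off * n[s + 1]  # net rate s -> s+1
            flux[s] -= f
            flux[s + 1] += f
            flux[1] -= f                               # monomer bookkeeping
        for s in range(1, n_max + 1):
            n[s] = max(0.0, n[s] + dt * flux[s])
    return n
```

A hierarchical scheme of the kind studied above would make k_on for a given bond depend on whether the prerequisite substructure (e.g. a complete pentamer) has formed.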
NASA Astrophysics Data System (ADS)
Sun, Dan; Garmory, Andrew; Page, Gary J.
2017-02-01
For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
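The moment-inversion step at the heart of two-node quadrature methods can be illustrated in 1D: given the first four velocity moments, the two weights and abscissas follow in closed form. This is a hedged sketch of the 1D analogue only; the paper's 13-moment, three-dimensional inversion with cross-moments is considerably more involved.

```python
import math

def two_node_quadrature(m0, m1, m2, m3):
    """Return two (weight, abscissa) pairs exactly matching moments m0..m3.
    Assumes positive variance (c2 > 0)."""
    mu = m1 / m0                              # mean
    c2 = m2 / m0 - mu ** 2                    # variance
    c3 = m3 / m0 - 3.0 * mu * c2 - mu ** 3    # third central moment
    sigma = math.sqrt(c2)
    q = c3 / sigma ** 3                       # skewness
    xi1 = 0.5 * (q - math.sqrt(q * q + 4.0))  # standardized abscissas;
    xi2 = 0.5 * (q + math.sqrt(q * q + 4.0))  # note xi1 * xi2 = -1
    w1 = m0 * xi2 / (xi2 - xi1)               # weights from matching m0, m1
    w2 = -m0 * xi1 / (xi2 - xi1)
    return (w1, mu + sigma * xi1), (w2, mu + sigma * xi2)
```

Each node then represents one locally distinct particle velocity, which is what lets two crossing particle streams coexist at a point in an Eulerian description.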
Quantum Tasks with Non-maximally Quantum Channels via Positive Operator-Valued Measurement
NASA Astrophysics Data System (ADS)
Peng, Jia-Yin; Luo, Ming-Xing; Mo, Zhi-Wen
2013-01-01
By using a proper positive operator-valued measure (POVM), we present two new schemes for probabilistic transmission with non-maximally entangled four-particle cluster states. In the first scheme, we demonstrate that two non-maximally entangled four-particle cluster states can be used to probabilistically share an unknown three-particle GHZ-type state at either distant agent's location. In the second protocol, we demonstrate that a non-maximally entangled four-particle cluster state can be used to teleport an arbitrary unknown multi-particle state in a probabilistic manner with appropriate unitary operations and POVMs. Moreover, the total success probabilities of these two schemes are also worked out.
Counterfactual entanglement distribution without transmitting any particles.
Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou
2014-04-21
To date, all schemes for entanglement distribution have needed to send entangled particles or a separable mediating particle among distant participants. Here, we propose a counterfactual protocol for entanglement distribution, against the traditional forms: two distant particles can be entangled with no physical particle traveling between the two remote participants. We also present an alternative scheme for realizing counterfactual photonic entangled-state distribution using a Michelson-type interferometer and a self-assembled GaAs/InAs quantum dot embedded in an optical microcavity. The numerical analysis of the effect of experimental imperfections on the performance of the scheme shows that the entanglement distribution may be implementable with high fidelity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pae, Ki Hong; Kim, Chul Min, E-mail: chulmin@gist.ac.kr; Advanced Photonics Research Institute, Gwangju Institute of Science and Technology, Gwangju 61005
In laser-driven proton acceleration, generation of quasi-monoenergetic proton beams has been considered a crucial feature of the radiation pressure acceleration (RPA) scheme, but the required difficult physical conditions have hampered its experimental realization. As a method to generate quasi-monoenergetic protons under experimentally viable conditions, we investigated using double-species targets of controlled composition ratio in order to make protons bunched in the phase space in the RPA scheme. From a modified optimum condition and three-dimensional particle-in-cell simulations, we showed by varying the ion composition ratio of proton and carbon that quasi-monoenergetic protons could be generated from ultrathin plane targets irradiated with a circularly polarized Gaussian laser pulse. The proposed scheme should facilitate the experimental realization of ultrashort quasi-monoenergetic proton beams for unique applications in high field science.
Multistage Coupling of Laser-Wakefield Accelerators with Curved Plasma Channels.
Luo, J; Chen, M; Wu, W Y; Weng, S M; Sheng, Z M; Schroeder, C B; Jaroszynski, D A; Esarey, E; Leemans, W P; Mori, W B; Zhang, J
2018-04-13
Multistage coupling of laser-wakefield accelerators is essential to overcome laser energy depletion for high-energy applications such as TeV-level electron-positron colliders. Current staging schemes feed subsequent laser pulses into stages using plasma mirrors while controlling electron beam focusing with plasma lenses. Here a more compact and efficient scheme is proposed to realize the simultaneous coupling of the electron beam and the laser pulse into a second stage. A partly curved channel, integrating a straight acceleration stage with a curved transition segment, is used to guide a fresh laser pulse into a subsequent straight channel, while the electrons continue straight. This scheme benefits from a shorter coupling distance and continuous guiding of the electrons in plasma while suppressing transverse beam dispersion. Particle-in-cell simulations demonstrate that the electron beam from a previous stage can be efficiently injected into a subsequent stage for further acceleration while maintaining high capture efficiency, stability, and beam quality.
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
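The core tracking operation this abstract describes, finding where a particle's straight flight path crosses a quadric surface, reduces to solving a quadratic in the path length. A minimal sketch of that general technique (not PENGEOM's actual Fortran; the function name and tolerances are illustrative):

```python
import numpy as np

def quadric_distance(r0, d, A, b, c):
    """Distance s > 0 from point r0 along unit direction d to the quadric
    F(r) = r.A.r + b.r + c = 0; returns None if the ray misses the surface."""
    # Substituting r = r0 + s*d into F gives a quadratic qa*s^2 + qb*s + qc = 0.
    qa = d @ A @ d
    qb = 2.0 * (r0 @ A @ d) + b @ d
    qc = r0 @ A @ r0 + b @ r0 + c
    if abs(qa) < 1e-14:                       # surface is effectively planar here
        s = -qc / qb if abs(qb) > 1e-14 else None
        return s if s is not None and s > 1e-12 else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None
    r1 = (-qb - np.sqrt(disc)) / (2.0 * qa)
    r2 = (-qb + np.sqrt(disc)) / (2.0 * qa)
    hits = [s for s in sorted((r1, r2)) if s > 1e-12]
    return hits[0] if hits else None

# Unit sphere: A = I, b = 0, c = -1. A particle at the origin moving along +x
# travels s = 1 before crossing the surface.
s_hit = quadric_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                         np.eye(3), np.zeros(3), -1.0)
```

PENGEOM's "fuzzy" surfaces address exactly the round-off sensitivity visible in the tolerance checks above.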
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wave breaking. However, the wavelet-based adaptive method developed here does not yield significant improvements over Vlasov solvers on a uniform mesh, owing to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different grid-refinement ideas or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires a substantial increase in the total number of phase-space grid points as the filaments become finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved occupy a very small fraction of the hyper-volume, as happens in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments.
Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character in phase space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more local (in space) numerical schemes, such as compact finite-difference schemes, discontinuous Galerkin methods, or finite-element residual schemes, which are well suited to parallel domain-decomposition techniques.
Application of the top-on-top model to 135Pr
NASA Astrophysics Data System (ADS)
Sugawara-Tanabe, Kazuko; Tanabe, Kosai
2017-09-01
It is proved that the Holstein-Primakoff (HP) boson expansion method is very effective for the case where both the total and single-particle angular momenta have diagonal representations along the same direction. The algebraic solution is described by two kinds of quantum numbers classifying the rotational bands characteristic of the particle-rotor model. One is related to the wobbling motion of the rotor, and the other to the precession of the single-particle angular momentum. Employing angular-momentum-dependent rigid (rig) moments of inertia (MoI), which simulate the Coriolis anti-pairing effect based on the constrained self-consistent Hartree-Fock-Bogoliubov (HFB) equation, we obtain good fits not only to the energy-level scheme, but also to the electromagnetic transition rates and the mixing ratio for 135Pr.
Genetic particle filter application to land surface temperature downscaling
NASA Astrophysics Data System (ADS)
Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz
2014-03-01
Thermal infrared data are widely used for surface flux estimation, giving the possibility to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies that use the coarse-spatial/high-temporal-resolution LST remote-sensing products for better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method based on particle filtering was developed to downscale LST. The basic tenet of our approach is to constrain LST dynamics simulated at both HSR and HTR through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated in the framework of the study area landscape (Crau-Camargue, France) and climate for the year 2006. The GPF performance was evaluated against observation errors and temporal sampling. Results show that the GPF outperforms prior model estimations. Finally, the GPF method was applied to Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors of less than 2.4 K, which is a very promising result for downscaling LST satellite products.
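A genetic particle filter of the kind described above alternates a selection step (resampling by observation likelihood) with a mutation step that preserves ensemble diversity. A minimal sketch under assumed toy values (the study's actual state is a land-surface model's subpixel temperatures, and its observation operator is aggregation to the coarse pixel):

```python
import numpy as np

rng = np.random.default_rng(0)

def gpf_step(particles, obs, obs_sigma, mutation_sigma):
    """One selection + mutation cycle of a genetic particle filter.
    Each particle holds a vector of subpixel temperatures; the observation
    operator is aggregation (here: the mean) to the coarse pixel scale."""
    # Selection: weight by the likelihood of the coarse-scale observation.
    w = np.exp(-0.5 * ((particles.mean(axis=1) - obs) / obs_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resampling
    # Mutation: jitter the survivors so the ensemble keeps its diversity.
    return particles[idx] + rng.normal(0.0, mutation_sigma, particles.shape)

# 100 particles of 4 subpixel temperatures (K); coarse observation is 300 K.
particles = rng.normal(295.0, 5.0, size=(100, 4))
for _ in range(20):
    particles = gpf_step(particles, obs=300.0, obs_sigma=0.5, mutation_sigma=0.1)
```

After a few cycles the aggregated ensemble temperature tracks the coarse observation, while the subpixel values retain a spread set by the prior and mutation noise.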
Parameterizations of Dry Deposition for the Industrial Source Complex Model
NASA Astrophysics Data System (ADS)
Wesely, M. L.; Doskey, P. V.; Touma, J. S.
2002-05-01
Improved algorithms have been developed to simulate the dry deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex model system. The dry deposition velocities are described in conventional resistance schemes, in which micrometeorological formulas describe the aerodynamic resistances above the surface. Pathways for the uptake of gases at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. Standardized land-use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory results and theoretical considerations has been developed to evaluate the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves. The dry deposition velocities of particulate HAPs are simulated with a resistance scheme in which the deposition velocity is described for two size modes: a fine mode with particles less than about 2.5 microns in diameter, and a coarse mode with larger particles, excluding very coarse particles larger than about 10 microns in diameter. For the fine mode, the deposition velocity is calculated with a parameterization based on observations of sulfate dry deposition. For the coarse mode, a representative settling velocity is assumed. The total deposition velocity is then estimated as the sum of the two deposition velocities weighted according to the mass expected in each mode.
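The resistance scheme and the two-mode particle treatment can be summarized in a few lines. The sketch below is illustrative, not the ISC model's code, and the resistance and velocity values are assumed:

```python
def gas_deposition_velocity(r_a, r_b, r_c):
    """Series-resistance form for gases: v_d = 1 / (r_a + r_b + r_c), with the
    aerodynamic, quasi-laminar and surface resistances in s/m."""
    return 1.0 / (r_a + r_b + r_c)

def particle_deposition_velocity(vd_fine, v_settle_coarse, mass_frac_fine):
    """Two-mode treatment from the abstract: a sulfate-based fine-mode
    deposition velocity and a representative coarse-mode settling velocity,
    combined as a mass-weighted sum."""
    return mass_frac_fine * vd_fine + (1.0 - mass_frac_fine) * v_settle_coarse

# Illustrative (assumed) values, resistances in s/m and velocities in m/s:
vd_gas = gas_deposition_velocity(r_a=30.0, r_b=10.0, r_c=60.0)      # 0.01 m/s
vd_part = particle_deposition_velocity(vd_fine=0.002, v_settle_coarse=0.02,
                                       mass_frac_fine=0.75)
```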
Effect of polarization and focusing on laser pulse driven auto-resonant particle acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagar, Vikram; Sengupta, Sudip; Kaw, Predhiman
2014-04-15
The effect of laser polarization and focusing on the final energy gain of a particle in the auto-resonant acceleration scheme is studied theoretically using a finite-duration laser pulse with a Gaussian temporal envelope. Exact expressions for the dynamical variables, viz. position, momentum, and energy, are obtained by analytically solving the relativistic equation of motion describing particle dynamics in the combined field of an elliptically polarized finite-duration pulse and a homogeneous static axial magnetic field. From the solutions, it is shown that for a given set of laser parameters, viz. intensity and pulse length, along with the static magnetic field, the energy gain of a positively charged particle is maximal for a right circularly polarized laser pulse. Further, a new scheme is proposed for particle acceleration by subjecting the particle to the combined field of a focused finite-duration laser pulse and a static axial magnetic field. In this scheme, the particle is initially accelerated by the focused laser field, which drives the non-resonant particle to a second stage of acceleration by cyclotron auto-resonance. The new scheme is found to be more efficient than the two individual schemes, i.e., auto-resonant acceleration and direct acceleration by a focused laser field, as significant particle acceleration can be achieved at values of the static axial magnetic field and laser intensity that are one order of magnitude lower.
NASA Astrophysics Data System (ADS)
Martinez, R.; Larouche, D.; Cailletaud, G.; Guillot, I.; Massinon, D.
2015-06-01
The precipitation of Al2Cu particles in a 319 T7 aluminum alloy has been modeled. A theoretical approach enables the concomitant computation of nucleation, growth and coarsening. The framework is based on an implicit finite-difference scheme. The continuity equation is discretized in time and space to obtain a matrix form, and the inversion of a tridiagonal matrix yields the evolution of the Al2Cu particle size distribution at t + Δt. The fluxes between size-class boundaries, as well as the fluxes at the domain boundaries, are computed so as to conserve the mass of the system. The essential results of the model are compared to TEM measurements. Simulations provide quantitative features of the impact of the cooling rate on the particle size distribution, and the results agree with the TEM measurements. This kind of multiscale approach opens new perspectives in the design of highly loaded components such as cylinder heads, enabling a more precise prediction of the microstructure and its evolution as a function of continuous cooling rates.
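The implicit step described above hinges on inverting a tridiagonal matrix, which is typically done in O(n) with the Thomas algorithm rather than a general solver. A sketch with assumed illustrative coefficients (not the paper's actual discretization):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal).
    This is the kind of inversion an implicit finite-difference step of the
    continuity equation for the size distribution requires at each t + dt."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Backward-Euler, diffusion-like step on a size grid (illustrative values):
n, lam = 5, 0.5
a = np.full(n, -lam); b = np.full(n, 1.0 + 2.0 * lam); c = np.full(n, -lam)
f_old = np.ones(n)
f_new = thomas_solve(a, b, c, f_old)
```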
Chen, G.; Chacón, L.
2015-08-11
For decades, the Vlasov–Darwin model has been recognized to be attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for the Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. The algorithm conserves total energy, local charge, and canonical momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.
Spectral Kinetic Simulation of the Ideal Multipole Resonance Probe
NASA Astrophysics Data System (ADS)
Gong, Junbo; Wilczek, Sebastian; Szeremley, Daniel; Oberrath, Jens; Eremin, Denis; Dobrygin, Wladislaw; Schilling, Christian; Friedrichs, Michael; Brinkmann, Ralf Peter
2015-09-01
The term Active Plasma Resonance Spectroscopy (APRS) denotes a class of diagnostic techniques which utilize the natural ability of plasmas to resonate on or near the electron plasma frequency ωpe: an RF signal in the GHz range is coupled into the plasma via an electric probe, the spectral response of the plasma is recorded, and a mathematical model is used to determine plasma parameters such as the electron density ne or the electron temperature Te. One particular realization of the method is the Multipole Resonance Probe (MRP). The ideal MRP is a geometrically simplified version of that probe; it consists of two dielectrically shielded, hemispherical electrodes to which the RF signal is applied. A particle-based numerical algorithm is described which enables a kinetic simulation of the interaction of the probe with the plasma. Similar to the well-known particle-in-cell (PIC) method, it consists of two modules, a particle pusher and a field solver. The Poisson solver determines, with the help of a truncated expansion into spherical harmonics, the new electric field at each particle position directly, without invoking a numerical grid. The effort of the scheme scales linearly with the ensemble size N.
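The grid-free field solve can be illustrated in its simplest axisymmetric (m = 0) form, where the spherical-harmonics expansion reduces to Legendre polynomials: expansion coefficients are accumulated from the particles, and the potential is then evaluated directly at any point. This is a sketch of the general technique, not the authors' code:

```python
import numpy as np
from numpy.polynomial import legendre

def multipole_potential(q, r_src, cos_src, r_eval, cos_eval, lmax):
    """Exterior potential (Gaussian units) of point charges via a truncated
    m = 0 multipole expansion: phi = sum_l q_l P_l(cos th) / r^(l+1), with
    q_l = sum_i q_i r_i^l P_l(cos th_i). Valid for r_eval > max(r_src)."""
    phi = 0.0
    for l in range(lmax + 1):
        coef = np.zeros(l + 1)
        coef[l] = 1.0                              # selects P_l in legval
        Pl_src = legendre.legval(cos_src, coef)
        Pl_eval = legendre.legval(cos_eval, coef)
        q_l = np.sum(q * r_src ** l * Pl_src)      # moment from all particles
        phi += q_l * Pl_eval / r_eval ** (l + 1)
    return phi

# Two unit charges on the z-axis at r = 0.1, evaluated on the axis at r = 2;
# the truncated series converges rapidly since (0.1 / 2)^l shrinks fast.
q = np.array([1.0, 1.0])
r_src = np.array([0.1, 0.1]); cos_src = np.array([1.0, -1.0])
phi = multipole_potential(q, r_src, cos_src, r_eval=2.0, cos_eval=1.0, lmax=8)
```

Because the moments are sums over particles, the cost per field evaluation is independent of any mesh, consistent with the linear scaling in N noted in the abstract.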
Landázuri, Andrea C.; Sáez, A. Eduardo; Anthony, T. Renée
2016-01-01
This work presents fluid flow and particle trajectory simulation studies to determine the aspiration efficiency of a horizontally oriented occupational air sampler using computational fluid dynamics (CFD). Grid adaption and manual scaling of the grids were applied to two sampler prototypes based on a 37-mm cassette. The standard k–ε model was used to simulate the turbulent air flow, and a second-order streamline-upwind discretization scheme was used to stabilize the convective terms of the Navier–Stokes equations. Successively scaled grids for each configuration were created both manually and by means of grid adaption using the velocity gradient in the main flow direction. Solutions were verified to assess iterative convergence, grid independence, and monotonic convergence. The particle aspiration efficiencies determined for the two prototype samplers were indistinguishable, indicating that the porous filter does not play a noticeable role in particle aspiration. The results show that grid adaption is a powerful tool that allows specific regions requiring fine detail to be refined, thereby better resolving the flow. It was verified that adaptive grids provided a larger number of locations with monotonic convergence than the manual grids and required the least computational effort. PMID:26949268
NASA Astrophysics Data System (ADS)
Rao, Chengping; Zhang, Youlin; Wan, Decheng
2017-12-01
Fluid-Structure Interaction (FSI) caused by fluid impacting onto a flexible structure commonly occurs in naval architecture and ocean engineering. Research on the problem of wave-structure interaction is important to ensure the safety of offshore structures. This paper presents the Moving Particle Semi-implicit and Finite Element Coupled Method (MPS-FEM) for simulating FSI problems. The Moving Particle Semi-implicit (MPS) method is used to calculate the fluid domain, while the Finite Element Method (FEM) is used to address the structure domain. The scheme for coupling MPS and FEM is introduced first. Then, numerical validation and a convergence study are performed to verify the accuracy of the solver for solitary-wave generation and FSI problems. The interaction between a solitary wave and an elastic structure is investigated using the MPS-FEM coupled method.
An Adiabatic Phase-Matching Accelerator
Lemery, Francois; Floettmann, Klaus; Piot, Philippe; ...
2018-05-25
We present a general concept to accelerate non-relativistic charged particles. Our concept employs an adiabatically tapered dielectric-lined waveguide which supports accelerating phase velocities for synchronous acceleration. We propose an ansatz for the transient field equations, show that it satisfies Maxwell's equations under an adiabatic approximation, and find excellent agreement with a finite-difference time-domain computer simulation. The fields were implemented into the particle-tracking program ASTRA, and we present beam dynamics results for an accelerating field with a 1-mm wavelength and a peak electric field of 100 MV/m. The numerical simulations indicate that a ~200-keV electron beam can be accelerated to an energy of ~10 MeV over ~10 cm. The novel scheme is also found to form electron beams with parameters of interest to a wide range of applications including, e.g., future advanced accelerators and ultra-fast electron diffraction.
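The quoted numbers are mutually consistent: for a particle kept synchronous with the wave, the energy gain per unit length is simply eE_z, so 100 MV/m over 10 cm yields 10 MeV regardless of the injection energy. A quick back-of-envelope check (electron constants only; the taper itself is not modeled):

```python
import math

MC2_KEV = 510.99895                     # electron rest energy in keV

def beta(W_keV):
    """Speed as a fraction of c at kinetic energy W."""
    g = 1.0 + W_keV / MC2_KEV
    return math.sqrt(1.0 - 1.0 / g ** 2)

def final_energy_keV(W0_keV, Ez_MV_per_m, L_cm):
    """For a synchronous particle dW/dz = e*E_z exactly, so the gain is
    e*E_z*L; the tapered waveguide's role is to keep the particle in phase
    as beta rises toward 1."""
    return W0_keV + Ez_MV_per_m * 1.0e3 * (L_cm / 100.0)

W_final = final_energy_keV(200.0, 100.0, 10.0)    # 10200 keV, i.e. ~10 MeV
beta_in, beta_out = beta(200.0), beta(W_final)    # ~0.70 rising to ~0.999
```

The beta values show why the taper is needed: the phase velocity must start near 0.7c at injection and approach c by the exit.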
Synchronous acceleration with tapered dielectric-lined waveguides
Lemery, Francois; Floettmann, Klaus; Piot, Philippe; ...
2018-05-25
Here, we present a general concept to accelerate non-relativistic charged particles. Our concept employs an adiabatically tapered dielectric-lined waveguide which supports accelerating phase velocities for synchronous acceleration. We propose an ansatz for the transient field equations, show that it satisfies Maxwell's equations under an adiabatic approximation, and find excellent agreement with a finite-difference time-domain computer simulation. The fields were implemented into the particle-tracking program ASTRA, and we present beam dynamics results for an accelerating field with a 1-mm wavelength and a peak electric field of 100 MV/m. The numerical simulations indicate that a ~200-keV electron beam can be accelerated to an energy of ~10 MeV over ~10 cm. The novel scheme is also found to form electron beams with parameters of interest to a wide range of applications including, e.g., future advanced accelerators and ultra-fast electron diffraction.
Numerical simulation of fire vortex
NASA Astrophysics Data System (ADS)
Barannikova, D. D.; Borzykh, V. E.; Obukhov, A. G.
2018-05-01
The article considers the numerical simulation of the swirling flow of air around a smoothly heated vertical cylindrical domain under the action of gravity and Coriolis forces. The complete system of Navier-Stokes equations is solved numerically at constant viscosity and thermal-conductivity coefficients. Together with the proposed initial and boundary conditions, these solutions describe complex non-stationary 3D flows of a viscous, compressible, heat-conducting gas. For various instants during the initial flow-formation stage, all gas-dynamic parameters, namely the density, temperature, pressure and three velocity components of the gas particles, have been calculated using an explicit finite-difference scheme. Instantaneous streamlines corresponding to the trajectories of particle motion in the emerging flow have been constructed. A negative swirl direction of the air flow arising when the vertical cylindrical domain is heated has been identified.
NASA Astrophysics Data System (ADS)
Miyake, Y.; Usui, H.; Kojima, H.; Omura, Y.; Matsumoto, H.
2008-06-01
We have developed a numerical tool for the analysis of antenna impedance in a plasma environment, making use of electromagnetic particle-in-cell (PIC) plasma simulations. To validate the developed tool, we first examined the antenna impedance in a homogeneous kinetic plasma and confirmed that the obtained results basically agree with conventional theories. We next applied the tool to an ion-sheathed dipole antenna. The results confirmed that the inclusion of ion-sheath effects reduces the capacitance below the electron plasma frequency. The results also revealed that the signature of the impedance resonance observed at the plasma frequency is modified by the presence of the sheath. Since the sheath dynamics are solved by the PIC scheme throughout the antenna analysis in a self-consistent manner, the developed tool is capable of the more practical and complicated antenna analyses that will be necessary in real space missions.
An Adiabatic Phase-Matching Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemery, Francois; Floettmann, Klaus; Piot, Philippe
2017-12-22
We present a general concept to accelerate non-relativistic charged particles. Our concept employs an adiabatically tapered dielectric-lined waveguide which supports accelerating phase velocities for synchronous acceleration. We propose an ansatz for the transient field equations, show that it satisfies Maxwell's equations under an adiabatic approximation, and find excellent agreement with a finite-difference time-domain computer simulation. The fields were implemented into the particle-tracking program ASTRA, and we present beam dynamics results for an accelerating field with a 1-mm wavelength and a peak electric field of 100 MV/m. The numerical simulations indicate that a ~200-keV electron beam can be accelerated to an energy of ~10 MeV over ~10 cm. The novel scheme is also found to form electron beams with parameters of interest to a wide range of applications including, e.g., future advanced accelerators and ultra-fast electron diffraction.
Modeling of magnetic hystereses in soft MREs filled with NdFeB particles
NASA Astrophysics Data System (ADS)
Kalina, K. A.; Brummund, J.; Metsch, P.; Kästner, M.; Borin, D. Yu; Linke, J. M.; Odenbach, S.
2017-10-01
Herein, we investigate the structure-property relationships of soft magnetorheological elastomers (MREs) filled with remanently magnetizable particles. The study is motivated by experimental results which indicate a large difference between the magnetization loops of soft MREs filled with NdFeB particles and the loops of such particles embedded in a comparatively stiff matrix, e.g. an epoxy resin. We present a microscale model for MREs based on a general continuum formulation of the magnetomechanical boundary value problem which is valid for finite strains. In particular, we develop an energetically consistent constitutive model for the hysteretic magnetization behavior of the magnetically hard particles. The microstructure is discretized and the problem is solved numerically in terms of a coupled nonlinear finite element approach. Since the local magnetic and mechanical fields are resolved explicitly inside the heterogeneous microstructure of the MRE, our model also accounts for interactions of particles close to each other. In order to connect the microscopic fields to effective macroscopic quantities of the MRE, a suitable computational homogenization scheme is used. Based on this modeling approach, it is demonstrated that the observable macroscopic behavior of the considered MREs results from the rotation of the embedded particles. Furthermore, the performed numerical simulations indicate that the reversal of the sample's magnetization occurs through a combination of particle rotations and internal domain conversion processes. All of our simulation results for such materials are in good qualitative agreement with the experiments.
All-Atom Continuous Constant pH Molecular Dynamics With Particle Mesh Ewald and Titratable Water.
Huang, Yandong; Chen, Wei; Wallace, Jason A; Shen, Jana
2016-11-08
Development of a pH stat to properly control solution pH in biomolecular simulations has been a long-standing goal in the community. Toward this goal, recent years have witnessed the emergence of the so-called constant pH molecular dynamics methods. However, the accuracy and generality of these methods have been hampered by the use of implicit-solvent models or truncation-based electrostatic schemes. Here we report the implementation of the particle mesh Ewald (PME) scheme into the all-atom continuous constant pH molecular dynamics (CpHMD) method, enabling CpHMD to be performed with a standard MD engine at a fractional added computational cost. We demonstrate the performance using pH replica-exchange CpHMD simulations with titratable water for a stringent test set of proteins: HP36, BBL, HEWL, and SNase. With a sampling time of 10 ns per replica, most pKa values are converged, yielding average absolute and root-mean-square deviations from experiment of 0.61 and 0.77, respectively. Linear regression of the calculated vs experimental pKa shifts gives a correlation coefficient of 0.79, a slope of 1, and an intercept near 0. Analysis reveals inadequate sampling of the structural relaxation accompanying a protonation-state switch as a major source of the remaining errors, which are reduced as the simulation is prolonged. These data suggest that PME-based CpHMD can be used as a general tool for pH-controlled simulations of macromolecular systems in various environments, enabling atomic insights into pH-dependent phenomena involving not only soluble proteins but also transmembrane proteins, nucleic acids, surfactants, and polysaccharides.
NASA Astrophysics Data System (ADS)
Capecelatro, Jesse
2018-03-01
It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smoothed particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
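The Lagrange-multiplier constraint amounts, per time step, to keeping each particle on the sphere and its velocity tangential. A minimal sketch of such a constrained update (illustrative radius and step size; the actual SPH forces are omitted):

```python
import numpy as np

R = 6.371e6   # sphere radius in meters (Earth-like; assumed for illustration)

def constrained_step(x, v, a, dt):
    """Unconstrained kick-drift update followed by the constraint: positions
    are projected radially back onto |x| = R and the radial velocity
    component is removed (a simple stand-in for the Lagrange multiplier)."""
    v = v + dt * a
    x = x + dt * v
    r_hat = x / np.linalg.norm(x, axis=1, keepdims=True)
    x = R * r_hat                                              # back on sphere
    v = v - np.sum(v * r_hat, axis=1, keepdims=True) * r_hat   # tangential only
    return x, v

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 3))
x = R * x / np.linalg.norm(x, axis=1, keepdims=True)    # start on the sphere
v = rng.normal(scale=10.0, size=(10, 3))                # ~10 m/s velocities
for _ in range(100):
    x, v = constrained_step(x, v, a=np.zeros_like(x), dt=60.0)
```

Working in Cartesian coordinates with this projection sidesteps the polar coordinate singularities the abstract mentions.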
Simultaneous Control of Multispecies Particle Transport and Segregation in Driven Lattices
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Aritra K.; Liebchen, Benno; Schmelcher, Peter
2018-05-01
We provide a generic scheme to separate the particles of a mixture by their physical properties like mass, friction, or size. The scheme employs a periodically shaken two-dimensional dissipative lattice and hinges on a simultaneous transport of particles in species-specific directions. This selective transport is achieved by controlling the late-time nonlinear particle dynamics, via the attractors embedded in the phase space and their bifurcations. To illustrate the spectrum of possible applications of the scheme, we exemplarily demonstrate the separation of polydisperse colloids and mixtures of cold thermal alkali atoms in optical lattices.
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. 
Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
Numerical Viscosity and the Survival of Gas Giant Protoplanets in Disk Simulations
NASA Astrophysics Data System (ADS)
Pickett, Megan K.; Durisen, Richard H.
2007-01-01
We present three-dimensional hydrodynamic simulations of a gravitationally unstable protoplanetary disk model under the condition of local isothermality. Ordinarily, local isothermality precludes the need for an artificial viscosity (AV) scheme to mediate shocks. Without AV, the disk evolves violently, shredding into dense (although short-lived) clumps. When we introduce our AV treatment in the momentum equation, but without heating due to irreversible compression, our grid-based simulations begin to resemble smoothed particle hydrodynamics (SPH) calculations, where clumps are more likely to survive many orbits. In fact, the standard SPH viscosity appears comparable in strength to the AV that leads to clump longevity in our code. This sensitivity to one numerical parameter suggests extreme caution in interpreting simulations by any code in which long-lived gaseous protoplanetary bodies appear.
Quantum cryptography using single-particle entanglement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jae-Weon; Lee, Eok Kyun; Chung, Yong Wook
2003-07-01
A quantum cryptography scheme based on entanglement between a single-particle state and a vacuum state is proposed. The scheme utilizes linear optics devices to detect the superposition of the vacuum and single-particle states. Existence of an eavesdropper can be detected by using a variant of Bell's inequality.
NASA Astrophysics Data System (ADS)
Barnsley, Lester C.; Carugo, Dario; Aron, Miles; Stride, Eleanor
2017-03-01
The aim of this study was to characterize the behaviour of superparamagnetic particles in magnetic drug targeting (MDT) schemes. A 3-dimensional mathematical model was developed, based on the analytical derivation of the trajectory of a magnetized particle suspended inside a fluid channel carrying laminar flow and in the vicinity of an external source of magnetic force. Semi-analytical expressions to quantify the proportion of captured particles, and their relative accumulation (concentration) as a function of distance along the wall of the channel were also derived. These were expressed in terms of a non-dimensional ratio of the relevant physical and physiological parameters corresponding to a given MDT protocol. The ability of the analytical model to assess magnetic targeting schemes was tested against numerical simulations of particle trajectories. The semi-analytical expressions were found to provide good first-order approximations for the performance of MDT systems in which the magnetic force is relatively constant over a large spatial range. The numerical model was then used to test the suitability of a range of different designs of permanent magnet assemblies for MDT. The results indicated that magnetic arrays that emit a strong magnetic force that varies rapidly over a confined spatial range are the most suitable for concentrating magnetic particles in a localized region. By comparison, commonly used magnet geometries such as button magnets and linear Halbach arrays result in distributions of accumulated particles that are less efficient for delivery. The trajectories predicted by the numerical model were verified experimentally by acoustically focusing magnetic microbeads flowing in a glass capillary channel, and optically tracking their path past a high field gradient Halbach array.
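The trajectory analysis sketched in the abstract simplifies considerably in the inertialess (low-Reynolds-number) limit, where the particle velocity is the local fluid velocity plus the magnetophoretic drift F/(6πμa). A sketch with assumed illustrative values (constant transverse force, plane Poiseuille flow):

```python
import numpy as np

def capture_distance(u_max, h, F_mag, mu, a, y0, dt=1e-4):
    """Downstream distance at which an inertialess particle reaches the wall
    (y = 0) of a plane channel of height h carrying Poiseuille flow, under a
    constant transverse magnetic force F_mag. The particle velocity is the
    local fluid velocity plus the Stokes drift F_mag / (6 pi mu a)."""
    v_drift = F_mag / (6.0 * np.pi * mu * a)
    x, y = 0.0, y0
    while y > 0.0:
        u = 4.0 * u_max * (y / h) * (1.0 - y / h)   # parabolic velocity profile
        x += u * dt
        y -= v_drift * dt
    return x

# Assumed illustrative values: 1-um-diameter bead in water, ~10 pN of force,
# 2 mm/s centerline velocity, 200-um channel, particle released at the center.
x_cap = capture_distance(u_max=2e-3, h=200e-6, F_mag=1e-11,
                         mu=1e-3, a=0.5e-6, y0=100e-6)
```

Repeating the calculation over a range of release heights y0 gives the capture fraction and the accumulation profile along the wall that the semi-analytical expressions in the paper quantify.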
NASA Astrophysics Data System (ADS)
Lee, Y.; Combi, M. R.; Tenishev, V.; Bougher, S. W.; Johnson, R. E.; Tully, C.
2016-12-01
Recent observations of Martian geomorphology suggest that water has played a critical role in shaping the present state of the Martian atmosphere and environment. The inventory of water has been depleted throughout the planet's geologic time via various mechanisms, from the surface to the uppermost atmosphere where the Sun-Mars interaction occurs. During the current epoch, dissociative recombination of O2+ is suggested as the main nonthermal mechanism that regulates the escape of atomic O, forming the hot O corona. A nascent hot O atom produced deep in the thermosphere undergoes collisions with the background thermal species, through which the particle can lose energy and become thermalized before it reaches the collisionless regime and escapes. The major hot O collisions with the background species that contribute to the thermalization of hot O are Ohot-Ocold, Ohot-CO2,cold, Ohot-COcold, and Ohot-N2,cold. Previous models have used different collision schemes to describe these collisions. One of the most realistic descriptions uses angular differential cross sections, while the simplest approach uses isotropic collision cross sections. Here, we present a comparison between 3D model results using these two collision schemes to find equivalent hard-sphere collision cross sections that reproduce the effects of using forward-scattering cross sections. We adapted the newly calculated angular differential cross sections to the major hot O collisions. The hot O corona is simulated by coupling our Mars application of the 3D Adaptive Mesh Particle Simulator (M-AMPS) [Tenishev et al., 2008, 2013] and the Mars Global Ionosphere-Thermosphere Model (M-GITM) [Bougher et al., 2015].
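The notion of an equivalent hard-sphere cross section can be illustrated with a small Monte Carlo sketch. This is not the authors' scheme: it substitutes the Henyey-Greenstein phase function for the actual angular differential cross sections, and uses momentum-transfer weighting, the mean of (1 - cos theta), to scale an isotropic cross section down to one with equivalent thermalizing effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_isotropic(n):
    """Isotropic (hard-sphere) scattering: cos(theta) uniform on [-1, 1]."""
    return rng.uniform(-1.0, 1.0, n)

def sample_forward(n, g=0.8):
    """Forward-peaked scattering sampled from the Henyey-Greenstein phase
    function (a stand-in for the paper's differential cross sections);
    g is the mean scattering cosine."""
    u = rng.uniform(0.0, 1.0, n)
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u)) ** 2) / (2 * g)

mu_iso = sample_isotropic(100_000)
mu_fwd = sample_forward(100_000)

# Momentum-transfer weighting: sigma_equiv = sigma_tot * <1 - cos(theta)>.
# Isotropic scattering gives <1 - cos> = 1, so the ratio below is the factor
# by which a hard-sphere cross section should shrink to mimic forward
# scattering's weaker per-collision thermalization.
scale = np.mean(1.0 - mu_fwd) / np.mean(1.0 - mu_iso)
```

With g = 0.8 the scale factor comes out near 0.2: strongly forward-peaked collisions transfer far less momentum per collision than the isotropic model assumes.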
A Novel Quantum Blind Signature Scheme with Four-Particle Cluster States
NASA Astrophysics Data System (ADS)
Fan, Ling
2016-03-01
In an arbitrated quantum signature scheme, the signer signs the message and the receiver verifies the signature's validity with the assistance of the arbitrator. We present an arbitrated quantum blind signature scheme based on measuring four-particle cluster states and coding. By exploiting the special correlations of four-particle cluster states, we can not only ensure the security of the quantum signature but also guarantee the anonymity of the message owner. The scheme has wide applications in E-payment systems, E-government, E-business, etc.
Performance of ICTP's RegCM4 in Simulating the Rainfall Characteristics over the CORDEX-SEA Domain
NASA Astrophysics Data System (ADS)
Neng Liew, Ju; Tangang, Fredolin; Tieh Ngai, Sheau; Chung, Jing Xiang; Narisma, Gemma; Cruz, Faye Abigail; Phan Tan, Van; Thanh, Ngo-Duc; Santisirisomboon, Jerasron; Milindalekha, Jaruthat; Singhruck, Patama; Gunawan, Dodo; Satyaningsih, Ratna; Aldrian, Edvin
2015-04-01
The performance of the RegCM4 in simulating rainfall variations over the Southeast Asia region was examined. Different combinations of six deep convective parameterization schemes, namely (i) the Grell scheme with the Arakawa-Schubert closure assumption, (ii) the Grell scheme with the Fritsch-Chappell closure assumption, (iii) the Emanuel MIT scheme, (iv) a mixed scheme with the Emanuel MIT scheme over the ocean and the Grell scheme over the land, (v) a mixed scheme with the Grell scheme over the ocean and the Emanuel MIT scheme over the land, and (vi) the Kuo scheme, and three ocean flux treatments were tested. In order to account for uncertainties among observation products, four different gridded rainfall products were used for comparison. The simulated climate is generally drier over the equatorial regions and slightly wetter over mainland Indo-China compared to the observations. However, simulations with the MIT cumulus scheme used over the land area consistently produce large positive rainfall biases, although they simulate more realistic annual rainfall variations. The simulations were found to be less sensitive to the treatment of ocean fluxes. Although the simulations reproduced the rainfall climatology well, all of them simulated much stronger interannual variability compared to the observed. Nevertheless, the time evolution of the interannual variations was well reproduced, particularly over the eastern part of the maritime continent. Over mainland Southeast Asia (SEA), unrealistic rainfall anomaly processes were simulated. The lack of summer-season air-sea interaction results in strong oceanic forcing over these regions, leading to positive rainfall anomalies during years with warm ocean temperature anomalies. This incurs much stronger atmospheric forcing on the land surface processes compared to the observed. A score-ranking system was designed to rank the simulations according to their performance in reproducing different aspects of rainfall characteristics.
The results suggest that the simulation with the Emanuel MIT convective scheme and the BATS land surface scheme produces the best collective performance among the simulations.
Overview of the relevant CFD work at Thiokol Corporation
NASA Technical Reports Server (NTRS)
Chwalowski, Pawel; Loh, Hai-Tien
1992-01-01
An in-house developed proprietary advanced computational fluid dynamics code called SHARP (Trademark) is the primary tool for many flow simulations and design analyses. The SHARP code is a time-dependent, two-dimensional (2-D) axisymmetric numerical solution technique for the compressible Navier-Stokes equations. The solution technique in SHARP uses a vectorizable, implicit, second-order accurate (in time and space) finite volume scheme based on an upwind flux-difference splitting of a Roe-type approximate Riemann solver, Van Leer's flux vector splitting, and a fourth-order artificial dissipation scheme with preconditioning to accelerate the flow solution. Turbulence is simulated by an algebraic model, and ultimately the kappa-epsilon model. Other capabilities of the code include 2-D two-phase Lagrangian particle tracking and cell blockages. Extensive development and testing have been conducted on the 3-D version of the code with flow, combustion, and turbulence interactions. The emphasis here is on the specific applications of SHARP in Solid Rocket Motor design. Information is given in viewgraph form.
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
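The moment-matching step can be sketched with a linear program. The following is a minimal illustration under stated assumptions, not the paper's method: the target population is a hypothetical lognormal-like distribution, and a simple linear penalty stands in for the entropy-inspired cost function.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate abscissas (e.g. particle diameters, arbitrary units) and the
# moments of a hypothetical lognormal-like population to be matched.
x = np.linspace(0.01, 1.0, 200)
true_w = np.exp(-0.5 * ((np.log(x) - np.log(0.2)) / 0.5) ** 2)
true_w /= true_w.sum()
orders = [0, 1, 2, 3]
m = np.array([np.sum(true_w * x**k) for k in orders])

# Equality constraints: nonnegative quadrature weights must reproduce
# each prescribed moment exactly.
A_eq = np.vstack([x**k for k in orders])

# Linear surrogate cost (the paper's entropy-inspired cost is more
# elaborate): penalize weight far from the mean abscissa.
c = np.abs(np.log(x / x.mean()))
res = linprog(c, A_eq=A_eq, b_eq=m, bounds=(0, None), method="highs")
weights = res.x
```

Because the simplex-type solver returns a vertex of the feasible set, the optimized `weights` vector is sparse: at most a handful of nonzero quadrature points carry all four moments, which is the sense in which the representation is "sparse" above.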
NASA Astrophysics Data System (ADS)
Kokkalis, P.; Papayannis, A.; Amiridis, V.; Mamouri, R. E.; Veselovskii, I.; Kolgotin, A.; Tsaknakis, G.; Kristiansen, N. I.; Stohl, A.; Mona, L.
2013-09-01
Vertical profiles of the optical (extinction and backscatter coefficients, lidar ratio, and Ångström exponent), microphysical (mean effective radius, mean refractive index, mean number concentration) and geometrical properties, as well as the mass concentration, of volcanic particles from the Eyjafjallajökull eruption were retrieved at selected heights over Athens, Greece, using multi-wavelength Raman lidar measurements performed during the period 21-24 April 2010. Aerosol Robotic Network (AERONET) columnar particulate measurements, along with inversion schemes, were combined with the lidar observations to deliver the aforementioned products. The well-known FLEXPART (FLEXible PARTicle dispersion) model, widely used for volcanic dispersion simulations, was also employed to estimate the horizontal and vertical distribution of the volcanic particles. Compared with the lidar measurements within the planetary boundary layer over Athens, FLEXPART proved to be a useful tool for determining the state of mixing of ash with other, locally emitted aerosol types. The major findings of our work concern the identification of volcanic particle layers in the form of filaments, after 7-day transport from the volcanic source (approximately 4000 km away from our site), from the surface up to 10 km according to the lidar measurements. Mean hourly averaged lidar signals indicated that the layer thickness of the volcanic particles ranged between 1.5 and 2.2 km. The corresponding aerosol optical depth was found to vary from 0.01 to 0.18 at 355 nm and from 0.02 up to 0.17 at 532 nm. Furthermore, the corresponding lidar ratios (S) ranged between 60 and 80 sr at 355 nm and 44 and 88 sr at 532 nm. The mean effective radius of the volcanic particles, estimated by applying an inversion scheme to the lidar data, was found to vary within the range 0.13-0.38 μm, and the refractive index ranged from 1.39+0.009i to 1.48+0.006i.
This high variability is most probably attributed to the mixing of aged volcanic particles with other aerosol types of local origin. Finally, the LIRIC (LIdar/Radiometer Inversion Code) combined lidar/sun-photometric inversion algorithm was applied in order to retrieve particle concentrations. These were compared with FLEXPART simulations of the vertical distribution of ash, showing good agreement concerning not only the geometrical properties of the volcanic particle layers but also the particle mass concentrations.
Long-Ranged Oppositely Charged Interactions for Designing New Types of Colloidal Clusters
NASA Astrophysics Data System (ADS)
Demirörs, Ahmet Faik; Stiefelhagen, Johan C. P.; Vissers, Teun; Smallenburg, Frank; Dijkstra, Marjolein; Imhof, Arnout; van Blaaderen, Alfons
2015-04-01
Getting control over the valency of colloids is not trivial and has been a long-desired goal in the colloidal domain. Typically, tuning the preferred number of neighbors for colloidal particles requires directional bonding, as in the case of patchy particles, which is difficult to realize experimentally. Here, we demonstrate a general method for creating colloidal analogs of molecules and other new regular colloidal clusters without using patchiness or complex bonding schemes (e.g., DNA coating). Instead, we use a combination of long-ranged attractive and repulsive interactions between oppositely charged particles, which also enables regular clusters whose particles are not all in close contact. We show that, due to the interplay between their attractions and repulsions, oppositely charged particles dispersed in a solvent of intermediate dielectric constant (4 < ε < 10) provide a viable approach for the formation of binary colloidal clusters. Tuning the size ratio and interactions of the particles enables control of the type and shape of the resulting regular colloidal clusters. Finally, we present an example of clusters made up of large, negatively charged host particles and small, positively charged satellite particles, for which the electrostatic properties and interactions can be changed with an electric field. It appears that for sufficiently strong fields the satellite particles can move over the surface of the host particles and polarize the clusters. For even stronger fields, the satellite particles can be completely pulled off, reversing the net charge on the cluster. With computer simulations, we investigate how charged particles distribute on an oppositely charged sphere to minimize their energy, and compare the results with the solutions to the well-known Thomson problem. We also use the simulations to explore the dependence of such clusters on the Debye screening length κ^-1 and the ratio of charges on the particles, showing good agreement with experimental observations.
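The Thomson-problem comparison mentioned above can be sketched in a few lines. This is a schematic stand-in for the paper's simulations, assuming unscreened unit Coulomb charges and plain projected gradient descent; for N = 4 the minimum-energy configuration is the regular tetrahedron with energy about 3.674.

```python
import numpy as np

def thomson_energy(p):
    """Coulomb energy of unit charges at positions p on the unit sphere."""
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(len(p), k=1)     # each pair counted once
    return np.sum(1.0 / d[iu])

def relax_on_sphere(n, steps=2000, lr=0.01, seed=1):
    """Minimize the Thomson energy by projected gradient descent: move each
    charge along the net Coulomb repulsion, then re-project onto the sphere."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                       # no self-force
        force = np.sum(diff / d[..., None] ** 3, axis=1)  # 1/r^2 repulsion
        p += lr * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)     # back onto sphere
    return p

# Four charges relax toward a regular tetrahedron:
p4 = relax_on_sphere(4)
e4 = thomson_energy(p4)
```

The experimental satellite-on-host clusters differ from this idealization mainly through screening (finite κ^-1) and the finite satellite size, which is why the paper compares against, rather than reproduces, the Thomson solutions.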
Soria, José; Gauthier, Daniel; Flamant, Gilles; Rodriguez, Rosa; Mazza, Germán
2015-09-01
Municipal Solid Waste Incineration (MSWI) in fluidized beds is a very interesting technology, mainly due to its high combustion efficiency, great flexibility for treating several types of waste fuels, and reduction in pollutants emitted with the flue gas. However, there is great concern with respect to the fate of heavy metals (HM) contained in MSW and their environmental impact. In this study, a coupled two-scale CFD model was developed for MSWI in a bubbling fluidized bed. It presents an original scheme that combines a single-particle model and a global fluidized bed model in order to represent HM vaporization during MSW combustion. Two of the most representative HM (Cd and Pb), with bed temperatures ranging between 923 and 1073 K, have been considered. This new approach uses ANSYS FLUENT 14.0 as the modelling platform for the simulations, along with a complete set of self-developed user-defined functions (UDFs). The simulation results are compared to experimental data obtained previously by the research group in a lab-scale fluidized bed incinerator. The comparison indicates that the proposed CFD model predicts well the evolution of the HM release for the bed temperatures analyzed. It shows that both bed temperature and bed dynamics influence the HM vaporization rate. It can be concluded that CFD is a rigorous tool that provides valuable information about HM vaporization, and that the original two-scale simulation scheme adopted allows a better representation of the actual particle behavior in a fluidized bed incinerator.
Study on the tumor-induced angiogenesis using mathematical models.
Suzuki, Takashi; Minerva, Dhisa; Nishiyama, Koichi; Koshikawa, Naohiko; Chaplain, Mark Andrew Joseph
2018-01-01
We studied angiogenesis using mathematical models describing the dynamics of tip cells. We reviewed the basic ideas of angiogenesis models and their numerical simulation techniques used to produce realistic computer graphics images of sprouting angiogenesis. We examined the classical model of Anderson-Chaplain using fundamental concepts of mass transport and chemical reaction, with ECM degradation included. We then constructed two types of numerical schemes, model-faithful and model-driven ones, in which new techniques of numerical simulation are introduced, such as transient probability, particle velocity, and Boolean variables.
Spin-1 models in the ultrastrong-coupling regime of circuit QED
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Lamata, L.; Solano, E.; Romero, G.; Retamal, J. C.
2018-02-01
We propose a superconducting circuit platform for simulating spin-1 models. To this purpose we consider a chain of N ultrastrongly coupled qubit-resonator systems interacting through a grounded superconducting quantum interference device (SQUID). The anharmonic spectrum of the qubit-resonator system and the selection rules imposed by the global parity symmetry allow us to activate well controlled two-body quantum gates via ac pulses applied to the SQUID. We show that our proposal has the same simulation time for any number of spin-1 interacting particles. This scheme may be implemented within the state-of-the-art circuit QED in the ultrastrong coupling regime.
A Numerical Study of Cirrus Clouds. Part I: Model Description.
NASA Astrophysics Data System (ADS)
Liu, Hui-Chun; Wang, Pao K.; Schlesinger, Robert E.
2003-04-01
This article, the first of a two-part series, presents a detailed description of a two-dimensional numerical cloud model directed toward elucidating the physical processes governing the evolution of cirrus clouds. The two primary scientific purposes of this work are (a) to determine the evolution and maintenance mechanisms of cirrus clouds and try to explain why some cirrus can persist for a long time; and (b) to investigate the influence of certain physical factors such as radiation, ice crystal habit, latent heat, ventilation effects, and aggregation mechanisms on the evolution of cirrus. The second part will discuss sets of model experiments that were run to address objectives (a) and (b), respectively. As set forth in this paper, the aforementioned two-dimensional numerical model, which comprises the research tool for this study, is organized into three modules that embody dynamics, microphysics, and radiation. The dynamic module develops a set of equations to describe shallow moist convection, also parameterizing turbulence by using a 1.5-order closure scheme. The microphysical module uses a double-moment scheme to simulate the evolution of the size distribution of ice particles. Heterogeneous and homogeneous nucleation of haze particles are included, along with other ice crystal processes such as diffusional growth, sedimentation, and aggregation. The radiation module uses a two-stream radiative transfer scheme to determine the radiative fluxes and heating rates, while the cloud optical properties are determined by the modified anomalous diffraction theory (MADT) for ice particles. One of the main advantages of this cirrus model is its explicit formulation of the microphysical and radiative properties as functions of ice crystal habit.
NASA Technical Reports Server (NTRS)
Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.
2012-01-01
The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single-column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T approximately 260 K, irrespective of the ice nucleation parameterization used. Comparison of the simulated ice water path to available satellite-derived observations was also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.
NASA Astrophysics Data System (ADS)
Hoose, C.; Lohmann, U.; Stier, P.; Verheggen, B.; Weingartner, E.
2008-04-01
The global aerosol-climate model ECHAM5-HAM has been extended by an explicit treatment of cloud-borne particles. Two additional modes for in-droplet and in-crystal particles are introduced, which are coupled to the number of cloud droplet and ice crystal concentrations simulated by the ECHAM5 double-moment cloud microphysics scheme. Transfer, production, and removal of cloud-borne aerosol number and mass by cloud droplet activation, collision scavenging, aqueous-phase sulfate production, freezing, melting, evaporation, sublimation, and precipitation formation are taken into account. The model performance is demonstrated and validated with observations of the evolution of total and interstitial aerosol concentrations and size distributions during three different mixed-phase cloud events at the alpine high-altitude research station Jungfraujoch (Switzerland). Although the single-column simulations cannot be compared one-to-one with the observations, the governing processes in the evolution of the cloud and aerosol parameters are captured qualitatively well. High scavenged fractions are found during the presence of liquid water, while the release of particles during the Bergeron-Findeisen process results in low scavenged fractions after cloud glaciation. The observed coexistence of liquid and ice, which might be related to cloud heterogeneity at subgrid scales, can only be simulated in the model when assuming nonequilibrium conditions.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Nitadori, Keigo; Okamoto, Takashi
2013-02-01
We have developed a numerical software library for collisionless N-body simulations named "Phantom-GRAPE", which greatly accelerates force calculations among particles by using a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). In our library, not only Newton's forces, but also central forces with an arbitrary shape f(r) that has a finite cutoff radius rcut (i.e. f(r)=0 at r>rcut), can be quickly computed. In computing such central forces with an arbitrary force shape f(r), we refer to a pre-calculated look-up table. We also present a new scheme to create the look-up table whose binning is optimal for maintaining good accuracy in computing forces and whose size is small enough to avoid cache misses. Using an Intel Core i7-2600 processor, we measure the performance of our library for both Newton's forces and arbitrarily shaped central forces. In the case of Newton's forces, we achieve 2×10^9 interactions per second with one processor core (or 75 GFLOPS if we count 38 operations per interaction), which is 20 times higher than the performance of an implementation without any explicit use of SIMD instructions, and 2 times higher than that with the SSE instructions. With four processor cores, we obtain a performance of 8×10^9 interactions per second (or 300 GFLOPS). In the case of arbitrarily shaped central forces, we can calculate 1×10^9 and 4×10^9 interactions per second with one and four processor cores, respectively. The performance with one processor core is 6 times and 2 times higher than those of the implementations without any use of SIMD instructions and with the SSE instructions, respectively. These performances depend only weakly on the number of particles, irrespective of the force shape. This is in contrast to the performance of force calculations accelerated by graphics processing units (GPUs), which depends strongly on the number of particles.
This weak dependence of the performance on the number of particles is well suited to collisionless N-body simulations, since such simulations are usually performed with sophisticated N-body solvers such as Tree- and TreePM-methods combined with an individual timestep scheme. We conclude that collisionless N-body simulations accelerated with our library have a significant advantage over those accelerated by GPUs, especially in massively parallel environments.
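The look-up table idea for an arbitrary cutoff force f(r) can be sketched simply. This is not Phantom-GRAPE's table (whose binning is chosen optimally, as described above): it uses a uniform grid in r^2, a common simpler choice since particle codes have r^2 available without a square root, and an invented Plummer-softened force shape.

```python
import numpy as np

r_cut = 3.0

def f_exact(r):
    """Example central force shape with a finite cutoff (a Plummer-softened
    attraction; purely illustrative, not the library's actual kernel).
    Note a real kernel would taper smoothly to zero at the cutoff."""
    return np.where(r < r_cut, r / (r**2 + 0.5) ** 1.5, 0.0)

# Build the table on a uniform grid in r^2 (simpler than the optimal
# binning described in the paper, which refines where f varies fastest).
n_bins = 1024
r2_grid = np.linspace(0.0, r_cut**2, n_bins)
table = f_exact(np.sqrt(r2_grid))

def f_lookup(r):
    """Linear interpolation in the pre-computed table, indexed by r^2.
    Beyond the cutoff, np.interp clamps to the last entry, which is 0."""
    return np.interp(r * r, r2_grid, table)

# Accuracy check away from the origin (the small-r region is where the
# optimal, non-uniform binning of the paper pays off):
r_test = np.linspace(0.3, 2.9, 500)
err = np.max(np.abs(f_lookup(r_test) - f_exact(r_test)))
```

Keeping `table` small (here 8 KiB of doubles) is what lets it stay resident in L1 cache across the inner force loop.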
Decoupling the Role of Particle Inertia and Gravity on Particle Dispersion
NASA Technical Reports Server (NTRS)
Squires, Kyle D.
2002-01-01
Particle dispersion and the influence that particle momentum exchange has on the properties of a turbulent carrier flow in micro-gravity environments challenge present understanding and predictive schemes. The objective of this effort has been to develop and assess high-fidelity simulation tools for predicting the transport of particles suspended in turbulent flows within micro-gravity environments. The computational technique is based on Direct Numerical Simulation (DNS) of the incompressible Navier-Stokes equations. The particular focus of the present work is on the class of dilute flows in which particle volume fractions and inter-particle collisions are negligible. Particle motion is assumed to be governed by drag, with particle relaxation times ranging from the Kolmogorov scale to the Eulerian timescale of the turbulence and particle mass loadings up to one. The velocity field was made statistically stationary by forcing the low wavenumbers of the flow. The calculations were performed using 96^3 collocation points, and the Taylor-scale Reynolds number for the stationary flow was 62. The effect of particles on the turbulence was included in the Navier-Stokes equations using the point-force approximation, in which 96^3 particles were used in the calculations. DNS results show that particles increasingly dissipate fluid kinetic energy with increased loading, with the reduction in kinetic energy being relatively independent of the particle relaxation time. Viscous dissipation in the fluid decreases with increased loading and is larger for particles with smaller relaxation times. Fluid energy spectra show that there is a non-uniform distortion of the turbulence with a relative increase in small-scale energy. The non-uniform distortion significantly affects the transport of the dissipation rate, with the production and destruction of dissipation exhibiting completely different behaviors.
The spectrum of the fluid-particle energy exchange rate shows that the fluid drags particles at low wavenumbers while the converse is true at high wavenumbers for small particles. A spectral analysis shows that the increase of the high wavenumber portion of the fluid energy spectrum can be attributed to transfer of the fluid-particle covariance by the fluid turbulence. This in turn explains the relative increase of small-scale energy caused by small particles observed in the present simulations as well as those of others.
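The drag-only particle equation of motion underlying such simulations can be sketched in a few lines. This is a schematic illustration, not the study's DNS: a single particle responds to a prescribed fluid velocity signal, dv/dt = (u - v)/tau_p, showing how the relaxation time tau_p low-pass filters fluid fluctuations; all numbers are invented.

```python
import numpy as np

def particle_response(u_fluid, tau_p, dt):
    """Explicit-Euler integration of Stokes drag, dv/dt = (u - v) / tau_p,
    for a fluid velocity signal sampled at interval dt. Stable for
    dt / tau_p < 2."""
    v = np.zeros_like(u_fluid)
    for k in range(1, len(u_fluid)):
        v[k] = v[k - 1] + dt * (u_fluid[k - 1] - v[k - 1]) / tau_p
    return v

# A light particle (tau_p ~ dt) tracks a 5 Hz fluid fluctuation almost
# exactly; a heavy one (tau_p = 1 s) barely responds to it.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
u = np.sin(2 * np.pi * 5.0 * t)              # idealized fluid fluctuation
v_light = particle_response(u, tau_p=1e-3, dt=dt)
v_heavy = particle_response(u, tau_p=1.0, dt=dt)
```

This filtering is the one-particle analogue of the spectral picture above: particles with long relaxation times cannot follow the high-wavenumber fluid motions, so energy exchange there runs from particles to fluid rather than the reverse.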
Brilliant GeV gamma-ray flash from inverse Compton scattering in the QED regime
NASA Astrophysics Data System (ADS)
Gong, Z.; Hu, R. H.; Lu, H. Y.; Yu, J. Q.; Wang, D. H.; Fu, E. G.; Chen, C. E.; He, X. T.; Yan, X. Q.
2018-04-01
An all-optical scheme is proposed for studying laser plasma based incoherent photon emission from inverse Compton scattering in the quantum electrodynamic regime. A theoretical model is presented to explain the coupling effects among radiation reaction trapping, the self-generated magnetic field and the spiral attractor in phase space, which guarantees the transfer of energy and angular momentum from electromagnetic fields to particles. Taking advantage of a prospective ˜ 1023 W cm-2 laser facility, 3D particle-in-cell simulations show a gamma-ray flash with unprecedented multi-petawatt power and brightness of 1.7 × 1023 photons s-1 mm-2 mrad-2/0.1% bandwidth (at 1 GeV). These results bode well for new research directions in particle physics and laboratory astrophysics exploring laser plasma interactions.
Ishizuka, Masahide; Mikami, Masao; Tanaka, Taichu Y; Igarashi, Yasuhito; Kita, Kazuyuki; Yamada, Yutaka; Yoshida, Naohiro; Toyoda, Sakae; Satou, Yukihiko; Kinase, Takeshi; Ninomiya, Kazuhiko; Shinohara, Atsushi
2017-01-01
A size-resolved, one-dimensional resuspension scheme for soil particles from the ground surface is proposed to evaluate the concentration of radioactivity in the atmosphere due to the secondary emission of radioactive material. The particle size distributions of radioactive particles at a sampling point were measured and compared with the results evaluated by the scheme using four different soil textures: sand, loamy sand, sandy loam, and silty loam. For sandy loam and silty loam, the results were in good agreement with the size-resolved atmospheric radioactivity concentrations observed at a school ground in Tsushima District, Namie Town, Fukushima, which was heavily contaminated after the Fukushima Dai-ichi Nuclear Power Plant accident in March 2011. Though various assumptions were incorporated into both the scheme and evaluation conditions, this study shows that the proposed scheme can be applied to evaluate secondary emissions caused by aeolian resuspension of radioactive materials associated with mineral dust particles from the ground surface. The results underscore the importance of taking soil texture into account when evaluating the concentrations of resuspended, size-resolved atmospheric radioactivity.
AMR Code Simulations of Turbulent Combustion in Confined and Unconfined SDF Explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V
2009-05-29
A heterogeneous continuum model is proposed to describe the dispersion and combustion of an aluminum particle cloud in an explosion. It combines the gas-dynamic conservation laws for the gas phase with a continuum model for the dispersed phase, as formulated by Nigmatulin. Inter-phase mass, momentum and energy exchange are prescribed by phenomenological models. It incorporates a combustion model based on the mass conservation laws for fuel, air and products; source/sink terms are treated in the fast-chemistry limit appropriate for such gas dynamic fields, along with a model for mass transfer from the particle phase to the gas. The model takes into account both the afterburning of the detonation products of the booster with air, and the combustion of the Al particles with air. The model equations were integrated by high-order Godunov schemes for both the gas and particle phases. Numerical simulations of the explosion fields from a 1.5-g Shock-Dispersed-Fuel (SDF) charge in a 6.6 liter calorimeter were used to validate the combustion model. Then the model was applied to 10-kg Al-SDF explosions in a vented two-room structure and in an unconfined height-of-burst explosion. Computed pressure histories are in reasonable (but not perfect) agreement with measured waveforms. Differences are caused by physical-chemical kinetic effects of particle combustion which induce ignition delays in the initial reactive blast wave and quenching of reactions at late times. Current simulations give initial insights into such modeling issues.
A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation
NASA Astrophysics Data System (ADS)
Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein
2018-02-01
The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes has been put forward, mathematically based on the Kac stochastic model. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in these schemes varies between N^(l)(N^(l) - 1)/2 in BT, 1 in BB, and (N^(l) - 1) in SBT or ISBT, where N^(l) is the instantaneous number of particles in the l-th cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) in which the number of selected pairs can be any desired value smaller than (N^(l) - 1), i.e., N_sel < (N^(l) - 1), while keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme; both recover the BB and SBT limits when N_sel is set to 1 and N^(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.
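For illustration, the pair-selection logic of the SBT limit (which GBT generalizes) can be sketched as follows; the `collision_prob` closure and all numeric values are placeholders, not the paper's formulas:

```python
import random

# Illustrative Simplified-Bernoulli-Trial (SBT) pair selection in one DSMC
# cell: each particle i is paired with one random partner j > i, and the
# acceptance probability is rescaled by the number of candidate partners so
# the cell-averaged collision frequency matches a full Bernoulli Trial sweep.
# `collision_prob` stands in for sigma_T * c_r * dt / V_cell.

def sbt_collisions(velocities, collision_prob, rng=random):
    n = len(velocities)
    pairs = []
    for i in range(n - 1):
        j = rng.randrange(i + 1, n)          # one candidate partner per particle
        n_partners = n - 1 - i               # partners that could have been drawn
        if rng.random() < n_partners * collision_prob(velocities[i], velocities[j]):
            pairs.append((i, j))
    return pairs

random.seed(0)
vels = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20)]
accepted = sbt_collisions(vels, lambda vi, vj: 0.01)
print(accepted)
```

GBT's contribution is to let the loop visit only N_sel particles while rescaling the acceptance probability accordingly; the sketch above shows only the SBT endpoint of that family.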
Modeling and Detection of Ice Particle Accretion in Aircraft Engine Compression Systems
NASA Technical Reports Server (NTRS)
May, Ryan D.; Simon, Donald L.; Guo, Ten-Huei
2012-01-01
The accretion of ice particles in the core of commercial aircraft engines has been an ongoing aviation safety challenge. While no accidents have resulted from this phenomenon to date, numerous engine power loss events ranging from uneventful recoveries to forced landings have been recorded. As a first step to enabling mitigation strategies during ice accretion, a detection scheme must be developed that is capable of being implemented on board modern engines. In this paper, a simple detection scheme is developed and tested using a realistic engine simulation with approximate ice accretion models based on data from a compressor design tool. These accretion models are implemented as modified Low Pressure Compressor maps and have the capability to shift engine performance based on a specified level of ice blockage. Based on results from this model, it is possible to detect the accretion of ice in the engine core by observing shifts in the typical sensed engine outputs. Results are presented in which, for a 0.1 percent false positive rate, a true positive detection rate of 98 percent is achieved.
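A minimal illustration of detecting accretion by observing shifts in the sensed engine outputs, assuming a simple fixed-threshold residual test rather than the paper's actual detection scheme (sensor names and thresholds are invented):

```python
# Hypothetical sketch: flag ice accretion when sensed engine outputs drift
# from a nominal engine model by more than n_sigma standard deviations.
# The threshold choice trades true-positive rate against false-positive rate.

def detect_accretion(sensed, nominal, sigma, n_sigma=3.0):
    """Return True if any sensed output deviates beyond n_sigma * sigma."""
    return any(abs(s - m) > n_sigma * sd
               for s, m, sd in zip(sensed, nominal, sigma))

nominal = [1.00, 1.00, 1.00]   # normalized spool speed, pressure ratio, EGT
sigma   = [0.01, 0.01, 0.01]
print(detect_accretion([1.00, 0.99, 1.01], nominal, sigma))  # -> False
print(detect_accretion([1.00, 0.94, 1.05], nominal, sigma))  # -> True
```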
NASA Astrophysics Data System (ADS)
Long, LiuRong; Li, HongWei; Zhou, Ping; Fan, Chao; Yin, CaiLiu
2011-03-01
We present a scheme for multiparty-controlled teleportation of an arbitrary high-dimensional GHZ-class state with a d-dimensional (N+2)-particle GHZ state, following some ideas from the teleportation scheme in Chinese Physics B, 2007, 16: 2867. This scheme has the advantage of transmitting far fewer particles for the controlled teleportation of an arbitrary multiparticle GHZ-class state. Moreover, we discuss the application of this scheme using a nonmaximally entangled state as its quantum channel.
A Novel Quantum Blind Signature Scheme with Four-particle GHZ States
NASA Astrophysics Data System (ADS)
Fan, Ling; Zhang, Ke-Jia; Qin, Su-Juan; Guo, Fen-Zhuo
2016-02-01
In an arbitrated quantum signature scheme, the signer signs the message and the receiver verifies the signature's validity with the assistance of the arbitrator. We present an arbitrated quantum blind signature scheme using four-particle entangled Greenberger-Horne-Zeilinger (GHZ) states. By exploiting the special relationship of four-particle GHZ states, we can not only ensure the security of the quantum signature, but also guarantee the anonymity of the message owner. The scheme has wide applications in E-payment systems, E-government, E-business, etc.
NASA Astrophysics Data System (ADS)
Tsai, T. C.; Chen, J. P.; Dearden, C.
2014-12-01
The wide variety of ice crystal shapes and growth habits makes ice a complicated issue in cloud models. This study developed a bulk ice adaptive-habit parameterization based on the theoretical approach of Chen and Lamb (1994) and introduced a six-class, double-moment (mass and number) bulk microphysics scheme with gamma-type size distribution functions. Both proposed schemes have been implemented into the Weather Research and Forecasting (WRF) model, forming a new multi-moment bulk microphysics scheme. Two new moments, ice crystal shape and volume, are included to track the adaptive habit and apparent density of pristine ice. A closure technique is developed to solve the time evolution of the bulk moments. To verify the bulk ice habit parameterization, parcel-type (zero-dimensional) calculations were conducted and compared with binned numerical calculations. The results showed that a flexible size spectrum is important for numerical accuracy, that ice shape can significantly enhance diffusional growth, and that it is important to consider the memory of growth habit (adaptive growth) under varying environmental conditions. The results derived with the three-moment method were also much closer to the binned calculations. A case from the DIAMET field campaign was selected for real-case simulations with the WRF model. The simulations were performed with both the traditional spherical-ice scheme and the new adaptive-shape scheme to evaluate the effect of crystal habits. The main features of the narrow rain band in the cold-front case, as well as the embedded precipitation cells, were well captured by the model. Furthermore, the simulations agreed well with aircraft observations of ice particle number concentration, ice crystal aspect ratio, and deposition heating rate, especially within the temperature region of secondary ice multiplication.
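The bulk-moment bookkeeping behind such double- and triple-moment schemes can be illustrated with the standard closed-form moment of a gamma-type size distribution n(D) = N0 · D^mu · exp(-lam · D); the parameter values below are illustrative only:

```python
from math import gamma as G

# k-th moment of a gamma size distribution:
#   M_k = N0 * Gamma(mu + k + 1) / lam**(mu + k + 1)
# Bulk schemes predict a few such moments (e.g. number M_0 and mass ~ M_3)
# and diagnose the distribution parameters from them.

def gamma_moment(k, n0, mu, lam):
    return n0 * G(mu + k + 1) / lam**(mu + k + 1)

n0, mu, lam = 1.0e6, 2.0, 2.0e3      # illustrative values (SI-like units)
number = gamma_moment(0, n0, mu, lam)        # total number concentration
mass_proxy = gamma_moment(3, n0, mu, lam)    # proportional to mass for spheres
mean_d = gamma_moment(1, n0, mu, lam) / number
print(number, mass_proxy, mean_d)
```

The mean diameter M_1/M_0 reduces analytically to (mu + 1)/lam, which is a convenient consistency check when implementing the closure.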
SIMULATIONS OF TRANSVERSE STACKING IN THE NSLS-II BOOSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fliller III, R.; Shaftan, T.
2011-03-28
The NSLS-II injection system consists of a 200 MeV linac and a 3 GeV booster. The injectors must provide 7.5 nC in bunch trains 80-150 bunches long every minute for top-off operation of the storage ring; once losses in the injector chain are taken into consideration, top-off requires that the linac deliver 15 nC of charge. This is a very stringent requirement that has not been demonstrated at an operating light source. For this reason we have developed a scheme to transversely stack two bunch trains in the NSLS-II booster, alleviating the charge requirements on the linac while maintaining the charge transport efficiency. This stacking scheme has been outlined previously. In this paper we show particle tracking simulations of the booster ramp, first with a single bunch train and then with a stacked beam, for a variety of lattice errors, beam emittances, train separations, and injected beam parameters. The behavior of the beam through the ramp is examined, showing that it is possible to stack two bunch trains in the booster. In all cases the performance of the proposed stacking method is sufficient to reduce the required charge from the linac, and for this reason the injection system of the NSLS-II booster is being designed to include this feature.
Development of a particle method of characteristics (PMOC) for one-dimensional shock waves
NASA Astrophysics Data System (ADS)
Hwang, Y.-H.
2018-03-01
In the present study, a particle method of characteristics is put forward to simulate the evolution of one-dimensional shock waves in barotropic gas, closed-conduit, open-channel, and two-phase flows. All these flow phenomena can be described with the same set of governing equations. The proposed scheme is established on the characteristic equations and formulated by assigning the computational particles to move along the characteristic curves. Both the right- and left-running characteristics are traced and represented by their associated computational particles. It inherits the computational merits of the conventional method of characteristics (MOC) and the moving particle method, but without their individual deficiencies. In addition, special particles with dual states, derived from the enforcement of the Rankine-Hugoniot relation, are deliberately imposed to emulate the shock structure. Numerical tests are carried out by solving some benchmark problems, and the computational results are compared with available analytical solutions. From the derivation procedure and the computational results obtained, it is concluded that the proposed PMOC is a useful tool for replicating one-dimensional shock waves.
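The core particle update along characteristics can be sketched, for an isentropic gas and without the paper's dual-state shock treatment, as:

```python
# Conceptual sketch of a characteristics-based particle update for 1-D flow:
# right- and left-running particles advance along dx/dt = u + c and
# dx/dt = u - c, carrying the Riemann invariants J± = u ± 2c/(gamma - 1).
# This mirrors the idea of a PMOC, not its full shock machinery.

GAMMA = 1.4  # assumed ratio of specific heats

def advance_characteristic(x, u, c, dt, direction=+1):
    """Move one computational particle one step along its characteristic."""
    x_new = x + (u + direction * c) * dt
    J = u + direction * 2.0 * c / (GAMMA - 1.0)   # invariant carried unchanged
    return x_new, J

x1, J1 = advance_characteristic(x=0.0, u=100.0, c=340.0, dt=1e-3, direction=+1)
print(x1, J1)
```

Where right- and left-running particles cross, the local state is recovered by solving the two invariants simultaneously for u and c.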
Fish Passage through Hydropower Turbines: Simulating Blade Strike using the Discrete Element Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Romero Gomez, Pedro DJ
Among the hazardous hydraulic conditions affecting anadromous and resident fish during their passage through turbine flows, two are believed to cause considerable injury and mortality: collision with moving blades and decompression. Several methods are currently available to evaluate these stressors in installed turbines, i.e., using live fish or autonomous sensor devices, and in reduced-scale physical models, i.e., registering collisions of plastic beads. However, a priori estimates from computational modeling approaches applied early in the turbine design process can facilitate the development of fish-friendly turbines. In the present study, we evaluated the frequency of blade strike and the nadir pressure environment by modeling potential fish trajectories with the Discrete Element Method (DEM) applied to fish-like composite particles. In the DEM approach, particles are subjected to realistic hydraulic conditions simulated with computational fluid dynamics (CFD), and particle-structure interactions, representing fish collisions with turbine blades, are explicitly recorded and accounted for in the calculation of particle trajectories. We conducted transient CFD simulations by setting the runner in motion and allowing for better turbulence resolution, a modeling improvement over the conventional practice, also followed here, of simulating the system in steady state. While both schemes yielded comparable bulk hydraulic performance, transient conditions exhibited a visual improvement in describing flow variability. We released streamtraces (steady flow solution) and DEM particles (transient solution) at the same location from which sensor fish (SF) have been released in field studies of the modeled turbine unit. The streamtrace-based results showed better agreement with SF data than the DEM-based nadir pressures did, because the former accounted for turbulent dispersion at the intake while the latter did not.
However, the DEM-based strike frequency is more representative of blade-strike probability than the steady solution, mainly because DEM particles account for the full fish length, thus resolving (instead of modeling) the collision event.
Hydrometeor Trajectories and Distributions in a Simulation of TC Rapid Intensification (RI)
NASA Astrophysics Data System (ADS)
Zhu, Z.; Zhu, P.
2010-12-01
It has long been recognized that the microphysics scheme used in a numerical simulation of tropical cyclones (TC) can greatly affect the precipitation distribution, intensity, and thermodynamic structure of the simulated TC. This suggests that the mixing ratios, concentrations, and size distributions of hydrometeors (snow, graupel, rain, cloud ice) are important factors in TC evolution. The transport of hydrometeors may strongly influence these factors through its interactions with hydrometeor growth, latent heat forcing, and the wind field, and hence is key to understanding TC microphysics. Schematic hydrometeor trajectories were first constructed using the 3-D wind field and particle fall speeds derived from airborne radar observations in a steady-state mature hurricane, Alicia (1983). Since then, little effort has been devoted to understanding hydrometeor transport in TCs, especially the potential link between its evolution and the intensity and structure changes of a non-steady-state TC. This study investigates such a link by means of numerical simulations of TC Rapid Intensification (RI) using the WRF model. We use the tracer utility in WRF to construct hydrometeor trajectories. Most of the popular microphysics schemes are tested, and the most reasonable run (determined by comparing the simulated TC intensity and structure with airborne radar observations) and the ensemble mean of all runs are selected for detailed examination.
U(1) Wilson lattice gauge theories in digital quantum simulators
NASA Astrophysics Data System (ADS)
Muschik, Christine; Heyl, Markus; Martinez, Esteban; Monz, Thomas; Schindler, Philipp; Vogell, Berit; Dalmonte, Marcello; Hauke, Philipp; Blatt, Rainer; Zoller, Peter
2017-10-01
Lattice gauge theories describe fundamental phenomena in nature, but calculating their real-time dynamics on classical computers is notoriously difficult. In a recent publication (Martinez et al 2016 Nature 534 516), we proposed and experimentally demonstrated a digital quantum simulation of the paradigmatic Schwinger model, a U(1)-Wilson lattice gauge theory describing the interplay between fermionic matter and gauge bosons. Here, we provide a detailed theoretical analysis of the performance and the potential of this protocol. Our strategy is based on analytically integrating out the gauge bosons, which preserves exact gauge invariance but results in complicated long-range interactions between the matter fields. Trapped-ion platforms are naturally suited to implementing these interactions, allowing for an efficient quantum simulation of the model, with a number of gate operations that scales polynomially with system size. Employing numerical simulations, we illustrate that relevant phenomena can be observed in larger experimental systems, using as an example the production of particle-antiparticle pairs after a quantum quench. We investigate theoretically the robustness of the scheme towards generic error sources, and show that near-future experiments can reach regimes where finite-size effects are insignificant. We also discuss the challenges in quantum simulating the continuum limit of the theory. Using our scheme, fundamental phenomena of lattice gauge theories can be probed using a broad set of experimentally accessible observables, including the entanglement entropy and the vacuum persistence amplitude.
Modeling the complex shape evolution of sedimenting particle swarms in fractures
NASA Astrophysics Data System (ADS)
Mitchell, C. A.; Nitsche, L.; Pyrak-Nolte, L. J.
2016-12-01
The flow of micro- and nano-particles through subsurface systems can occur in several environments, such as hydraulic fracturing or enhanced oil recovery. Computer simulations were performed to advance our understanding of the complexity of subsurface particle swarm transport in fractures. Previous experiments observed that particle swarms in fractures with uniform apertures exhibit enhanced transport speeds and suppressed bifurcations for an optimal range of apertures. Numerical simulations were performed for low Reynolds number, no interfacial tension, and uniform viscosity conditions, with particulate swarms represented by point particles that mutually interact through their (regularized) Stokeslet fields. A P3M technique accelerates the summations for swarms exceeding 10^5 particles. Fracture wall effects were incorporated using a least-squares variant of the method of fundamental solutions, with grid mapping of the surface force and source elements within the fast-summation scheme. The numerical study was carried out in terms of dimensionless variables and parameters, in the interest of examining the fundamental behavior and relationships of particle swarms in the presence of uniform apertures. Model parameters were representative of particle swarm experiments, to enable direct comparison of the results with experimental observations. The simulations confirmed that the principal phenomena observed in the experiments can be explained within the realm of Stokes flow. The numerical investigation effectively replicated swarm evolution in a uniform fracture and captured the coalescence, torus and tail formation, and ultimate breakup of the particle swarm as it fell under gravity in a quiescent fluid. The rate of swarm evolution depended on the number of particles in a swarm; when an ideal number of particles was used, swarm transport was characterized by an enhanced-velocity regime, as observed in the laboratory data.
Understanding the physics of particle swarms in fractured media will improve the ability to perform controlled micro-particulate transport through rock. Acknowledgment: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Geosciences Research Program under Award Number DE-FG02-09ER16022.
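The mutual interaction through regularized Stokeslet fields mentioned above can be illustrated with the standard regularized Stokeslet of Cortez; the regularization length `eps`, viscosity `mu`, and the example positions are illustrative assumptions:

```python
import numpy as np

# Velocity induced at point x by a regularized point force f located at x0,
# using the Cortez blob regularization: velocities stay finite as r -> 0,
# which is what makes dense swarms of point particles tractable.

def regularized_stokeslet_velocity(x, x0, f, eps=0.01, mu=1.0):
    x, x0, f = (np.asarray(a, dtype=float) for a in (x, x0, f))
    r = x - x0
    r2 = r @ r
    denom = 8.0 * np.pi * mu * (r2 + eps**2) ** 1.5
    return (f * (r2 + 2.0 * eps**2) + (f @ r) * r) / denom

# Velocity induced at one swarm particle by a neighbour sinking under gravity.
u = regularized_stokeslet_velocity([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, -1.0])
print(u)
```

Summing this kernel over all particle pairs is the O(N²) operation that the abstract's P3M scheme accelerates.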
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn), characterizing the degree of rarefaction, becomes small. In contrast, the Fokker-Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker-Planck collision operator, instead of performing the binary collisions employed by the DSMC method, integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
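The per-cell switching idea can be reduced to a toy rule on the local Knudsen number; the threshold value 0.1 is an assumption for illustration, not the implementation's actual criterion:

```python
# Toy illustration of the hybrid FP-DSMC idea: per computational cell, use
# the Fokker-Planck operator where the local Knudsen number (mean free path
# over cell size) is small, and binary DSMC collisions otherwise.

def choose_operator(mean_free_path, cell_size, kn_switch=0.1):
    kn_local = mean_free_path / cell_size
    return "FP" if kn_local < kn_switch else "DSMC"

print(choose_operator(mean_free_path=1e-6, cell_size=1e-3))  # -> FP
print(choose_operator(mean_free_path=1e-3, cell_size=1e-3))  # -> DSMC
```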
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
A flexible algorithm for calculating pair interactions on SIMD architectures
NASA Astrophysics Data System (ADS)
Páll, Szilárd; Hess, Berk
2013-12-01
Calculating interactions or correlations between pairs of particles is typically the most time-consuming task in particle simulation or correlation analysis. Straightforward implementations using a double loop over particle pairs have traditionally worked well, especially since compilers usually do a good job of unrolling the inner loop. In order to reach high performance on modern CPU and accelerator architectures, single-instruction multiple-data (SIMD) parallelization has become essential. Avoiding memory bottlenecks is also increasingly important and requires reducing the ratio of memory to arithmetic operations. Moreover, when pairs only interact within a certain cut-off distance, good SIMD utilization can only be achieved by reordering input and output data, which quickly becomes a limiting factor. Here we present an algorithm for SIMD parallelization based on grouping a fixed number of particles, e.g. 2, 4, or 8, into spatial clusters. Calculating all interactions between particles in a pair of such clusters improves data reuse compared to the traditional scheme and results in a more efficient SIMD parallelization. Adjusting the cluster size allows the algorithm to map to SIMD units of various widths. This flexibility not only enables fast and efficient implementation on current CPUs and accelerator architectures like GPUs or Intel MIC, but it also makes the algorithm future-proof. We present the algorithm with an application to molecular dynamics simulations, where we can also make use of the effective buffering the method introduces.
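A scalar sketch of the cluster-pair idea, using NumPy broadcasting as a stand-in for SIMD lanes; the cluster size, cutoff, and Lennard-Jones parameters are illustrative assumptions, not the paper's kernel:

```python
import numpy as np

# Cluster-pair scheme in miniature: group particles into clusters of M
# (e.g. 4) and evaluate all M*M interactions between a cluster pair in one
# vectorized operation, the way SIMD lanes evaluate them in lockstep.
# Pairs beyond the cutoff are masked to zero rather than branched over.

M = 4

def cluster_pair_lj(xi, xj, cutoff=2.5, eps=1.0, sig=1.0):
    """All Lennard-Jones pair energies between two clusters of M particles."""
    d = xi[:, None, :] - xj[None, :, :]          # (M, M, 3) displacement table
    r2 = np.sum(d * d, axis=-1)
    mask = r2 < cutoff**2
    inv6 = np.where(mask, (sig**2 / np.where(mask, r2, 1.0))**3, 0.0)
    return 4.0 * eps * (inv6**2 - inv6)          # zero for masked-out pairs

rng = np.random.default_rng(1)
xi = rng.uniform(0, 3, size=(M, 3))
xj = rng.uniform(0, 3, size=(M, 3)) + 1.0
print(cluster_pair_lj(xi, xj).shape)             # -> (4, 4)
```

Computing some zero (masked) interactions in exchange for contiguous, reorder-free memory access is exactly the trade-off the algorithm makes to keep SIMD units full.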
Turbulent Combustion in SDF Explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V E
2009-11-12
A heterogeneous continuum model is proposed to describe the dispersion and combustion of an aluminum particle cloud in an explosion. It combines the gas-dynamic conservation laws for the gas phase with a continuum model for the dispersed phase, as formulated by Nigmatulin. Inter-phase mass, momentum and energy exchange are prescribed by phenomenological models. It incorporates a combustion model based on the mass conservation laws for fuel, air and products; source/sink terms are treated in the fast-chemistry limit appropriate for such gas-dynamic fields, along with a model for mass transfer from the particle phase to the gas. The model takes into account both the afterburning of the detonation products of the C-4 booster with air and the combustion of the Al particles with air. The model equations were integrated by high-order Godunov schemes for both the gas and particle phases. Numerical simulations of the explosion fields from a 1.5-g Shock-Dispersed-Fuel (SDF) charge in a 6.6-liter calorimeter were used to validate the combustion model. The model was then applied to 10-kg Al-SDF explosions in an unconfined height-of-burst explosion. Computed pressure histories are compared with measured waveforms. Differences are caused by physical-chemical kinetic effects of particle combustion, which induce ignition delays in the initial reactive blast wave and quenching of reactions at late times. Current simulations give initial insights into such modeling issues.
A correction procedure for thermally two-way coupled point-particles
NASA Astrophysics Data System (ADS)
Horwitz, Jeremy; Ganguli, Swetava; Mani, Ali; Lele, Sanjiva
2017-11-01
Development of a robust procedure for the simulation of two-way coupled particle-laden flows remains a challenge. Such systems are characterized by particle mass loadings of O(1) or greater relative to the fluid. The coupling of fluid and particle motion via a drag model means the undisturbed fluid velocity evaluated at the particle location (which is needed in the drag model) is no longer equal to the interpolated fluid velocity at the particle location. The same issue arises in problems of dispersed flows in the presence of heat transfer: the heat transfer rate to each particle depends on the difference between the particle's temperature and the undisturbed fluid temperature. We borrow ideas from the correction scheme we have developed for particle-fluid momentum coupling to construct a procedure that estimates the undisturbed fluid temperature given the disturbed temperature field created by a point particle. The procedure is verified for the case of a particle settling under gravity and subject to radiation. It is developed in the low Peclet number, low Boussinesq number limit, but we will discuss its applicability outside this regime when augmented by appropriate drag and heat exchange correlations. Supported by DOE; J.H. supported by an NSF Graduate Research Fellowship.
Ahmadi, Sheida; Bowles, Richard K
2017-04-21
Particles confined to a single file, in a narrow quasi-one-dimensional channel, exhibit a dynamic crossover from single file diffusion to Fickian diffusion as the channel radius increases and the particles begin to pass each other. The long time diffusion coefficient for a system in the crossover regime can be described in terms of a hopping time, which measures the time it takes for a particle to escape the cage formed by its neighbours. In this paper, we develop a transition state theory approach to the calculation of the hopping time, using the small system isobaric-isothermal ensemble to rigorously account for the volume fluctuations associated with the size of the cage. We also describe a Monte Carlo simulation scheme that can be used to calculate the free energy barrier for particle hopping. The theory and simulation method correctly predict the hopping times for a two-dimensional confined ideal gas system and a system of confined hard discs over a range of channel radii, but the method breaks down for wide channels in the hard discs' case, underestimating the height of the hopping barrier due to the neglect of interactions between the small system and its surroundings.
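The exponential dependence of the hopping time on the free energy barrier, which underlies the transition-state-theory approach described above, can be illustrated as follows; the attempt-time prefactor `t0` and the barrier values are assumptions for illustration:

```python
from math import exp

# Transition-state-theory flavoured estimate: the time for a particle to
# escape the cage formed by its neighbours grows exponentially with the
# free energy barrier for passing, t_hop ~ t0 * exp(dF / kT).

def hopping_time(barrier_kT, t0=1.0):
    """Hopping time for a barrier expressed in units of kT."""
    return t0 * exp(barrier_kT)

for dF in (1.0, 3.0, 6.0):   # narrower channels -> higher barriers
    print(dF, hopping_time(dF))
```

This is why underestimating the barrier height for wide channels, as the abstract notes, translates directly into underestimated hopping times.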
Measuring the fine structure constant with Bragg diffraction and Bloch oscillations
NASA Astrophysics Data System (ADS)
Parker, Richard; Yu, Chenghui; Zhong, Weicheng; Estey, Brian; Müller, Holger
2017-04-01
We have demonstrated a new scheme for atom interferometry based on large-momentum-transfer Bragg beam splitters and Bloch oscillations. With this scheme, we have achieved a resolution of δα/α = 0.25 ppb in the fine structure constant measurement, corresponding to over 10 million radians of phase difference between freely evolving matter waves. We have suppressed many systematic effects known in most atom interferometers with Raman beam splitters, such as the light shift, the Zeeman effect shift, and vibration. We have also simulated multi-atom Bragg diffraction to understand sub-ppb systematic effects, and implemented spatial filtering to suppress them further. We present our recent progress toward a measurement of the fine structure constant, which will provide a stringent test of the Standard Model of particle physics.
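For context, recoil-based interferometry measurements of this kind typically obtain the fine structure constant from a measured ratio h/m via the standard relation (textbook physics, not a formula taken from this abstract):

```latex
\alpha^{2} \;=\; \frac{2 R_\infty}{c}\,\frac{m}{m_e}\,\frac{h}{m}
```

where R_∞ is the Rydberg constant, m the mass of the atom used in the interferometer, m_e the electron mass, and h/m the photon-recoil ratio the interferometer measures. Since R_∞ and the mass ratio are known to high precision, the uncertainty in α is dominated by the h/m measurement, which is why interferometer phase resolution matters.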
Hummer, Gerhard
2015-01-01
We present a new algorithm for simulating reaction-diffusion equations at single-particle resolution. Our algorithm is designed to be both accurate and simple to implement, and to be applicable to large and heterogeneous systems, including those arising in systems biology applications. We combine the use of the exact Green's function for a pair of reacting particles with the approximate free-diffusion propagator for position updates to particles. Trajectory reweighting in our free-propagator reweighting (FPR) method recovers the exact association rates for a pair of interacting particles at all times. FPR simulations of many-body systems accurately reproduce the theoretically known dynamic behavior for a variety of different reaction types. FPR does not suffer from the loss of efficiency common to other path-reweighting schemes, first, because corrections apply only in the immediate vicinity of reacting particles and, second, because by construction the average weight factor equals one upon leaving this reaction zone. FPR applications include the modeling of pathways and networks of protein-driven processes where reaction rates can vary widely and thousands of proteins may participate in the formation of large assemblies. With a limited amount of bookkeeping necessary to ensure proper association rates for each reactant pair, FPR can account for changes to reaction rates or diffusion constants as a result of reaction events. Importantly, FPR can also be extended to physical descriptions of protein interactions with long-range forces, as we demonstrate here for Coulombic interactions. PMID:26005592
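The free-diffusion propagator used for position updates can be sketched as below; the trajectory reweighting against the exact pair Green's function, which is the substance of FPR, is deliberately omitted here:

```python
import numpy as np

# Toy sketch of the position-update half of such single-particle
# reaction-diffusion schemes: propagate particles with the free-diffusion
# Gaussian propagator. The FPR reweighting near reacting pairs is NOT shown.

def free_diffusion_step(positions, D, dt, rng):
    """Brownian update: x += sqrt(2 D dt) * N(0, 1) per coordinate."""
    return positions + np.sqrt(2.0 * D * dt) * rng.standard_normal(positions.shape)

rng = np.random.default_rng(0)
x = np.zeros((1000, 3))
for _ in range(100):
    x = free_diffusion_step(x, D=1.0, dt=0.01, rng=rng)
# Mean-square displacement should approach 6 * D * t = 6.0 in 3-D.
print(x.ravel() @ x.ravel() / len(x))
```

FPR's weight factors correct trajectories only within the reaction zone around particle pairs, leaving far-field motion exactly as above.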
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.
2014-10-01
A recent proof-of-principle study proposed a nonlinear electrostatic implicit particle-in-cell (PIC) algorithm in one dimension (Chen et al., 2011). The algorithm employs a kinetically enslaved Jacobian-free Newton-Krylov (JFNK) method, and conserves energy and charge to numerical round-off. In this study, we generalize the method to electromagnetic simulations in 1D using the Darwin approximation to Maxwell's equations, which avoids radiative noise issues by ordering out the light wave. An implicit, orbit-averaged, time-space-centered finite difference scheme is employed in both the 1D Darwin field equations (in potential form) and the 1D-3V particle orbit equations to produce a discrete system that remains exactly charge- and energy-conserving. Furthermore, enabled by the implicit Darwin equations, exact conservation of the canonical momentum per particle in any ignorable direction is enforced via a suitable scattering rule for the magnetic field. We have developed a simple preconditioner that targets electrostatic waves and skin currents, and allows us to employ time steps O(sqrt(m_i/m_e) c/v_Te) larger than the explicit CFL limit. Several 1D numerical experiments demonstrate the accuracy, performance, and conservation properties of the algorithm. In particular, the scheme is shown to be second-order accurate, and CPU speedups of more than three orders of magnitude over an explicit Vlasov-Maxwell solver are demonstrated in the "cold" plasma regime (where kλ_D ≪ 1).
Multistage Coupling of Laser-Wakefield Accelerators with Curved Plasma Channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, J.; Chen, M.; Wu, W. Y.
2018-04-10
Multistage coupling of laser-wakefield accelerators is essential to overcome laser energy depletion for high-energy applications such as TeV-level electron-positron colliders. Current staging schemes feed subsequent laser pulses into stages using plasma mirrors, while controlling electron beam focusing with plasma lenses. Here a more compact and efficient scheme is proposed to realize simultaneous coupling of the electron beam and the laser pulse into a second stage. A curved channel with a transition segment is used to guide a fresh laser pulse into a subsequent straight channel, while allowing the electrons to propagate in a straight channel. This scheme benefits from a shorter coupling distance and continuous guiding of the electrons in plasma, while suppressing transverse beam dispersion. With moderate laser parameters, particle-in-cell simulations demonstrate that the electron beam from a previous stage can be efficiently injected into a subsequent stage for further acceleration, while maintaining high capture efficiency, stability, and beam quality.
Simulation of PEP-II Accelerator Backgrounds Using TURTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barlow, R.J.; Fieguth, T.; /SLAC
2006-02-15
We present studies of accelerator-induced backgrounds in the BaBar detector at the SLAC B-Factory, carried out using LPTURTLE, a modified version of the DECAY TURTLE simulation package. Lost-particle backgrounds in PEP-II are dominated by a combination of beam-gas bremsstrahlung, beam-gas Coulomb scattering, radiative-Bhabha events, and beam-beam blow-up. The radiation damage and detector occupancy caused by the associated electromagnetic shower debris can limit the usable luminosity. In order to understand and mitigate such backgrounds, we have performed a full program of beam-gas and luminosity-background simulations that include the effects of the detector solenoidal field, detailed modeling of limiting apertures in both collider rings, and optimization of the betatron collimation scheme in the presence of large transverse tails.
NASA Astrophysics Data System (ADS)
Jones, J. D.; Ma, Xia; Clements, B. E.; Gibson, L. L.; Gustavsen, R. L.
2017-06-01
Gas-gun driven plate-impact techniques were used to study the shock-to-detonation transition in LX-14 (95.5 weight % HMX, 4.5 weight % Estane binder). The transition was recorded using embedded electromagnetic particle velocity gauges. Initial shock pressures, P, ranged from 2.5 to 8 GPa and the resulting distances to detonation, xD, were in the range 1.9 to 14 mm. Numerical simulations using the SURF reactive burn scheme, coupled with a linear Us-up / Mie-Grüneisen equation of state for the reactant and a JWL equation of state for the products, match the experimental data well. A comparison of simulation with experiment, as well as the "best fit" parameter set for the simulations, is presented.
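The JWL products equation of state mentioned above has a standard closed form, p(V, E) = A(1 - ω/(R1·V))e^(-R1·V) + B(1 - ω/(R2·V))e^(-R2·V) + ωE/V, with V the relative volume and E the internal energy per unit initial volume. A minimal sketch follows; the parameter values are illustrative placeholders, not the calibrated LX-14 fit from the study.

```python
import math

def jwl_pressure(V, E, A, B, R1, R2, omega):
    """JWL products equation of state: pressure as a function of relative
    volume V = v/v0 and internal energy E per unit initial volume."""
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

# Illustrative (not fitted) parameter values in GPa-based units:
p1 = jwl_pressure(V=1.0, E=8.0, A=500.0, B=10.0, R1=4.5, R2=1.2, omega=0.3)
p3 = jwl_pressure(V=3.0, E=8.0, A=500.0, B=10.0, R1=4.5, R2=1.2, omega=0.3)
```

The two exponential terms dominate near the initial volume and die away on expansion, leaving the ideal-gas-like ωE/V tail.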
NASA Astrophysics Data System (ADS)
Lin, Ruei-Fong; O'C. Starr, David; Demott, Paul J.; Cotton, Richard; Sassen, Kenneth; Jensen, Eric; Kärcher, Bernd; Liu, Xiaohong
2002-08-01
The Cirrus Parcel Model Comparison Project, a project of the GCSS [Global Energy and Water Cycle Experiment (GEWEX) Cloud System Studies] Working Group on Cirrus Cloud Systems, involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase 1 of the project reported here, simulated cirrus cloud microphysical properties from seven models are compared for `warm' (−40°C) and `cold' (−60°C) cirrus, each subject to updrafts of 0.04, 0.2, and 1 m s⁻¹. The models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins or the evolution of each individual particle is traced. Simulations are made including both homogeneous and heterogeneous ice nucleation mechanisms (all-mode simulations). A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. Heterogeneous nucleation is disabled for a second parallel set of simulations in order to isolate the treatment of the homogeneous freezing (of haze droplets) nucleation process. Analysis of these latter simulations is the primary focus of this paper. Qualitative agreement is found for the homogeneous-nucleation-only simulations; for example, the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, significant quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation rate, haze particle solution concentration, and water vapor uptake rate by ice crystal growth (particularly as controlled by the deposition coefficient) are critical components that lead to differences in the predicted microphysics. Systematic differences exist between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation.
Each method is constrained by critical freezing data from laboratory studies, but each includes assumptions that can only be justified by further laboratory research. Consequently, it is not yet clear if the two approaches can be made consistent. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (0.2-1 m s⁻¹) at −60°C. The equilibrium assumption is commonly invoked in cirrus parcel models. The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice particle formation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of particle number concentration and ice crystal diffusional growth rate, which is particularly sensitive to the deposition coefficient when ice particles are small, modulates the peak particle formation rate achieved in an air parcel and the duration of the active nucleation time period. The consequent differences in cloud microphysical properties, and thus cloud optical properties, between state-of-the-art models of ice crystal initiation are significant. Intermodel differences in the case of all-mode simulations are correspondingly greater than in the case of homogeneous nucleation acting alone. Definitive laboratory and atmospheric benchmark data are needed to improve the treatment of heterogeneous nucleation processes.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performances of the different schemes are evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is capable of simulating the reflectivity through a reasonable distribution of the different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent-kinetic-energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
NASA Astrophysics Data System (ADS)
Grazier, Kevin R.; Newman, William I.; Varadi, Ferenc; Kaula, William M.; Hyman, James M.
1999-08-01
We report on numerical simulations exploring the dynamical stability of planetesimals in the gaps between the outer Solar System planets. We search for stable niches in the Saturn/Uranus and Uranus/Neptune zones by employing 10,000 massless particles (many more than previous studies in these two zones), using high-order optimized multistep integration schemes coupled with roundoff-error-minimizing methods. An additional feature of this study, differing from its predecessors, is the fact that our initial distributions contain particles on orbits which are both inclined and noncircular. These initial distributions were also Gaussian distributed such that the Gaussian peaks were at the midpoint between the neighboring perturbers. The simulations showed an initial transient phase where the bulk of the primordial planetesimal swarm was removed from the Solar System within 10^5 years. This is about 10 times longer than we observed in our previous Jupiter/Saturn studies. Next, there was a gravitational relaxation phase where the particles underwent a random walk in momentum space and were exponentially eliminated by random encounters with the planets. Unlike our previous Jupiter/Saturn simulation, the particles did not fully relax into a third Lagrangian niche phase where long-lived particles are at Lagrange points or stable niches. This is either because the Lagrangian niche phase never occurs or because these simulations did not have enough particles for this third phase to manifest. In these simulations, there was a general trend for the particles to migrate outward and eventually to be cleared out by the outermost planet in the zone. We confirmed that particles with higher eccentricities had shorter lifetimes and that the resonances between the jovian planets "pumped up" the eccentricities of the planetesimals with low-inclination orbits more than those with higher inclinations.
We estimated the expected lifetime of particles using kinetic theory, and even though the time scale of the Uranus/Neptune simulation was 380 times longer than our previous Jupiter/Saturn simulation, the planetesimals in the Uranus/Neptune zone were cleared out more quickly than those in the Saturn/Uranus zone because of the positions of resonances with the jovian planets. These resonances had an even greater effect than random gravitational stirring in the winnowing process and confirm that all the jovian planets are necessary in long simulations. Even though we observed several long-lived zones near 12.5, 14.4, 16, 24.5, and 26 AU, only two particles remained at the end of the 10^9-year integration: one near the 2:3 Saturn resonance, and the other near the Neptune 1:1 resonance. This suggests that niches for planetesimal material between the jovian planets are rare and may exist either only in extremely narrow bands or in the neighborhoods of the triangular Lagrange points of the outer planets.
SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip
Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems: thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
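The higher-order interpolation functions referred to above are the B-spline family of particle shapes. A minimal sketch of its first two members (CIC and TSC) shows the partition-of-unity property that underlies exact charge deposition; SHARP uses shapes up to fifth order, but only the low orders are written out here.

```python
def cic_weights(d):
    """Order-1 B-spline (cloud-in-cell): weights on grid points i, i+1 for a
    particle a fraction d in [0, 1) past point i."""
    return [1.0 - d, d]

def tsc_weights(d):
    """Order-2 B-spline (triangular-shaped cloud): weights on points i-1, i,
    i+1 for an offset d in [-0.5, 0.5) from the nearest point i."""
    return [0.5 * (0.5 - d) ** 2, 0.75 - d * d, 0.5 * (0.5 + d) ** 2]

# partition of unity: every offset deposits exactly one particle's charge
totals = [sum(cic_weights(0.7))] + [sum(tsc_weights(d)) for d in (-0.4, 0.0, 0.3)]
```

Each higher order is the convolution of the previous shape with a unit top-hat, which is what makes the deposited moments progressively smoother.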
NASA Astrophysics Data System (ADS)
Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt
2015-05-01
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
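The on-the-fly resolution change in AdResS is governed by a smooth weighting function that interpolates pair forces between the atomistic and coarse-grained descriptions. A minimal sketch follows, with a cos² ramp across the hybrid layer; the functional form and parameter names here are illustrative assumptions, not the exact expressions of the cited implementation.

```python
import math

def adress_weight(x, x_at, d_hy):
    """Resolution function w(x): 1 in the atomistic zone (|x| < x_at), 0 in the
    coarse-grained reservoir, and a smooth cos^2 ramp across the hybrid layer."""
    r = abs(x)
    if r < x_at:
        return 1.0
    if r > x_at + d_hy:
        return 0.0
    return math.cos(0.5 * math.pi * (r - x_at) / d_hy) ** 2

def pair_force(f_atomistic, f_cg, wa, wb):
    """Interpolated pair force: fully atomistic where both weights are 1,
    fully coarse-grained where either is 0, mixed in between."""
    lam = wa * wb
    return lam * f_atomistic + (1.0 - lam) * f_cg

w_at = adress_weight(0.5, 2.0, 1.0)   # deep in the atomistic zone
w_hy = adress_weight(2.5, 2.0, 1.0)   # midway through the hybrid layer
w_cg = adress_weight(4.0, 2.0, 1.0)   # in the coarse-grained reservoir
```

Interpolating forces rather than potentials is the standard AdResS choice, since it keeps Newton's third law satisfied pairwise.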
Multi-dimensional PIC-simulations of parametric instabilities for shock-ignition conditions
NASA Astrophysics Data System (ADS)
Riconda, C.; Weber, S.; Klimo, O.; Héron, A.; Tikhonchuk, V. T.
2013-11-01
Laser-plasma interaction is investigated for conditions relevant to the shock-ignition (SI) scheme of inertial confinement fusion, using two-dimensional particle-in-cell (PIC) simulations of an intense laser beam propagating in a hot, large-scale, non-uniform plasma. The temporal evolution and interdependence of Raman (SRS) and Brillouin (SBS) side/backscattering as well as Two-Plasmon Decay (TPD) are studied. TPD develops in concomitance with SRS, creating a broad spectrum of plasma waves near the quarter-critical density; these waves are rapidly saturated due to plasma cavitation within a few picoseconds. The hot-electron spectrum created by SRS and TPD is relatively soft, limited to energies below one hundred keV.
Transport, noise, and conservation properties in gyrokinetic plasmas
NASA Astrophysics Data System (ADS)
Jenkins, Thomas
2005-10-01
The relationship between various transport properties (such as particle and heat flux, entropy production, heating, and collisional dissipation) [1] is examined in electrostatic gyrokinetic simulations of ITG modes in simple geometry. The effect of the parallel velocity nonlinearity on the achievement of steady-state solutions and the transport properties of these solutions is examined; the effects of nonadiabatic electrons are also considered. We also examine the effectiveness of the electromagnetic split-weight scheme [2] in reducing the noise and improving the conservation properties (energy, momentum, particle number, etc.) of gyrokinetic plasmas. [1] W. W. Lee and W. M. Tang, Phys. Fluids 31, 612 (1988). [2] W. W. Lee, J. L. V. Lewandowski, T. S. Hahm, and Z. Lin, Phys. Plasmas 8, 4435 (2001).
Structure for Storing Properties of Particles (PoP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, N. R.; Mattoon, C. M.; Beck, B. R.
2014-06-01
Evaluated nuclear databases are critical for applications such as nuclear energy, nuclear medicine, homeland security, and stockpile stewardship. Particle masses, nuclear excitation levels, and other “Properties of Particles” are essential for making evaluated nuclear databases. Currently, these properties are obtained from various databases that are stored in outdated formats. A “Properties of Particles” (PoP) structure is being designed that will allow storing all information for one or more particles in a single place, so that each evaluation, simulation, model calculation, etc. can link to the same data. Information provided in PoP will include properties of nuclei, gammas and electrons (along with other particles such as pions, as evaluations extend to higher energies). Presently, PoP includes masses from the Atomic Mass Evaluation version 2003 (AME2003), and level schemes and gamma decays from the Reference Input Parameter Library (RIPL-3). The data are stored in a hierarchical structure. An example of how PoP stores nuclear masses and energy levels will be presented here.
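A hierarchical store of the kind described can be sketched as a nested structure in which every consumer resolves the same path to one copy of the data. The field names below are invented for illustration and are not the actual PoP specification; the Th-232 numbers are indicative AME2003/RIPL-3-style literature values.

```python
# Field names are invented for illustration; they are not the PoP specification.
pop = {
    "Th232": {
        "mass_amu": 232.0380553,
        "levels": [
            {"index": 0, "energy_keV": 0.0, "spin_parity": "0+"},
            {"index": 1, "energy_keV": 49.369, "spin_parity": "2+",
             "gammas": [{"final_level": 0, "energy_keV": 49.369}]},
        ],
    },
}

def level_energy(store, nucleus, index):
    """Resolve one hierarchical path, so that every evaluation, simulation,
    or model calculation links to the same stored value."""
    return store[nucleus]["levels"][index]["energy_keV"]
```
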
Particle-in-cell simulations with charge-conserving current deposition on graphic processing units
NASA Astrophysics Data System (ADS)
Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren
2011-10-01
Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU-PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms make efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by the U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
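"Charge-conserving" here means the deposited current satisfies the discrete continuity equation exactly, so Gauss's law never drifts. In 1D this can be built by a prefix sum over the change in shape-function weights; the sketch below shows that generic construction (an illustration of the property, not the GPU-specific kernel from the talk).

```python
import numpy as np

def shape_weights(x, ng):
    """CIC (linear) deposition weights on a periodic grid with unit spacing."""
    w = np.zeros(ng)
    i = int(np.floor(x)) % ng
    d = x - np.floor(x)
    w[i] += 1.0 - d
    w[(i + 1) % ng] += d
    return w

def conserving_current(x_old, x_new, q, dt, ng):
    """Face currents J_{i+1/2} built by prefix summation, so that the discrete
    continuity equation (rho_new - rho_old)/dt + div J = 0 holds exactly."""
    dw = q * (shape_weights(x_new, ng) - shape_weights(x_old, ng))
    return -np.cumsum(dw) / dt      # dw sums to zero, so J is periodic

# verify discrete continuity for a move inside a cell and one crossing a face
ng, q, dt = 16, 1.0, 0.5
J1 = conserving_current(3.4, 3.9, q, dt, ng)
r1 = q * (shape_weights(3.9, ng) - shape_weights(3.4, ng)) / dt + (J1 - np.roll(J1, 1))
J2 = conserving_current(3.8, 4.3, q, dt, ng)
r2 = q * (shape_weights(4.3, ng) - shape_weights(3.8, ng)) / dt + (J2 - np.roll(J2, 1))
```

The prefix-sum form works for any shape function; production schemes reorganize the same bookkeeping into local per-particle updates.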
Controlled quantum perfect teleportation of multiple arbitrary multi-qubit states
NASA Astrophysics Data System (ADS)
Shi, Runhua; Huang, Liusheng; Yang, Wei; Zhong, Hong
2011-12-01
We present an efficient controlled quantum perfect teleportation scheme. In our scheme, multiple senders can teleport multiple arbitrary unknown multi-qubit states to a single receiver via a previously shared entanglement state with the help of one or more controllers. Furthermore, our scheme has very good performance in measurement and operation complexity, since it only needs to perform Bell-state and single-particle measurements and to apply Controlled-NOT gates and other single-particle unitary operations. In addition, compared with traditional schemes, our scheme needs fewer qubits as quantum resources and exchanges less classical information, and thus achieves higher communication efficiency.
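The Bell-measurement-plus-correction structure underlying such teleportation schemes can be checked directly with a small statevector calculation. Below is a minimal single-qubit sketch of the standard primitive (the paper's multi-sender, multi-controller protocol builds on this, but the circuit here is the textbook version, not the paper's scheme): Alice applies CNOT and Hadamard, her two measurement bits select a branch, and Bob's Z^m0 X^m1 correction always recovers the unknown state.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def cnot01():
    """CNOT on 3 qubits: qubit 0 (leftmost tensor factor) controls qubit 1."""
    U = np.zeros((8, 8), dtype=complex)
    for b in range(8):
        q0, q1, q2 = (b >> 2) & 1, (b >> 1) & 1, b & 1
        if q0:
            q1 ^= 1
        U[(q0 << 2) | (q1 << 1) | q2, b] = 1.0
    return U

psi = np.array([0.6, 0.8j])                                  # unknown state
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # shared |Phi+>
state = np.kron(psi, bell)                                   # Alice: A, B; Bob: C
state = np.kron(np.kron(H, I2), I2) @ (cnot01() @ state)     # Alice's circuit

fidelities = []
for m0 in (0, 1):            # Alice's two classical measurement bits
    for m1 in (0, 1):
        branch = state.reshape(2, 2, 2)[m0, m1, :]
        branch = branch / np.linalg.norm(branch)
        fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
        fidelities.append(abs(np.vdot(psi, fix @ branch)))
# every outcome recovers psi after the correction (fidelity 1 up to round-off)
```
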
The small-scale turbulent dynamo in smoothed particle magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Tricco, T. S.; Price, D. J.; Federrath, C.
2016-05-01
Supersonic turbulence is believed to be at the heart of star formation. We have performed smoothed particle magnetohydrodynamics (SPMHD) simulations of the small-scale dynamo amplification of magnetic fields in supersonic turbulence. The calculations use isothermal gas driven at an rms velocity of Mach 10, so that conditions are representative of star-forming molecular clouds in the Milky Way. The growth of magnetic energy is followed over 10 orders of magnitude until it reaches saturation, at a few per cent of the kinetic energy. The results of our dynamo calculations are compared with results from grid-based methods, finding excellent agreement in their statistics and qualitative behaviour. The simulations utilise our latest algorithmic developments, in particular, a new divergence cleaning approach to maintain the solenoidal constraint on the magnetic field and a method to reduce the numerical dissipation of the magnetic shock-capturing scheme. We demonstrate that our divergence cleaning method may be used to achieve ∇·B = 0 to machine precision, albeit at significant computational expense.
Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aktulga, Hasan Metin; Coffman, Paul; Shan, Tzu-Ray
2015-12-01
Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on a state-of-the-art multi-core architecture, Mira, an IBM BlueGene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.
NASA Astrophysics Data System (ADS)
Riest, Jonas; Nägele, Gerhard; Liu, Yun; Wagner, Norman J.; Godfrin, P. Douglas
2018-02-01
Recently, atypical static features of microstructural ordering in low-salinity lysozyme protein solutions have been extensively explored experimentally and explained theoretically based on a short-range attractive plus long-range repulsive (SALR) interaction potential. However, the protein dynamics and the relationship to the atypical SALR structure remain to be demonstrated. Here, the applicability of semi-analytic theoretical methods predicting diffusion properties and viscosity in isotropic particle suspensions to low-salinity lysozyme protein solutions is tested. Using the interaction potential parameters previously obtained from static structure factor measurements, our results of Monte Carlo simulations representing seven experimental lysozyme samples indicate that they exist either in dispersed fluid or random percolated states. The self-consistent Zerah-Hansen scheme is used to describe the static structure factor, S(q), which is the input to our calculation schemes for the short-time hydrodynamic function, H(q), and the zero-frequency viscosity η. The schemes account for hydrodynamic interactions included on an approximate level. Theoretical predictions for H(q) as a function of the wavenumber q quantitatively agree with experimental results at small protein concentrations obtained using neutron spin echo measurements. At higher concentrations, qualitative agreement is preserved although the calculated hydrodynamic functions are overestimated. We attribute the differences for higher concentrations and lower temperatures to translational-rotational diffusion coupling induced by the shape and interaction anisotropy of particles and clusters, patchiness of the lysozyme particle surfaces, and the intra-cluster dynamics, features not included in our simple globular particle model. The theoretical results for the solution viscosity, η, are in qualitative agreement with our experimental data even at higher concentrations.
We demonstrate that semi-quantitative predictions of diffusion properties and viscosity of solutions of globular proteins are possible given only the equilibrium structure factor of proteins. Furthermore, we explore the effects of changing the attraction strength on H(q) and η.
NASA Astrophysics Data System (ADS)
Lee, H.-H.; Chen, S.-H.; Kleeman, M. J.; Zhang, H.; DeNero, S. P.; Joe, D. K.
2015-11-01
The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-dimensional chemical variable (X, Z, Y, Size Bins, Source Types, Species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and longwave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011, in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from mountains into the valley. The SOWC model produced reasonable liquid water path, spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach that artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into CCN at a supersaturation of 0.5 % in the Central Valley decreased from 94 % in the internal mixture model to 80 % in the source-oriented model. This increased surface energy flux by 3-5 W m-2 and surface temperature by as much as 0.25 K in the daytime.
Anisotropic thermal conduction with magnetic fields in galaxy clusters
NASA Astrophysics Data System (ADS)
Arth, Alexander; Dolag, Klaus; Beck, Alexander; Petkova, Margarita; Lesch, Harald
2015-08-01
Magnetic fields play an important role in the propagation and diffusion of charged particles, which are responsible for thermal conduction. In this poster, we present an implementation of thermal conduction including the anisotropic effects of magnetic fields for smoothed particle hydrodynamics (SPH). The anisotropic thermal conduction proceeds mainly parallel to magnetic fields and is suppressed perpendicular to the fields. We derive the SPH formalism for the anisotropic heat transport and solve the corresponding equation with an implicit conjugate gradient scheme. We discuss several issues of unphysical heat transport in the cases of extreme anisotropies or unmagnetized regions and present possible numerical workarounds. We implement our algorithm into the cosmological simulation code GADGET and study its behaviour in several test cases. In general, we reproduce the analytical solutions of our idealised test problems, and obtain good results in cosmological simulations of galaxy cluster formation. Within galaxy clusters, the anisotropic conduction produces a net heat transport similar to an isotropic Spitzer conduction model with low efficiency. In contrast to isotropic conduction, our new formalism allows small-scale structure in the temperature distribution to remain stable, because of their decoupling caused by magnetic field lines. Compared to observations, strong isotropic conduction leads to an oversmoothed temperature distribution within clusters, while the results obtained with anisotropic thermal conduction reproduce the observed temperature fluctuations well. A proper treatment of heat transport is crucial especially in the outskirts of clusters and also in high-density regions. Its connection to the local dynamical state of the cluster might also contribute to the observed bimodal distribution of cool-core and non-cool-core clusters.
Our new scheme significantly advances the modelling of thermal conduction in numerical simulations and overall gives better results compared to observations.
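The parallel-only character of the transport can be illustrated with a small explicit finite-difference sketch. The poster's implementation is an implicit conjugate-gradient SPH solver; the periodic grid version below, with assumed unit spacing and a uniform field, only demonstrates the flux projection q = κ(b·∇T)b: temperature variation across the field is untouched while variation along it diffuses away.

```python
import numpy as np

def aniso_step(T, bx, by, kappa, dt):
    """Explicit update of dT/dt = div( kappa (b . grad T) b ) on a periodic
    unit-spacing grid: the heat flux is the temperature gradient projected
    onto the unit field direction b, so conduction acts only along the field."""
    dTdx = 0.5 * (np.roll(T, -1, 0) - np.roll(T, 1, 0))
    dTdy = 0.5 * (np.roll(T, -1, 1) - np.roll(T, 1, 1))
    q = kappa * (bx * dTdx + by * dTdy)          # flux component along b
    qx, qy = bx * q, by * q
    return T + dt * (0.5 * (np.roll(qx, -1, 0) - np.roll(qx, 1, 0))
                     + 0.5 * (np.roll(qy, -1, 1) - np.roll(qy, 1, 1)))

n = 16
idx = np.arange(n)
bx, by = 1.0, 0.0                                    # uniform field along x
T_perp = np.tile(np.sin(2 * np.pi * idx / n), (n, 1))          # varies across b
T_par = np.tile(np.sin(2 * np.pi * idx / n)[:, None], (1, n))  # varies along b
T_out, T_par_out = T_perp.copy(), T_par.copy()
for _ in range(50):
    T_out = aniso_step(T_out, bx, by, 1.0, 0.1)          # flux is zero: frozen
    T_par_out = aniso_step(T_par_out, bx, by, 1.0, 0.1)  # decays along field
```
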
Laser-plasma interactions with a Fourier-Bessel particle-in-cell method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andriyash, Igor A.; Lehe, Remi
A new spectral particle-in-cell (PIC) method for plasma modeling is presented and discussed. In the proposed scheme, the Fourier-Bessel transform is used to translate the Maxwell equations to the quasi-cylindrical spectral domain. In this domain, the equations are solved analytically in time, and the spatial derivatives are approximated with high accuracy. In contrast to the finite-difference time domain (FDTD) methods that are commonly used in PIC, the developed method does not produce numerical dispersion and does not involve grid staggering for the electric and magnetic fields. These features are especially valuable in modeling the wakefield acceleration of particles in plasmas. The proposed algorithm is implemented in the code PLARES-PIC, and the test simulations of laser plasma interactions are compared to the ones done with the quasi-cylindrical FDTD PIC code CALDER-CIRC.
Stochastic competitive learning in complex networks.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Competitive learning is an important machine learning approach widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particle's walking rule is a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, with low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative approach to the study of competitive learning.
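The walking rule described above, a stochastic mix of random and preferential movement, can be sketched as a single step function. This is an illustration in the spirit of the model, not the paper's exact rule; `p_pref`, the domination bookkeeping, and all parameter values are assumptions:

```python
import random

def next_node(current, adj, domination, particle, p_pref=0.6, rng=random):
    """One step of an illustrative random/preferential walk: with
    probability p_pref the particle favours neighbouring nodes it
    already dominates; otherwise it jumps uniformly at random."""
    nbrs = adj[current]
    if rng.random() < p_pref:
        # Preferential move: weight neighbours by this particle's domination.
        weights = [1.0 + domination[n].get(particle, 0.0) for n in nbrs]
        r, acc = rng.random() * sum(weights), 0.0
        for n, w in zip(nbrs, weights):
            acc += w
            if r <= acc:
                return n
        return nbrs[-1]
    return rng.choice(nbrs)            # purely random move

# Tiny ring graph; particle "A" heavily dominates node 2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
dom = {n: {} for n in adj}
dom[2]["A"] = 10.0
random.seed(1)
step = next_node(1, adj, dom, "A")
```

Averaged over many steps, the preferential term biases each particle toward its own territory, which is what lets the competition partition the network into communities.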
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsouleas, Thomas; Decyk, Viktor
Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models", Viktor K. Decyk, University of California, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in low-density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale particle-in-cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, that slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can run simultaneously. The major difficulty arose when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run on more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8].
Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasi-static scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds.
References
[1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006).
[2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009).
[3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190.
[4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, and P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006].
[5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June 2007, p. 3615.
[6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, and T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August 2008, p. 340 [AIP Conf. Proceedings, vol. 1086, Melville, NY, 2008].
[7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009).
[8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005.
[9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June 2009, paper FR5RFP078.
Investigation of nonlinear motion simulator washout schemes
NASA Technical Reports Server (NTRS)
Riedel, S. A.; Hofmann, L. G.
1978-01-01
An overview is presented of some of the promising washout schemes which have been devised. The four schemes presented fall into two basic configurations: crossfeed and crossproduct. Various nonlinear modifications further differentiate the four schemes. One nonlinear scheme is discussed in detail. This washout scheme takes advantage of subliminal motions to speed up simulator cab centering. It exploits so-called perceptual indifference thresholds to center the simulator cab at a faster rate whenever the input to the simulator is below the perceptual indifference level. The effect is to reduce the angular and translational simulation motion by comparison with the linear washout case. Finally, conclusions and implications for further research in the area of nonlinear washout filters are presented.
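The threshold idea above can be sketched as a gain schedule: when the commanded motion is below the perceptual indifference level, the cab is re-centred faster without the pilot noticing. All gains, names, and the first-order centering law are illustrative assumptions, not the report's actual filter:

```python
def washout_gain(input_level, threshold, k_normal=0.2, k_fast=1.0):
    """Centering-rate gain for an illustrative nonlinear washout:
    inputs below the perceptual indifference threshold allow a faster,
    subliminal re-centering rate (gain values are made up)."""
    return k_fast if abs(input_level) < threshold else k_normal

def center_step(position, input_level, threshold, dt=0.05):
    # One first-order washout step pulling the cab toward center.
    return position - washout_gain(input_level, threshold) * position * dt

pos_fast = center_step(1.0, input_level=0.01, threshold=0.1)  # subliminal input
pos_slow = center_step(1.0, input_level=0.50, threshold=0.1)  # perceptible input
```

A real washout filter is a dynamic high-pass system rather than a static gain, but the same switching logic sits at its core.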
Non-Maxwellian fast particle effects in gyrokinetic GENE simulations
NASA Astrophysics Data System (ADS)
Di Siena, A.; Görler, T.; Doerk, H.; Bilato, R.; Citrin, J.; Johnson, T.; Schneider, M.; Poli, E.; JET Contributors
2018-04-01
Fast ions have recently been found to significantly impact and partially suppress plasma turbulence in a number of scenarios, in both experimental and numerical studies. Understanding the underlying physics and identifying the range of their beneficial effect is an essential task for future fusion reactors, where highly energetic ions are generated through fusion reactions and external heating schemes. However, in many gyrokinetic codes fast ions are, for simplicity, treated as equivalent-Maxwellian-distributed particle species, although it is well known that rigorously modeling highly non-thermalised particles requires a non-Maxwellian background distribution function. To study the impact of this assumption, the gyrokinetic code GENE has recently been extended to support arbitrary background distribution functions, which may be either analytical, e.g., slowing-down and bi-Maxwellian, or obtained from numerical fast-ion models. A particular JET plasma with strong fast-ion-related turbulence suppression is revisited with these new code capabilities in both linear and nonlinear gyrokinetic simulations. The fast-ion stabilization tends to be weaker but still substantial with more realistic distributions, which improves the quantitative power-balance agreement with experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency-domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the particle-in-cell (PIC) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
Beam-plasma coupling physics in support of active experiments
NASA Astrophysics Data System (ADS)
Yakymenko, K.; Delzanno, G. L.; Roytershteyn, V.
2017-12-01
The recent development of compact relativistic accelerators might open up a new era of active experiments in space, driven by important scientific and national security applications. Examples include using electron beams to trace magnetic field lines and establish causality between physical processes occurring in the magnetosphere and those in the ionosphere. Another example is the use of electron beams to trigger waves in the near-Earth environment. Waves could induce pitch-angle scattering and precipitation of energetic electrons, acting as an effective radiation belt remediation scheme. In this work, we revisit the coupling between an electron beam and a magnetized plasma in the framework of linear cold-plasma theory. We show that coupling can occur through two different regimes. In the first, a non-relativistic beam radiates through whistler waves. This is well known, and was in fact the focus of many rocket and space-shuttle campaigns in the eighties aimed at demonstrating whistler emissions. In the second regime, the beam radiates through extraordinary (R-X) modes. Nonlinear simulations with a highly accurate Vlasov code support the theoretical results qualitatively and demonstrate that the radiated power through R-X modes can be much larger than in the whistler regime. Test-particle simulations in the wave electromagnetic field will also be presented to assess the efficiency of these waves in inducing pitch-angle scattering via wave-particle interactions. Finally, the implications of these results for a rocket active experiment in the ionosphere and for a radiation belt remediation scheme will be discussed.
ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems.
Niethammer, Christoph; Becker, Stefan; Bernreuther, Martin; Buchholz, Martin; Eckhardt, Wolfgang; Heinecke, Alexander; Werth, Stephan; Bungartz, Hans-Joachim; Glass, Colin W; Hasse, Hans; Vrabec, Jadran; Horsch, Martin
2014-10-14
The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
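The multicenter models mentioned above are built from Lennard-Jones interaction sites. A minimal sketch of the elementary 12-6 pair potential in reduced units (the parameters and function name are illustrative, not ls1 mardyn internals):

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential, the elementary site-site
    interaction behind multicenter rigid models (reduced units;
    epsilon and sigma here are arbitrary placeholders)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2^(1/6) * sigma with depth -epsilon.
e_min = lj_energy(2.0 ** (1.0 / 6.0))
```

In a production MD code, this per-pair evaluation sits inside a cell-list or neighbor-list loop, which is exactly the part that load balancing and vectorization target.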
Sensitivity of simulated snow cloud properties to mass-diameter parameterizations.
NASA Astrophysics Data System (ADS)
Duffy, G.; Nesbitt, S. W.; McFarquhar, G. M.
2015-12-01
Mass-to-diameter (m-D) relationships are used in model parameterization schemes to represent ice cloud microphysics and in retrievals of bulk cloud properties from remote sensing instruments. One of the most common relationships, used in the current Global Precipitation Measurement retrieval algorithm for example, assigns the density of snow as a constant tenth of the density of ice (0.1 g/cm^3). This assumption stands in contrast to derived m-D relationships for snow particles, which imply decreasing particle densities at larger sizes and yield particle masses orders of magnitude below the constant-density relationship. In this study, forward simulations of bulk cloud properties (e.g., total water content, radar reflectivity and precipitation rate) derived from measured size distributions using several historical m-D relationships are presented. This expands upon previous studies, which mainly focused on smaller ice particles, by examining precipitation-sized particles. In situ and remote sensing data from the GPM Cold season Experiment (GCPEx) and Canadian CloudSat/CALIPSO Validation Project (C3VP), both synoptic snowstorm field experiments in southern Ontario, Canada, are used to evaluate the forward simulations against total water content measured by the Nevzorov and Cloud Spectrometer and Impactor (CSI) probes, radar reflectivity measured by a C-band ground-based radar and a nadir-pointing Ku/Ka dual-frequency airborne radar, and precipitation rate measured by a 2D video disdrometer. There are differences between the bulk cloud properties derived using varying m-D relations, with constant-density assumptions producing results that differ substantially from the bulk measured quantities. The variability in bulk cloud properties derived using different m-D relations is compared against the natural variability in those parameters seen in the GCPEx and C3VP field experiments.
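The forward simulation described above boils down to integrating an m-D power law, m = a·D^b, over a measured size distribution. A sketch with made-up numbers (the diameters, counts, and coefficients below are illustrative, not the paper's fits) shows why the constant-density assumption inflates the bulk mass:

```python
import numpy as np

def ice_water_content(diams, counts, a, b):
    """Integrate an m-D power law m = a * D**b over a binned size
    distribution (diams in cm, counts per unit volume; a and b are
    illustrative coefficients, not fitted values)."""
    return np.sum(counts * a * diams ** b)

diams = np.array([0.05, 0.1, 0.2, 0.4])        # bin-center diameters, cm
counts = np.array([100.0, 50.0, 10.0, 1.0])    # particles per unit volume

# Constant-density sphere (rho = 0.1 g/cm^3): a = rho * pi / 6, b = 3.
iwc_const = ice_water_content(diams, counts, a=0.1 * np.pi / 6, b=3)
# Aggregate-type laws have b < 3, so large particles carry far less mass.
iwc_power = ice_water_content(diams, counts, a=0.005, b=2.1)
```

For this toy distribution the b < 3 law yields a much smaller total water content, the direction of the discrepancy the abstract reports against bulk measurements.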
NASA Astrophysics Data System (ADS)
Yang, Xiong; Cheng, Mousen; Guo, Dawei; Wang, Moge; Li, Xiaokang
2017-10-01
On the basis of considering electrochemical reactions and collision relations in detail, a direct numerical simulation model of a helicon plasma discharge with three-dimensional two-fluid equations was employed to study the characteristics of the temporal evolution of particle density and electron temperature. With the assumption of weak ionization, the Maxwell equations coupled with the plasma parameters were directly solved in the whole computational domain. All of the partial differential equations were solved by the finite element solver in COMSOL Multiphysics™ with a fully coupled method. In this work, the numerical cases were calculated with an Ar working medium and a Shoji-type antenna. The numerical results indicate that there exist two distinct modes of temporal evolution of the electron and ground-atom density, which can be explained by the ion pumping effect. The evolution of the electron temperature is controlled by two mechanisms: electromagnetic wave heating and particle collision cooling. High RF power results in a high peak electron temperature, while high gas pressure leads to a low steady-state temperature. In addition, an optical emission spectroscopy (OES) experiment using nine Ar I lines was conducted with a modified collisional-radiative (CR) model to verify the validity of the simulation results, showing that the trends of temporal evolution of electron density and temperature are consistent with the numerically simulated ones.
Granular materials interacting with thin flexible rods
NASA Astrophysics Data System (ADS)
Neto, Alfredo Gay; Campello, Eduardo M. B.
2017-04-01
In this work, we develop a computational model for the simulation of problems wherein granular materials interact with thin flexible rods. We treat granular materials as a collection of spherical particles following a discrete element method (DEM) approach, while flexible rods are described by a large-deformation finite element (FEM) rod formulation. Grain-to-grain, grain-to-rod, and rod-to-rod contacts are fully permitted and resolved. A simple and efficient strategy is proposed for coupling the motion of the two types (discrete and continuum) of materials within an iterative time-stepping solution scheme. Implementation details are shown and discussed. Validity and applicability of the model are assessed by means of a few numerical examples. We believe that robust, efficiently coupled DEM-FEM schemes can be a useful tool for simulating problems wherein granular materials interact with thin flexible rods, such as (but not limited to) the bombardment of grains on beam structures, the flow of granular materials over surfaces covered by threads of hair in many biological processes, and the flow of grains through filters and strainers in various industrial segregation processes.
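The grain-to-grain contact at the heart of a DEM model can be sketched with a linear spring-dashpot normal force between two overlapping spheres. Stiffness and damping values, and the specific force law, are illustrative assumptions rather than the paper's contact model:

```python
import numpy as np

def normal_contact_force(x1, x2, r1, r2, v1, v2, kn=1.0e4, cn=5.0):
    """Linear spring-dashpot normal force on sphere 1 from contact with
    sphere 2 -- the elementary grain-to-grain ingredient of a DEM model
    (kn, cn are illustrative, not calibrated values)."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)            # spheres not in contact
    n = d / dist                      # unit normal from sphere 1 toward 2
    vrel_n = np.dot(v2 - v1, n)       # normal relative velocity (negative if approaching)
    return -(kn * overlap - cn * vrel_n) * n

# Two unit spheres overlapping by 0.5 along x, at rest: pure spring repulsion.
z = np.zeros(3)
f = normal_contact_force(np.array([0.0, 0.0, 0.0]),
                         np.array([1.5, 0.0, 0.0]), 1.0, 1.0, z, z)
```

In the coupled scheme, forces like this one are summed per grain each step and exchanged with the FEM rod solver inside the iterative time-stepping loop.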
Extended Magnetohydrodynamics with Embedded Particle-in-Cell Simulation of Ganymede's Magnetosphere
NASA Technical Reports Server (NTRS)
Toth, Gabor; Jia, Xianzhe; Markidis, Stefano; Peng, Ivy Bo; Chen, Yuxi; Daldorff, Lars K. S.; Tenishev, Valeriy M.; Borovikov, Dmitry; Haiducek, John D.; Gombosi, Tamas I.;
2016-01-01
We have recently developed a new modeling capability to embed the implicit particle-in-cell (PIC) model iPIC3D into the Block-Adaptive-Tree-Solarwind-Roe-Upwind-Scheme magnetohydrodynamic (MHD) model. The MHD with embedded PIC domains (MHD-EPIC) algorithm is a two-way coupled kinetic-fluid model. As one of the very first applications of the MHD-EPIC algorithm, we simulate the interaction between Jupiter's magnetospheric plasma and Ganymede's magnetosphere. We compare the MHD-EPIC simulations with pure Hall MHD simulations and compare both model results with Galileo observations to assess the importance of kinetic effects in controlling the configuration and dynamics of Ganymede's magnetosphere. We find that the Hall MHD and MHD-EPIC solutions are qualitatively similar, but there are significant quantitative differences. In particular, the density and pressure inside the magnetosphere show different distributions. For our baseline grid resolution the PIC solution is more dynamic than the Hall MHD simulation and it compares significantly better with the Galileo magnetic measurements than the Hall MHD solution. The power spectra of the observed and simulated magnetic field fluctuations agree extremely well for the MHD-EPIC model. The MHD-EPIC simulation also produced a few flux transfer events (FTEs) that have magnetic signatures very similar to an observed event. The simulation shows that the FTEs often exhibit complex 3-D structures with their orientations changing substantially between the equatorial plane and the Galileo trajectory, which explains the magnetic signatures observed during the magnetopause crossings. The computational cost of the MHD-EPIC simulation was only about 4 times more than that of the Hall MHD simulation.
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-07
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC for this problem. In fact, some beamlets have very small intensities after solving the plan optimization problem, and for those beamlets fewer particles can be used in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% of the ground truth in fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet.
Correspondingly, the computation time including both MC dose calculations and plan optimizations was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
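The allocation step described above, reassigning particle budgets to beamlets in proportion to their current optimized intensities, can be sketched as a single function. The proportional rule and the floor parameter are assumptions in the spirit of the paper, not its exact scheme:

```python
def allocate_particles(intensities, total_particles, floor=1000):
    """Hypothetical allocation step: sample each beamlet in proportion
    to its current optimized intensity, with a floor so that weak
    beamlets still receive some statistics (all numbers illustrative)."""
    total_w = sum(intensities)
    if total_w == 0.0:
        return [floor] * len(intensities)
    return [max(floor, int(round(total_particles * w / total_w)))
            for w in intensities]

# Three beamlets: one switched off, one weak, one strong.
counts = allocate_particles([0.0, 1.0, 3.0], total_particles=400_000)
```

Iterating this rule with the optimizer concentrates MC effort on the beamlets that actually contribute to the plan, which is where the reported factor-of-several speedup comes from.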
Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution
NASA Astrophysics Data System (ADS)
Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.
2004-02-01
We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of <1 μm the real part of the complex refractive index was retrieved to an accuracy of +/-0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.
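The core numerical step named above is Tikhonov's inversion with regularization. A bare-bones sketch of the regularized least-squares solve via the normal equations (the real retrieval additionally uses a smoothing operator and physical constraints on the size distribution; the matrix and parameters below are illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares,
    min ||A x - b||^2 + lam * ||x||^2, via the normal equations.
    (Sketch only; lidar retrievals replace the identity penalty with
    a smoothing operator and constrain the solution physically.)"""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Well-conditioned toy kernel: a tiny lam barely perturbs the solution,
# while a huge lam damps it toward zero.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = A @ np.array([1.0, 2.0])
x_reg = tikhonov_solve(A, b, lam=1e-9)
x_heavy = tikhonov_solve(A, b, lam=1e3)
```

Choosing `lam` trades fidelity to the noisy optical data against stability of the retrieved microphysical parameters, which is what makes the inversion work for ill-posed lidar kernels.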
A Lagrangian particle method with remeshing for tracer transport on the sphere
Bosler, Peter Andrew; Kent, James; Krasny, Robert; ...
2017-03-30
A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin-Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.
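The first remeshing scheme named above, direct tracer interpolation, can be sketched in one dimension: tracer values carried by distorted particles are interpolated onto a fresh uniform particle set. Linear interpolation on a line stands in here for the paper's interpolation on triangular Lagrangian panels of the sphere:

```python
import numpy as np

def remesh_direct(x_old, q_old, x_new):
    """Direct-interpolation remeshing: tracer values carried by
    distorted particles are interpolated onto fresh uniform particles
    (1D sketch of the idea, not the spherical-panel implementation)."""
    order = np.argsort(x_old)
    return np.interp(x_new, x_old[order], q_old[order])

# Distorted particle positions carrying a linear tracer q(x) = x.
x_old = np.array([0.9, 0.1, 0.55, 0.0, 1.0])
q_old = x_old.copy()
x_new = np.linspace(0.0, 1.0, 6)          # fresh uniform particle set
q_new = remesh_direct(x_old, q_old, x_new)
```

Because linear interpolation reproduces linear data exactly, the remeshed tracer equals q(x) on the new particles; for general tracers the interpolation error is what the paper's second (inverse flow map) scheme is designed to control.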
Fiore, Andrew M; Swan, James W
2018-01-28
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. 
The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material.
NASA Astrophysics Data System (ADS)
Clementi, N. C.; Revelli, J. A.; Sibona, G. J.
2015-07-01
We propose a general nonlinear analytical framework to study the effect of an external stimulus on the internal state of a population of moving particles. This novel scheme allows us to study a broad range of excitation transport phenomena. In particular, for social systems, it gives insight into the influence of spatial dynamics on the competition between propaganda (mass media) and convincement. By extending the framework presented by Terranova et al. [Europhys. Lett. 105, 30007 (2014), 10.1209/0295-5075/105/30007], we now allow changes in individuals' opinions due to a reflection induced by mass media. The equations of the model can be solved numerically and, for some special cases, it is possible to derive analytical solutions for the steady states. We implement computational simulations for different social and dynamical systems to check the accuracy of our scheme and to study a broader variety of scenarios. In particular, we compare the numerical outcome with the analytical results for two possible real cases, finding good agreement. From the results, we observe that mass media dominate the opinion state in slow-dynamics communities, whereas for higher agent speeds the rate of interactions increases and the opinion state is determined by a competition between propaganda and persuasion. This difference suggests that kinetics cannot be neglected in the study of the transport of any excitation over a particle system.
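The propaganda-versus-persuasion competition can be illustrated with a toy opinion update: at each encounter an agent is either exposed to the mass-media message or partially convinced by its partner. The rule, the probability `p_media`, and the convincement strength `mu` are illustrative assumptions, not the paper's model:

```python
import random

def update_opinion(op, partner_op, media_op, p_media=0.3, mu=0.5, rng=random):
    """Toy opinion update in the spirit of the model: with probability
    p_media an encounter exposes the agent to the mass-media message,
    otherwise the agent is partially convinced by its partner
    (rule and parameters are illustrative)."""
    if rng.random() < p_media:
        return op + mu * (media_op - op)      # pulled toward propaganda
    return op + mu * (partner_op - op)        # pulled toward the partner

random.seed(0)
new_op = update_opinion(0.0, partner_op=1.0, media_op=-1.0)
```

Faster-moving agents meet more partners per media exposure, which in this caricature is exactly the mechanism shifting the balance from propaganda toward persuasion.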
Lung cancer risk of airborne particles for Italian population
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buonanno, G., E-mail: buonanno@unicas.it; International Laboratory for Air Quality and Health, Queensland University of Technology, 2 George Street, 4001 Brisbane, Qld.; Giovinco, G., E-mail: giovinco@unicas.it
Airborne particles, including both ultrafine and supermicrometric particles, contain various carcinogens. Exposure and risk-assessment studies regularly use particle mass concentration as the dosimetry parameter, thereby neglecting the potential impact of ultrafine particles because of their negligible mass compared to supermicrometric particles. The main purpose of this study was the characterization of the lung cancer risk due to exposure to polycyclic aromatic hydrocarbons and certain heavy metals associated with particle inhalation by Italian non-smoking people. A risk-assessment scheme, modified from an existing risk model, was applied to estimate the cancer risk contribution from both ultrafine and supermicrometric particles. Exposure assessment was carried out on the basis of particle number distributions measured in 25 smoke-free microenvironments in Italy. The predicted lung cancer risk was then compared to the cancer incidence rate in Italy to assess the number of lung cancer cases attributable to airborne particle inhalation, which represents one of the main causes of lung cancer apart from smoking. Ultrafine particles are associated with a much higher risk than supermicrometric particles, and the modified risk-assessment scheme provided a more accurate estimate than the conventional scheme. Great attention has to be paid to indoor microenvironments and, in particular, to cooking and eating times, which represent the major contributors to lung cancer incidence in the Italian population. The modified risk-assessment scheme can serve as a tool for assessing environmental quality, as well as for setting up exposure standards for particulate matter. - Highlights: • Lung cancer risk for the non-smoking Italian population due to particle inhalation. • The average lung cancer risk for the Italian population is equal to 1.90×10⁻². • Ultrafine particles are the aerosol metric contributing most to lung cancer risk. • B(a)P is the main (particle-bound) compound contributing to lung cancer risk. • Cooking activities represent the principal contributor to the lung cancer risk.
NASA Astrophysics Data System (ADS)
Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.
2017-10-01
We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ², 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1), with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolation of field values to particle positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated on cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from coherent (resonance) Cherenkov electron beam interactions within micro-machined slow-wave structures.
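The relativistic Boris push referenced above is a standard particle advance; a minimal sketch (SI units; any charge, mass, and field values used in a demo are illustrative) performs the usual half electric kick, magnetic rotation, half electric kick:

```python
import numpy as np

def boris_push(p, E, B, q, m, dt):
    """One relativistic Boris step for momentum p (SI units):
    half electric kick, exact magnetic rotation, half electric kick."""
    c = 299792458.0
    p_minus = p + q * E * (dt / 2.0)                 # first half kick
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / (m * c) ** 2)
    t = q * B * dt / (2.0 * gamma * m)               # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)
    p_plus = p_minus + np.cross(p_prime, s)          # norm-preserving rotation
    return p_plus + q * E * (dt / 2.0)               # second half kick
```

With E = 0 the rotation preserves |p| exactly, which is the property that makes the Boris scheme well behaved over long pushes in a magnetic field.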
NASA Astrophysics Data System (ADS)
Salimun, Ester; Tangang, Fredolin; Juneng, Liew
2010-06-01
A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), Betts-Miller (BM), Kain-Fritsch (KF), and Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth-Generation Mesoscale Model (MM5). The event was a westward-propagating tropical depression, a weather system that commonly occurs during boreal winter and results from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in simulation performance can be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better at resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM run was of the convective type. The failure of the other schemes (AK, GR, and KF) in simulating the event may be attributed to their trigger functions, closure assumptions, and precipitation schemes. On the other hand, the suitability of the BM scheme for this episode may not generalize to other episodes or convective environments.
Counterfactual quantum-information transfer without transmitting any physical particles
NASA Astrophysics Data System (ADS)
Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou
2015-02-01
We demonstrate that quantum information can be transferred between two distant participants without any physical particles traveling between them. The key procedure of the counterfactual scheme is to entangle two nonlocal qubits with each other without interaction, so the scheme can also be used to generate nonlocal entanglement counterfactually. We illustrate the scheme using flying photon qubits and Rydberg atom qubits assisted by a mesoscopic atomic ensemble. Unlike typical teleportation, the present scheme can transport an unknown qubit in a nondeterministic manner without prior entanglement sharing or classical communication between the two distant participants.
NASA Astrophysics Data System (ADS)
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-01
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von Neumann equation. We solve for the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with system size and linearly with simulation time. As a result, simulations of transient currents through systems containing up to one hundred atoms have been carried out. Since density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
Goyon, C; Depierreux, S; Yahia, V; Loisel, G; Baccou, C; Courvoisier, C; Borisenko, N G; Orekhov, A; Rosmej, O; Labaune, C
2013-12-06
An experimental program was designed to study the most important issues of laser-plasma interaction physics in the context of the shock ignition scheme. In the new experiments presented in this Letter, a combination of kilojoule and short laser pulses was used to study the laser-plasma coupling at high laser intensities for a large range of electron densities and plasma profiles. We find that the backscatter is dominated by stimulated Brillouin scattering, with stimulated Raman scattering staying at a limited level. This is in agreement with past experiments using long pulses but laser intensities limited to 2×10¹⁵ W/cm², or short pulses with intensities up to 5×10¹⁶ W/cm², as well as with 2D particle-in-cell simulations.
Concepts for a Muon Accelerator Front-End
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stratakis, Diktys; Berg, Scott; Neuffer, David
2017-03-16
We present a muon capture front-end scheme for muon-based applications. In this front-end design, a proton bunch strikes a target and creates secondary pions that drift into a capture channel, decaying into muons. A series of rf cavities forms the resulting muon beams into a series of bunches of different energies, aligns the bunches to equal central energies, and initiates ionization cooling. We also discuss the design of a chicane system for the removal of unwanted secondary particles from the muon capture region, which reduces activation of the machine. With the aid of numerical simulations we evaluate the performance of this front-end scheme and study its sensitivity to key parameters such as the type of target, the number of rf cavities, and the gas pressure of the channel.
Enhanced hole boring with two-color relativistic laser pulses in the fast ignition scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Changhai; Tian, Ye; Li, Wentao
A scheme using two-color laser pulses for hole boring into overdense plasma, as well as for energy transfer into electron and ion beams, has been studied using particle-in-cell simulations. Following an ultra-short, ultra-intense hole-boring laser pulse with a short central wavelength in the extreme-ultraviolet range, the main infrared driving laser pulse can be guided in the hollow channel preformed by the former pulse and propagate much deeper into an overdense plasma, as compared to the case using the infrared laser only. In addition to efficiently transferring the main driving laser energy into the generation of energetic electrons and ions deep inside the overdense plasma, the ion beam divergence can be greatly reduced. The results might be beneficial for the fast ignition concept of inertial confinement fusion.
Uniform laser-driven relativistic electron layer for coherent Thomson scattering.
Wu, H-C; Meyer-ter-Vehn, J; Fernández, J; Hegelich, B M
2010-06-11
A novel scheme is proposed to generate uniform relativistic electron layers for coherent Thomson backscattering. A few-cycle laser pulse is used to produce the electron layer from an ultrathin solid foil. The key element of the new scheme is an additional foil that reflects the drive-laser pulse but lets the electrons pass almost unperturbed. Making use of two-dimensional particle-in-cell simulations and well-known basic theory, it is shown that the electrons, after interacting with both the drive and reflected laser pulses, form a very uniform flyer freely cruising with a high relativistic γ factor exactly in the drive-laser direction (no transverse momentum). It backscatters the probe light with a full Doppler shift factor of 4γ². The reflectivity and its decay due to layer expansion are discussed.
Measuring h /mCs and the Fine Structure Constant with Bragg Diffraction and Bloch Oscillations
NASA Astrophysics Data System (ADS)
Parker, Richard
2016-05-01
We have demonstrated a new scheme for atom interferometry based on large-momentum-transfer Bragg beam splitters and Bloch oscillations. In this new scheme, we have achieved a resolution of δα/α = 0.25 ppb in the fine-structure-constant measurement, corresponding to up to 4.4 million radians of phase difference between freely evolving matter waves. We suppress many systematic effects, e.g., Zeeman shifts and effects from Earth's gravity and vibrations; use Bloch oscillations to increase the signal and reduce the diffraction phase; simulate multi-atom Bragg diffraction to understand sub-ppb systematic effects; and implement spatial filtering to further suppress systematic effects. We present our recent progress toward a measurement of the fine structure constant, which will provide a stringent test of the standard model of particle physics.
A Bulk Microphysics Parameterization with Multiple Ice Precipitation Categories.
NASA Astrophysics Data System (ADS)
Straka, Jerry M.; Mansell, Edward R.
2005-04-01
A single-moment bulk microphysics scheme with multiple ice precipitation categories is described. It has two liquid hydrometeor categories (cloud droplets and rain) and 10 ice categories characterized by habit, size, and density: two ice crystal habits (column and plate), rimed cloud ice, snow (ice crystal aggregates), three categories of graupel with different densities and intercepts, frozen drops, small hail, and large hail. The concept of riming history is implemented for conversions among the graupel and frozen-drop categories. The multiple ice precipitation categories allow a range of particle densities and fall velocities for simulating a variety of convective storms with minimal parameter tuning. The scheme is applied to two cases: an idealized continental multicell storm that demonstrates the ice precipitation process, and a small Florida maritime storm in which the warm rain process is important.
SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
2013-12-18
During the first year of the SciDAC gyrokinetic particle simulation (GPS) project, the GPS team (Zhihong Lin, Liu Chen, Yasutaro Nishimura, and Igor Holod) at the University of California, Irvine (UCI) studied tokamak electron transport driven by electron temperature gradient (ETG) turbulence, and by trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence with kinetic electron effects, and extended our studies of ITG turbulence spreading to core-edge coupling. We have developed and optimized an elliptic solver using the finite element method (FEM), which enables the implementation of advanced kinetic electron models (split-weight scheme and hybrid model) in the SciDAC GPS production code GTC. The GTC code has been ported and optimized on both scalar and vector parallel computer architectures, and is being transformed into object-oriented style to facilitate collaborative code development. During this period, the UCI team members presented 11 invited talks at major national and international conferences, and published 22 papers in peer-reviewed journals and 10 papers in conference proceedings. UCI hosted the annual SciDAC Workshop on Plasma Turbulence sponsored by the GPS Center, 2005-2007. The workshop was attended by about fifty US and foreign researchers and financially sponsored several graduate students from MIT, Princeton University, Germany, Switzerland, and Finland. A new SciDAC postdoc, Igor Holod, has arrived at UCI to initiate global particle simulation of magnetohydrodynamic turbulence driven by energetic particle modes. The PI, Z. Lin, has been promoted to Associate Professor with tenure at UCI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment / Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterizations of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates the rain rate from large and deep convective cores. Sensitivity runs that vary parameters affecting the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics, as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while the statistics worsen at 4 km grid spacing, as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kartavykh, Y. Y.; Dröge, W.; Gedalin, M.
2016-03-20
We use numerical solutions of the focused transport equation obtained by an implicit stochastic differential equation scheme to study the evolution of the pitch-angle dependent distribution function of protons in the vicinity of shock waves. For a planar stationary parallel shock, the effects of anisotropic distribution functions, pitch-angle dependent spatial diffusion, and first-order Fermi acceleration at the shock are examined, including the timescales on which the energy spectrum approaches the predictions of diffusive shock acceleration theory. We then consider the case in which a flare-accelerated population of ions is released close to the Sun simultaneously with a traveling interplanetary shock, for which we assume a simplified geometry. We investigate the consequences of adiabatic focusing in the diverging magnetic field for particle transport at the shock, and of the competing effects of acceleration at the shock and adiabatic energy losses in the expanding solar wind. We analyze the resulting intensities, anisotropies, and energy spectra as a function of time and find that our simulations can naturally reproduce the morphologies of so-called mixed particle events, in which sometimes the prompt and sometimes the shock component is more prominent, by assuming parameter values typical of observed scattering mean free paths of ions in the inner heliosphere and of the energy spectra of flare particles injected simultaneously with the release of the shock.
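The paper's scheme is implicit and tailored to focused transport; as a simplified explicit illustration of solving pitch-angle diffusion via stochastic differential equations, one can advance an ensemble with an Euler-Maruyama step, assuming the common model D(mu) = d0*(1 - mu**2) (an assumption for illustration, not the authors' operator):

```python
import numpy as np

def pitch_angle_step(mu, dt, d0, rng):
    """Euler-Maruyama step of pitch-angle diffusion:
    dmu = (dD/dmu) dt + sqrt(2 D(mu)) dW, with D(mu) = d0*(1 - mu**2).
    The drift term dD/dmu = -2*d0*mu keeps the ensemble inside [-1, 1]."""
    drift = -2.0 * d0 * mu
    diffusion = np.sqrt(2.0 * d0 * (1.0 - mu ** 2))
    mu_new = mu + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(mu.shape)
    return np.clip(mu_new, -1.0, 1.0)   # crude reflection at mu = +/-1
```

Starting from a beamed distribution (all mu near 1), repeated application isotropizes the ensemble on a timescale of order 1/(2 d0), the qualitative behavior that pitch-angle scattering produces near shocks.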
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Corato, M., E-mail: marco.decorato@unina.it; Slot, J.J.M., E-mail: j.j.m.slot@tue.nl; Hütter, M., E-mail: m.huetter@tue.nl
In this paper, we present a finite element implementation of fluctuating hydrodynamics with a moving boundary-fitted mesh for treating the suspended particles. The thermal fluctuations are incorporated into the continuum equations using the Landau and Lifshitz approach [1]. The proposed implementation fulfills the fluctuation-dissipation theorem exactly at the discrete level. Since we restrict the equations to the creeping-flow case, this takes the form of a relation between the diffusion coefficient matrix and the friction matrix at both the particle and nodal levels of the finite elements. Brownian motion of arbitrarily shaped particles in complex confinements can be considered within the present formulation. A multi-step time integration scheme is developed to correctly capture the drift term required in the stochastic differential equation (SDE) describing the evolution of the positions of the particles. The proposed approach is validated by simulating the Brownian motion of a sphere between two parallel plates and the motion of a spherical particle in a cylindrical cavity. The time integration algorithm and the fluctuating hydrodynamics implementation are then applied to study the diffusion and the equilibrium probability distribution of a confined circle under an external harmonic potential.
Das, Raibatak; Cairo, Christopher W.; Coombs, Daniel
2009-01-01
The extraction of hidden information from complex trajectories is a continuing problem in single-particle and single-molecule experiments. Particle trajectories are the result of multiple phenomena, and new methods for revealing changes in molecular processes are needed. We have developed a practical technique that is capable of identifying multiple states of diffusion within experimental trajectories. We model single particle tracks for a membrane-associated protein interacting with a homogeneously distributed binding partner and show that, with certain simplifying assumptions, particle trajectories can be regarded as the outcome of a two-state hidden Markov model. Using simulated trajectories, we demonstrate that this model can be used to identify the key biophysical parameters for such a system, namely the diffusion coefficients of the underlying states, and the rates of transition between them. We use a stochastic optimization scheme to compute maximum likelihood estimates of these parameters. We have applied this analysis to single-particle trajectories of the integrin receptor lymphocyte function-associated antigen-1 (LFA-1) on live T cells. Our analysis reveals that the diffusion of LFA-1 is indeed approximately two-state, and is characterized by large changes in cytoskeletal interactions upon cellular activation. PMID:19893741
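The two-state hidden Markov analysis of trajectories can be sketched with a forward-algorithm likelihood; the exponential emission model for squared 2D displacements (mean 4*D*dt per state) and the discrete-time transition matrix below are simplifying assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def forward_loglik(r2_steps, d1, d2, k12, k21, dt):
    """Forward-algorithm log-likelihood of a trajectory under a two-state
    HMM. Emission: squared 2D displacement r2 over time dt is exponential
    with mean 4*D*dt in a state with diffusivity D."""
    means = np.array([4.0 * d1 * dt, 4.0 * d2 * dt])
    T = np.array([[1.0 - k12 * dt, k12 * dt],      # state 1 -> {1, 2}
                  [k21 * dt, 1.0 - k21 * dt]])     # state 2 -> {1, 2}
    alpha = np.array([k21, k12]) / (k12 + k21)     # stationary start
    loglik = 0.0
    for r2 in r2_steps:
        alpha = (alpha @ T) * (np.exp(-r2 / means) / means)
        norm = alpha.sum()
        loglik += np.log(norm)
        alpha /= norm                              # rescale each step
    return loglik
```

Maximizing such a likelihood over (d1, d2, k12, k21), for instance with a stochastic optimizer as the authors do, recovers the state diffusivities and the switching rates.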
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.
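The PSO half of the proposed PSO-SDR is a generic swarm optimizer; a minimal box-constrained sketch (inertia and acceleration coefficients are conventional defaults, not values from the paper) looks like:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box bounds (illustrative)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                                 # per-particle best
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                   # enforce box bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the paper's setting, each particle would encode candidate PS ratios, with the SDR subproblem evaluated inside the objective; here a simple sphere function suffices to show convergence.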
Li, Rui; Dong, Xue; Guo, Jingchao; Fu, Yunfei; Zhao, Chun; Wang, Yu; Min, Qilong
2017-10-23
Mineral dust is the most important natural source of atmospheric ice nuclei (IN), which may significantly mediate the properties of ice clouds through heterogeneous nucleation and thereby have crucial impacts on the hydrological and energy cycles. The potential dust IN effect on cloud top temperature (CTT) in a well-developed mesoscale convective system (MCS) was studied using both satellite observations and cloud-resolving model (CRM) simulations. We combined satellite observations from a passive spectrometer, active cloud radar, and lidar with wind field simulations from the CRM to identify where ice cloud was mixed with dust particles. For a given ice water path, the CTT of dust-mixed cloud is warmer than that of relatively pristine cloud. The probability distribution function (PDF) of CTT for dust-mixed clouds shifted to the warmer end and showed two peaks, at about -45 °C and -25 °C. The PDF for relatively pristine cloud showed only one peak, at -55 °C. Cloud simulations with different microphysical schemes agreed well with each other and showed better agreement with satellite observations for pristine clouds, but they showed large discrepancies for dust-mixed clouds. Some microphysical schemes failed to predict the warm peak of CTT related to heterogeneous ice formation.
N-body simulations of star clusters
NASA Astrophysics Data System (ADS)
Engle, Kimberly Anne
1999-10-01
We investigate the structure and evolution of underfilling (i.e., non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package "Starlab." The GRAPE-4 system is a massively parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator "kira" employs a 4th-order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.
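A shared-timestep version of the 4th-order Hermite predictor-corrector that kira implements can be sketched as follows (kira additionally uses hierarchical block time steps; the softening `eps2` is an illustrative addition, and units are G = 1):

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-6):
    """Accelerations and jerks for all particles (direct O(N^2) sums)."""
    n = len(mass)
    acc = np.zeros((n, 3))
    jerk = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]
            dv = vel[j] - vel[i]
            r2 = dr @ dr + eps2
            r3 = r2 * np.sqrt(r2)
            acc[i] += mass[j] * dr / r3
            jerk[i] += mass[j] * (dv / r3 - 3.0 * (dr @ dv) * dr / (r3 * r2))
    return acc, jerk

def hermite_step(pos, vel, mass, dt):
    """One 4th-order Hermite predictor-corrector step (shared time step)."""
    a0, j0 = acc_jerk(pos, vel, mass)
    # predict positions and velocities with a Taylor expansion
    pos_p = pos + vel * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vel_p = vel + a0 * dt + j0 * dt**2 / 2
    # re-evaluate forces at the predicted state, then correct
    a1, j1 = acc_jerk(pos_p, vel_p, mass)
    vel_c = vel + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    pos_c = pos + (vel + vel_c) * dt / 2 + (a0 - a1) * dt**2 / 12
    return pos_c, vel_c
```

On an equal-mass two-body circular orbit this step conserves energy to high accuracy over many steps, the behavior the 4th-order corrector is designed to deliver.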
NASA Astrophysics Data System (ADS)
Armand J, K. M.
2017-12-01
In this study, version 4 of the Regional Climate Model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC), and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind, and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that, regardless of period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of season. In JAS, model results are similar in the representation of regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that the use of BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.
Toward Hamiltonian Adaptive QM/MM: Accurate Solvent Structures Using Many-Body Potentials.
Boereboom, Jelle M; Potestio, Raffaello; Donadio, Davide; Bulo, Rosa E
2016-08-09
Adaptive quantum mechanical (QM)/molecular mechanical (MM) methods enable efficient molecular simulations of chemistry in solution. Reactive subregions are modeled with an accurate QM potential energy expression while the rest of the system is described in a more approximate manner (MM). As solvent molecules diffuse in and out of the reactive region, they are gradually included into (and excluded from) the QM expression. It would be desirable to model such a system with a single adaptive Hamiltonian, but thus far this has resulted in distorted structures at the boundary between the two regions. Solving this long-standing problem will allow microcanonical adaptive QM/MM simulations that can be used to obtain vibrational spectra and dynamical properties. The difficulty lies in the complex QM potential energy expression, with a many-body expansion that contains higher-order terms. Here, we outline a Hamiltonian adaptive multiscale scheme within the framework of many-body potentials. The adaptive expressions are entirely general and complementary to all standard (nonadaptive) QM/MM embedding schemes available. We demonstrate the merit of our approach on a molecular system defined by two different MM potentials (MM/MM'). For the long-range interactions, a numerical scheme is used (particle mesh Ewald), which yields energy expressions that are many-body in nature. Our Hamiltonian approach is the first to provide both energy conservation and the correct solvent structure everywhere in this system.
Dosanjh, Manjit; Cirilli, Manuela; Navin, Sparsh
2015-01-01
Between 2011 and 2015, the ENTERVISION Marie Curie Initial Training Network has been training 15 young researchers from a variety of backgrounds on topics ranging from in-beam Positron Emission Tomography or Single Particle Tomography techniques, to adaptive treatment planning, optical imaging, Monte Carlo simulations and biological phantom design. This article covers the main research activities, as well as the training scheme implemented by the participating institutes, which included academia, research, and industry. PMID:26697403
Dynamic downscaling over western Himalayas: Impact of cloud microphysics schemes
NASA Astrophysics Data System (ADS)
Tiwari, Sarita; Kar, Sarat C.; Bhatla, R.
2018-03-01
Due to the lack of observation data in the inhomogeneous terrain of the Himalayas, the detailed climate of the Himalayas is still unknown. Global reanalysis data are too coarse to represent the hydroclimate over the region with sharp orography gradients in the western Himalayas. In the present study, dynamic downscaling of the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis-Interim (ERA-I) dataset over the western Himalayas using the high-resolution Weather Research and Forecasting (WRF) model has been carried out. Sensitivity studies have also been carried out using convection and microphysics parameterization schemes. The WRF model simulations have been compared against ERA-I and available station observations. Analysis of the results suggests that the WRF model has simulated the hydroclimate of the region well. It is found that the impact of the convection scheme is greater during summer months than in winter. Examination of simulated results using various microphysics schemes reveals that the WRF single-moment class-6 (WSM6) scheme simulates more precipitation over the upwind region of the high mountains than the Morrison and Thompson schemes during the winter period. Vertical distribution of various hydrometeors shows that there are large differences in mixing ratios of ice, snow and graupel in the simulations with different microphysics schemes. The ice mixing ratio in the Morrison scheme is greater than in WSM6 above 400 hPa. The Thompson scheme favors the formation of more snow than the WSM6 or Morrison schemes, while the Morrison scheme has more graupel formation than the other schemes.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
Towards information-optimal simulation of partial differential equations.
Leike, Reimar H; Enßlin, Torsten A
2018-03-01
Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach: the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information-theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes at the same resolution. The IFD scheme, however, has to be correctly informed about the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes such as spectral Fourier-Galerkin methods. We discuss implications of the approximations made.
Toon, Owen B.; Bardeen, Charles G.; Mills, Michael J.; Fan, Tianyi; English, Jason M.; Neely, Ryan R.
2015-01-01
A sectional aerosol model (CARMA) has been developed and coupled with the Community Earth System Model (CESM1). Aerosol microphysics, radiative properties, and interactions with clouds are simulated in the size‐resolving model. The model described here uses 20 particle size bins for each aerosol component including freshly nucleated sulfate particles, as well as mixed particles containing sulfate, primary organics, black carbon, dust, and sea salt. The model also includes five types of bulk secondary organic aerosols with four volatility bins. The overall cost of CESM1‐CARMA is approximately ∼2.6 times as much computer time as the standard three‐mode aerosol model in CESM1 (CESM1‐MAM3) and twice as much computer time as the seven‐mode aerosol model in CESM1 (CESM1‐MAM7) using similar gas phase chemistry codes. Aerosol spatial‐temporal distributions are simulated and compared with a large set of observations from satellites, ground‐based measurements, and airborne field campaigns. Simulated annual average aerosol optical depths are lower than MODIS/MISR satellite observations and AERONET observations by ∼32%. This difference is within the uncertainty of the satellite observations. CESM1/CARMA reproduces sulfate aerosol mass within 8%, organic aerosol mass within 20%, and black carbon aerosol mass within 50% compared with a multiyear average of the IMPROVE/EPA data over the United States, but differences vary considerably at individual locations. Other data sets show similar levels of comparison with model simulations. The model suggests that in addition to sulfate, organic aerosols also significantly contribute to aerosol mass in the tropical UTLS, which is consistent with limited data. PMID:27668039
Sampling the isothermal-isobaric ensemble by Langevin dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Xingyu; Institute of Applied Physics and Computational Mathematics, Fenghao East Road 2, Beijing 100094; CAEP Software Center for High Performance Numerical Simulation, Huayuan Road 6, Beijing 100088
2016-03-28
We present a new method of conducting fully flexible-cell molecular dynamics simulations in the isothermal-isobaric ensemble based on Langevin equations of motion. The stochastic coupling to all particle and cell degrees of freedom is introduced in a correct way, in the sense that the stationary configurational distribution is proved to be consistent with that of the isothermal-isobaric ensemble. In order to apply the proposed method in computer simulations, a second-order symmetric numerical integration scheme is developed by Trotter's splitting of the single-step propagator. Moreover, a practical guide for choosing working parameters is suggested for user-specified thermo- and baro-coupling time scales. The method and software implementation are carefully validated by a numerical example.
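The abstract does not spell out the integrator itself; as a hedged illustration of the Trotter-splitting idea in the simpler constant-volume (NVT) setting, a symmetric BAOAB-style Langevin step might look like the following sketch (the function name and parameters are illustrative, not taken from the paper):

```python
import math
import random

def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
    """One symmetric Trotter-split Langevin step (BAOAB ordering):
    half kick (B), half drift (A), exact Ornstein-Uhlenbeck update (O),
    half drift (A), half kick (B)."""
    v += 0.5 * dt * force(x) / mass              # B: half kick
    x += 0.5 * dt * v                            # A: half drift
    c = math.exp(-gamma * dt)                    # O: exact OU damping
    v = c * v + math.sqrt((1.0 - c * c) * kT / mass) * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * v                            # A: half drift
    v += 0.5 * dt * force(x) / mass              # B: half kick
    return x, v
```

With gamma = 0 the noise vanishes and the step reduces to velocity Verlet, a convenient consistency check on any such splitting.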
Transient Plasma Photonic Crystals for High-Power Lasers.
Lehmann, G; Spatschek, K H
2016-06-03
A new type of transient photonic crystal for high-power lasers is presented. The crystal is produced by counterpropagating laser beams in plasma. Trapped electrons and electrically forced ions generate a strong density grating. The lifetime of the transient photonic crystal is determined by the ballistic motion of the ions. The robustness of the photonic crystal allows one to manipulate high-intensity laser pulses. The scheme of the crystal is analyzed here by 1D Vlasov simulations. Reflection and transmission of high-power laser pulses are predicted by particle-in-cell simulations. It is shown that a transient plasma photonic crystal may act as a tunable mirror for intense laser pulses. Generalizations to 2D and 3D configurations are possible.
Probabilistic Teleportation of One-Particle State of S-level
NASA Astrophysics Data System (ADS)
Yan, Feng-Li; Bai, Yan-Kui
2003-09-01
A scheme for probabilistically teleporting an unknown one-particle S-level state via a group of pairs of partially entangled 2-level particle states is proposed. In this scheme, unitary transformation and local measurement take the place of Bell-state measurement; then a proper unitary transformation and a measurement on an auxiliary qubit, with the aid of classical communication, are performed. In this way the unknown one-particle S-level state can be transferred onto a group of remote 2-level particles with a certain probability. Furthermore, the receiver can recover the initial signal state on an S-level particle at hand. The project was supported by the Natural Science Foundation of Hebei Province of China.
The Cirrus Parcel Model Comparison Project. Phase 1
NASA Technical Reports Server (NTRS)
Lin, Ruei-Fong; Starr, D.; DeMott, P.; Cotten, R.; Jensen, E.; Sassen, K.
2000-01-01
The Cirrus Parcel Model Comparison Project involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase 1 of the project reported here, simulated cirrus cloud microphysical properties are compared for situations of "warm" (-40 C) and "cold" (-60 C) cirrus subject to updrafts of 4, 20 and 100 centimeters per second, respectively. Five models are participating in the project. These models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins. Simulations are made including both homogeneous and heterogeneous ice nucleation mechanisms. A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. To isolate the treatment of the homogeneous freezing (of haze drops) nucleation process, the heterogeneous nucleation mechanism is disabled for a second parallel set of simulations. Qualitative agreement is found amongst the models for the homogeneous-nucleation-only simulations, e.g., the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, non-negligible quantitative differences are found. Systematic bias exists between results of a model based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each approach is constrained by critical freezing data from laboratory studies. This information is necessary, but not sufficient, to construct consistent formulae for the two approaches. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (20-100 centimeters per second) at -60 C when the commonly invoked equilibrium assumption is lifted.
The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice nucleation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of ice number concentration and ice crystal diffusional growth rate, partially controls the peak nucleation rate achieved in an air parcel and the duration of the active nucleation time period.
Dynamics of Gas Near the Galactic Centre
NASA Astrophysics Data System (ADS)
Jenkins, A.; Binney, J.
1994-10-01
We simulate the flow of gas in the Binney et al. model of the bar at the centre of the Milky Way. We argue that the flow of a clumpy interstellar medium is most realistically simulated by a sticky-particle scheme, and investigate two such schemes. In both schemes orbits close to the cusped orbit rapidly become depopulated. This depopulation places a lower limit on the pattern speed, since it implies that in the (l, v) plane the cusped orbit lies significantly inside the peak of the H I terminal-velocity envelope at l ≈ 20°. We find that the size of the central molecular disc and the magnitudes of the observed forbidden velocities constrain the eccentricity of the Galactic bar to values similar to that arbitrarily assumed by Binney et al. We study the accretion by the nuclear disc of matter shed by dying bulge stars. We estimate that mass loss by the bulge can replenish the H I in the nuclear disc within two bar rotation periods, in good agreement with the predictions of the simulations. When accretion of gas from the bulge is included, fine-scale irregular structure persists in the nuclear disc. This structure gives rise to features in longitude-velocity plots which depend significantly on viewing angle, and consequently give rise to asymmetries in longitude. These asymmetries are, however, much less pronounced than those in the observational plots. We conclude that the addition of hydrodynamics to the Binney et al. model does not resolve some important discrepancies between theory and observation. The model's basic idea does, however, have a high a priori probability and has enjoyed some significant successes, while a number of potentially important physical processes - most notably the self-gravity of interstellar gas - are neglected in the present simulations. In view of the deficiencies of our simulations and interesting parallels we do observe between simulated and observational longitude-velocity plots, we believe it would be premature to reject the Binney et al.
model prior to exploring high-quality three-dimensional simulations that include self-gravitating stars and gas. Key words: accretion, accretion discs - ISM: kinematics and dynamics - ISM: structure -Galaxy: centre - Galaxy: kinematics and dynamics - radio lines: ISM.
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle filtering based tracking process. The scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to the target's. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit the segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
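A minimal sketch of such a segmentation-driven proposal step, under the assumption that the motion segmentation supplies blob centres (all names and the plain random-walk dynamics are illustrative, not the paper's actual model):

```python
import random

def propose_particles(particles, blob_centres, frac, sigma, rng):
    """Importance sampling driven by motion segmentation: with probability
    `frac` a hypothesis is redrawn around a blob whose motion resembles the
    target's; otherwise it follows the usual dynamical prior (here a plain
    Gaussian random walk)."""
    proposed = []
    for (x, y) in particles:
        if blob_centres and rng.random() < frac:
            bx, by = rng.choice(blob_centres)  # jump toward a region of interest
            proposed.append((rng.gauss(bx, sigma), rng.gauss(by, sigma)))
        else:
            proposed.append((rng.gauss(x, sigma), rng.gauss(y, sigma)))
    return proposed
```

In a full filter, particles proposed from the blobs would carry importance weights correcting for the difference between this proposal and the dynamical prior.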
NASA Astrophysics Data System (ADS)
Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu
2015-06-01
We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels, respectively. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's place with certain probability. Compared with the previously existing schemes, the success probability in the current schemes is greatly increased. Moreover, the required classical communication cost is calculated as well. Further, several attractive discussions on the properties of the presented schemes, including the success probability and reducibility, are made. Remarkably, the proposed schemes can be faithfully achieved with unity total success probability when the employed channels are reduced into maximally entangled ones.
A hybrid Lagrangian Voronoi-SPH scheme
NASA Astrophysics Data System (ADS)
Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.
2018-07-01
A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary conditions implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used. This zone is formed by a set of particles where fields are interpolated taking into account SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.
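The abstract does not give the coupling operator explicitly; a plausible one-dimensional sketch of the buffer-zone interpolation, using a Shepard-normalized compact-support weight over both neighbour sets (the function name, weight choice, and data layout are all assumptions):

```python
def buffer_interpolate(xp, sph_neighbors, voronoi_neighbors, h):
    """Shepard-style blended interpolation at a buffer particle: the field
    value is a kernel-weighted average over both SPH particle and Voronoi
    element neighbours, each given as (position, field value) pairs."""
    num = den = 0.0
    for x, f in list(sph_neighbors) + list(voronoi_neighbors):
        q = abs(xp - x) / h                 # normalized distance
        w = max(0.0, 1.0 - q) ** 2          # simple compact-support weight
        num += w * f
        den += w
    return num / den if den > 0.0 else 0.0
```

The Shepard normalization guarantees that a constant field is reproduced exactly across the buffer, which is the minimal consistency one would demand of any such coupling.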
Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin
2015-09-08
Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy biasing scheme allows a particle to be inserted into (or removed from) a region that is more acceptable. As a result, the new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
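A minimal sketch of an energy-biased region selection in this spirit (the region decomposition and names are hypothetical; the actual AVBMC/UB machinery is considerably more involved):

```python
import math
import random

def pick_insertion_region(region_energies, beta, rng):
    """Select a candidate insertion region with probability proportional to
    exp(-beta * U_region). The returned selection probability must enter the
    acceptance rule to preserve detailed balance."""
    weights = [math.exp(-beta * u) for u in region_energies]
    total = sum(weights)
    r = rng.random() * total
    running = 0.0
    for i, w in enumerate(weights):
        running += w
        if r <= running:
            return i, w / total
    return len(weights) - 1, weights[-1] / total  # guard against round-off
```

Insertions are thereby steered toward low-energy (more acceptable) regions, while the bias factor w/total keeps the sampled distribution unchanged.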
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architectures for prognostics are an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics: particle filters. In spite of being computation- and memory-intensive, particle filters lend themselves well to distributed implementation, except for one significant step: resampling. We propose a new resampling scheme, called parameterized resampling, that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes, in the context of minimizing communication overhead, are also presented. A battery health management system is used as the target application. Our proposed resampling scheme performs significantly better than existing schemes by reducing both the communication message length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as a full implementation of the new schemes on Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
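For context, the baseline any communication-aware variant must match: standard systematic resampling needs only one shared uniform draw plus the normalized cumulative weights, a property often exploited to cut messaging between nodes (a sketch of the standard algorithm; parameterized resampling itself is not reproduced here):

```python
def systematic_resample(weights, u0):
    """Systematic resampling: a single uniform offset u0 in [0, 1) yields N
    stratified positions, so cooperating nodes need agree only on u0 and the
    cumulative weights rather than exchange N independent random draws."""
    n = len(weights)
    total = sum(weights)
    cumulative, s = [], 0.0
    for w in weights:
        s += w / total
        cumulative.append(s)
    indices, i = [], 0
    for k in range(n):
        p = (u0 + k) / n            # stratified position in [0, 1)
        while cumulative[i] < p:
            i += 1
        indices.append(i)           # particle i is replicated
    return indices
```

Because both loops are a single pass, the scheme is O(N) and deterministic given (weights, u0), which also simplifies reproducing a resampling step across nodes.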
NASA Astrophysics Data System (ADS)
Dai, Hong-Yi; Kuang, Le-Man; Li, Cheng-Zu
2005-07-01
We propose a scheme to probabilistically teleport an unknown arbitrary three-level two-particle state by using two partially entangled three-level two-particle states as the quantum channel. The classical communication cost required in the ideal probabilistic teleportation process is also calculated. This scheme can be directly generalized to teleport an unknown, arbitrary three-level K-particle state by using K partially entangled three-level two-particle states as the quantum channel. The project was supported by the National Fundamental Research Program of China under Grant No. 2001CB309310 and the National Natural Science Foundation of China under Grant Nos. 10404039 and 10325523.
An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.; Barnes, D. C.
2011-08-01
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme.
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
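The essence of a nonlinearly converged push can be illustrated with a Picard-iterated, time-centred (Crank-Nicolson) update for a single particle in a prescribed field; this is a hedged sketch of the general idea, not the paper's Jacobian-free Newton-Krylov solver or its orbit-averaged VA field coupling:

```python
def implicit_push(x0, v0, efield, dt, qm, tol=1e-12, max_iter=100):
    """Picard iteration of a time-centred particle push: the new position
    and velocity are defined implicitly through midpoint values and iterated
    to a tight nonlinear tolerance."""
    x1, v1 = x0, v0
    for _ in range(max_iter):
        xm = 0.5 * (x0 + x1)                  # time-centred position
        v_new = v0 + dt * qm * efield(xm)     # acceleration at the midpoint
        x_new = x0 + dt * 0.5 * (v0 + v_new)  # time-centred velocity
        if abs(x_new - x1) < tol and abs(v_new - v1) < tol:
            break
        x1, v1 = x_new, v_new
    return x_new, v_new
```

For a linear field the converged update is the implicit midpoint rule, which preserves the oscillator energy to the iteration tolerance regardless of step size, a toy analogue of the exact conservation properties claimed above.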
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software comprises 1) model equations, 2) boundary conditions, and 3) calculation schemes. A model description file is useful for the first point and partly for the second; the third, however, is difficult to handle for the variety of calculation schemes required by simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this system, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
NASA Astrophysics Data System (ADS)
Mingari, Leonardo A.; Collini, Estela A.; Folch, Arnau; Báez, Walter; Bustos, Emilce; Soledad Osores, María; Reckziegel, Florencia; Alexander, Peter; Viramonte, José G.
2017-06-01
On 13 June 2015, the London Volcanic Ash Advisory Centre (VAAC) warned the Buenos Aires VAAC about a possible volcanic eruption from the Nevados Ojos del Salado volcano (6879 m), located in the Andes mountain range on the border between Chile and Argentina. A volcanic ash cloud was detected by the SEVIRI instrument on board the Meteosat Second Generation (MSG) satellites from 14:00 UTC on 13 June. In this paper, we provide the first comprehensive description of this event through observations and numerical simulations. Our results support the hypothesis that the phenomenon was caused by wind remobilization of ancient pyroclastic deposits (ca. 4.5 ka Cerro Blanco eruption) from the Bolsón de Fiambalá (Fiambalá Basin) in northwestern Argentina. We have investigated the spatiotemporal distribution of aerosols and the emission process over complex terrain to gain insight into the key role played by the orography and the condition that triggered the long-range transport episode. Numerical simulations of windblown dust were performed using the ARW (Advanced Research WRF) core of the WRF (Weather Research and Forecasting) model (WRF-ARW) and FALL3D modeling system with meteorological fields downscaled to a spatial resolution of 2 km in order to resolve the complex orography of the area. Results indicate that favorable conditions to generate dust uplifting occurred in northern Fiambalá Basin, where orographic effects caused strong surface winds. According to short-range numerical simulations, dust particles were confined to near-ground layers around the emission areas. In contrast, dust aerosols were injected up to 5-6 km high in central and southern regions of the Fiambalá Basin, where intense ascending airflows are driven by horizontal convergence. Long-range transport numerical simulations were also performed to model the dust cloud spreading over northern Argentina. Results of simulated vertical particle column mass were compared with the MSG-SEVIRI retrieval product. 
We tested two numerical schemes: with the default configuration of the FALL3D model we had difficulty simulating transport across orographic barriers, whereas an alternative configuration, using a numerical scheme that computes horizontal advection more accurately over steep terrain, substantially improved the model performance.
User guide for MODPATH version 6 - A particle-tracking model for MODFLOW
Pollock, David W.
2012-01-01
MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
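The semianalytical step at the heart of MODPATH can be sketched in one dimension. The following is a minimal illustration of Pollock's scheme, not the MODPATH code itself; the function name and the 1-D setup are illustrative. Velocity is assumed to vary linearly across a cell, so the particle's path and its travel time to a cell face have closed-form expressions.

```python
import math

def time_to_face(face, x1, x2, v1, v2, xp):
    """Semianalytical travel time from xp to a cell face (Pollock-style step).

    Within the cell [x1, x2] the velocity is assumed to vary linearly,
        v(x) = v1 + A*(x - x1),  A = (v2 - v1)/(x2 - x1),
    so dx/dt = v(x) integrates exactly to t = ln(v(face)/v(xp)) / A.
    (Assumes v1 != v2; the A -> 0 limit reduces to constant velocity.)
    """
    A = (v2 - v1) / (x2 - x1)
    v_p = v1 + A * (xp - x1)      # interpolated velocity at the particle
    v_f = v1 + A * (face - x1)    # interpolated velocity at the exit face
    return math.log(v_f / v_p) / A

# particle at the left face of cell [0, 1], velocity rising from 1 to 2:
print(time_to_face(1.0, 0.0, 1.0, 1.0, 2.0, 0.0))  # ln(2), about 0.6931
```

Tracking a path then amounts to repeating this step cell by cell until a boundary, sink, or other termination criterion is reached.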
Energy spread minimization in a cascaded laser wakefield accelerator via velocity bunching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhijun; Li, Wentao; Wang, Wentao
2016-05-15
We propose a scheme to minimize the energy spread of an electron beam (e-beam) in a cascaded laser wakefield accelerator to the one-thousandth level by inserting a stage to compress its longitudinal spatial distribution. In this scheme, three-segment plasma stages are designed for electron injection, e-beam length compression, and e-beam acceleration, respectively. The trapped e-beam in the injection stage is transferred to the zero-phase region at the center of one wakefield period in the compression stage where the length of the e-beam can be greatly shortened owing to the velocity bunching. After being seeded into the third stage for acceleration, the e-beam can be accelerated to a much higher energy before its energy chirp is compensated owing to the shortened e-beam length. A one-dimensional theory and two-dimensional particle-in-cell simulations have demonstrated this scheme and an e-beam with 0.2% rms energy spread and low transverse emittance could be generated without loss of charge.
Two particle tracking and detection in a single Gaussian beam optical trap.
Praveen, P; Yogesha; Iyengar, Shruthi S; Bhattacharya, Sarbari; Ananthamurthy, Sharath
2016-01-20
We have studied in detail the situation wherein two microbeads are trapped axially in a single-beam Gaussian intensity profile optical trap. We find that the corner frequency extracted from a power spectral density analysis of intensity fluctuations recorded on a quadrant photodetector (QPD) is dependent on the detection scheme. Using forward- and backscattering detection schemes with single and two laser wavelengths along with computer simulations, we conclude that fluctuations detected in backscattering bear true position information of the bead encountered first in the beam propagation direction. Forward scattering, on the other hand, carries position information of both beads, with substantial contribution from the bead encountered first along the beam propagation direction. Mie scattering analysis further reveals that the interference term from the scattering of the two beads contributes significantly to the signal, precluding the ability to resolve the positions of the individual beads in forward scattering. In QPD-based detection schemes, backscattering detection is therefore imperative for tracking the true displacements of axially trapped microbeads in possible studies on light-mediated interbead interactions.
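The corner-frequency extraction mentioned above can be illustrated with a toy model; this is not the authors' analysis pipeline. A trapped bead in the overdamped regime follows an Ornstein-Uhlenbeck process whose power spectrum is a Lorentzian with corner frequency f_c, so f_c can be recovered from the signal's correlation time (here estimated from the lag-1 autocorrelation rather than a full spectral fit).

```python
import math
import numpy as np

def simulate_trapped_bead(f_c, dt, n, rng):
    """Overdamped bead in a harmonic trap: an Ornstein-Uhlenbeck process
    whose power spectrum is a Lorentzian with corner frequency f_c."""
    tau = 1.0 / (2.0 * math.pi * f_c)        # relaxation time of the trap
    a = math.exp(-dt / tau)                  # exact OU update coefficient
    noise = rng.standard_normal(n) * math.sqrt(1.0 - a * a)
    x = np.empty(n)
    x[0] = noise[0]
    for i in range(1, n):
        x[i] = a * x[i - 1] + noise[i]
    return x

def corner_frequency(x, dt):
    """Estimate f_c from the lag-1 autocorrelation of the position signal."""
    r1 = float(np.dot(x[:-1], x[1:]) / np.dot(x, x))
    tau = -dt / math.log(r1)
    return 1.0 / (2.0 * math.pi * tau)

rng = np.random.default_rng(0)
x = simulate_trapped_bead(f_c=100.0, dt=1e-4, n=100_000, rng=rng)
print(corner_frequency(x, 1e-4))   # close to the true 100 Hz
```

In practice the corner frequency is usually obtained by fitting a Lorentzian to the measured power spectral density; the autocorrelation shortcut above is just the time-domain equivalent.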
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
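The kernel/nearest-neighbour combination described above can be sketched in a simplified 1-D form. This is an illustrative adaptive kernel density estimate, not the authors' exact estimator: each particle's smoothing length is set from the distance to its k-th nearest neighbour, so sparse (low-density) regions automatically receive more smoothing.

```python
import numpy as np

def adaptive_density_1d(x, m, k=5, eta=1.2):
    """SPH-style density with per-particle smoothing lengths chosen
    adaptively from the distance to the k-th nearest neighbour
    (illustrative C^1 kernel, 1-D; k and eta are tunable parameters)."""
    x = np.asarray(x, dtype=float)
    rho = np.zeros(len(x))
    for i in range(len(x)):
        d = np.abs(x - x[i])
        h = eta * np.sort(d)[k]                      # sparse regions get larger h
        q = d / h
        w = np.where(q < 1.0, (1.0 - q**2)**2, 0.0)  # compact-support kernel
        rho[i] = np.sum(m * w) * 15.0 / (16.0 * h)   # 1-D kernel normalization
    return rho

# uniform particle distribution with unit density: particle mass m = dx * rho0
x = np.linspace(0.0, 1.0, 101)
rho = adaptive_density_1d(x, m=0.01)
print(rho[50])   # close to 1.0 away from the boundaries
```

The same estimate carries over to scattered 2-D and 3-D data by replacing the kernel normalization and neighbour search.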
A unified gas-kinetic scheme for continuum and rarefied flows IV: Full Boltzmann and model equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chang, E-mail: cliuaa@ust.hk; Xu, Kun, E-mail: makxu@ust.hk; Sun, Quanhua, E-mail: qsun@imech.ac.cn
Fluid dynamic equations are valid in their respective modeling scales, such as the particle mean free path scale of the Boltzmann equation and the hydrodynamic scale of the Navier–Stokes (NS) equations. As the modeling scale varies, there should theoretically be a continuous spectrum of fluid dynamic equations. Even though the Boltzmann equation is claimed to be valid in all scales, many Boltzmann solvers, including the direct simulation Monte Carlo method, require cell resolution on the order of the particle mean free path; they are therefore still single-scale methods. In order to study multiscale flow evolution efficiently, the dynamics in the computational fluid has to change with the scale. A direct modeling of flow physics with a changeable scale may become an appropriate approach. The unified gas-kinetic scheme (UGKS) is a direct modeling method at the mesh size scale, and its underlying flow physics depends on the resolution of the cell size relative to the particle mean free path. The cell size of UGKS is not limited by the particle mean free path. With the variation of the ratio between the numerical cell size and the local particle mean free path, the UGKS recovers the flow dynamics from particle transport and collision in the kinetic scale to wave propagation in the hydrodynamic scale. The previous UGKS is mostly constructed from the evolution solution of kinetic model equations. Even though the UGKS is very accurate and effective in the low-transition and continuum flow regimes, where the time step is much larger than the particle mean free time, there is still room to develop a more accurate flow solver in the regime where the time step is comparable to the local particle mean free time. In such a scale, the dynamics of the full Boltzmann collision term differs from that of the model equations.
This work is about the further development of the UGKS with the implementation of the full Boltzmann collision term in the region where it is needed. The central ingredient of the UGKS is the coupled treatment of particle transport and collision in the flux evaluation across a cell interface, where a continuous flow dynamics from the kinetic to the hydrodynamic scale is modeled. The newly developed UGKS has the asymptotic preserving (AP) property of recovering the NS solutions in the continuum flow regime, and the full Boltzmann solution in the rarefied regime. In the mostly unexplored transition regime, the UGKS itself provides a valuable tool for the study of non-equilibrium flows. The mathematical properties of the scheme, such as stability, accuracy, and asymptotic preservation, are analyzed in this paper as well.
Radiation reaction effect on laser driven auto-resonant particle acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagar, Vikram; Sengupta, Sudip; Kaw, P. K.
2015-12-15
The effects of the radiation reaction force on a laser driven auto-resonant particle acceleration scheme are studied using the Landau-Lifshitz equation of motion. These studies are carried out for both linearly and circularly polarized laser fields in the presence of a static axial magnetic field. From the parametric study, a radiation reaction dominated region has been identified in which the particle dynamics is greatly affected by this force. In the radiation reaction dominated region, two significant effects on the particle dynamics are seen: (1) saturation of the energy gain of an initially resonant particle and (2) net energy gain by an initially non-resonant particle, caused by resonance broadening. It has been further shown that, with the relaxation of the resonance condition and an optimum choice of parameters, this scheme may become competitive with other present-day laser driven particle acceleration schemes. The quantum corrections to the Landau-Lifshitz equation of motion have also been taken into account. The difference in the energy gain estimates of the particle between the quantum-corrected and classical Landau-Lifshitz equations is found to be insignificant for present-day as well as upcoming laser facilities.
Simulation of the West African Monsoon using the MIT Regional Climate Model
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Gianotti, Rebecca L.; Eltahir, Elfatih A. B.
2013-04-01
We test the performance of the MIT Regional Climate Model (MRCM) in simulating the West African Monsoon. MRCM introduces several improvements over Regional Climate Model version 3 (RegCM3), including coupling of the Integrated Biosphere Simulator (IBIS) land surface scheme, a new albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Using MRCM, we carried out a series of experiments implementing two different land surface schemes (IBIS and BATS) and three convection schemes (Grell with the Fritsch-Chappell closure, standard Emanuel, and modified Emanuel including the new convective cloud scheme). Our analysis primarily focused on comparing the precipitation characteristics, surface energy balance, and large-scale circulations against various observations. We document a significant sensitivity of the West African monsoon simulation to the choice of land surface and convection schemes. In spite of several deficiencies, the simulation combining the IBIS and modified Emanuel schemes shows the best performance, reflected in a marked improvement of precipitation in terms of spatial distribution and monsoon features. In particular, the coupling of IBIS leads to representations of the surface energy balance and partitioning that are consistent with observations. The major components of the surface energy budget (including radiation fluxes) in the IBIS simulations are therefore in better agreement with observations than those from our BATS simulation, or from previous similar studies (e.g., Steiner et al., 2009), both qualitatively and quantitatively. The IBIS simulations also reasonably reproduce the vertically stratified structure of the atmospheric circulation, with three major components: the westerly monsoon flow, the African Easterly Jet (AEJ), and the Tropical Easterly Jet (TEJ). 
In addition, since the modified Emanuel scheme tends to reduce the precipitation amount, it improves the precipitation over regions suffering from systematic wet bias.
Conservative properties of finite difference schemes for incompressible flow
NASA Technical Reports Server (NTRS)
Morinishi, Youhei
1995-01-01
The purpose of this research is to construct accurate finite difference schemes for incompressible unsteady flow simulations such as LES (large-eddy simulation) or DNS (direct numerical simulation). In this report, conservation properties of the continuity, momentum, and kinetic energy equations for incompressible flow are specified as analytical requirements for a proper set of discretized equations. Existing finite difference schemes in staggered grid systems are checked for satisfaction of the requirements. Proper higher order accurate finite difference schemes in a staggered grid system are then proposed. Plane channel flow is simulated using the proposed fourth order accurate finite difference scheme and the results compared with those of the second order accurate Harlow and Welch algorithm.
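The kinetic-energy conservation requirement discussed above can be demonstrated with a toy 1-D periodic example; this is illustrative, not the paper's staggered-grid schemes. A central discretization of the Burgers convective term built as an energy-conserving blend of the divergence form d(u^2)/dx and the advective form u*du/dx does no net work on the discrete kinetic energy.

```python
import numpy as np

def convection_energy_conserving(u, dx):
    """Central discretization of the convective term u*du/dx as a
    1/3 : 1/3 blend of the divergence form d(u^2)/dx and the advective
    form u*du/dx. Continuously, (1/3)*(2*u*u_x) + (1/3)*u*u_x = u*u_x,
    and on a periodic grid sum(u_i * N_i) telescopes to exactly zero."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, +1)   # u_{i-1}
    return ((up**2 - um**2) + u * (up - um)) / (6.0 * dx)

rng = np.random.default_rng(1)
u = rng.standard_normal(64)
work = np.sum(u * convection_energy_conserving(u, dx=0.1))
print(abs(work))   # zero to round-off: convection conserves kinetic energy
```

A purely divergence-form or purely advective-form central scheme fails this test, which is the kind of discrete-conservation defect the report screens for.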
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
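The variance-reduction idea can be illustrated with a 1-D toy problem; this is a sketch of the importance-sampling principle, not the paper's full Fokker–Planck scheme. To estimate the mean velocity of a slightly drifted Maxwellian, one samples from the zero-drift equilibrium and attaches weights w = f/g; because the expectation of the velocity under equilibrium is known to vanish, only the small O(u) deviation has to be estimated stochastically, which slashes the variance when the drift is much smaller than the thermal speed.

```python
import numpy as np

# Estimate the mean velocity u of a drifted Maxwellian f ~ N(u, 1), with
# u much smaller than the thermal speed, from samples of the equilibrium
# g ~ N(0, 1) carrying importance weights w = f/g.
u, n = 0.01, 100_000
rng = np.random.default_rng(2)
v = rng.standard_normal(n)                 # equilibrium samples from g
w = np.exp(u * v - 0.5 * u * u)            # f(v)/g(v) for unit variance

direct = np.mean(w * v)                    # plain importance estimator
# E_g[v] = 0 exactly, so subtracting it leaves only the O(u) deviation:
deviational = np.mean((w - 1.0) * v)

print(deviational)                                # close to u = 0.01
print(np.std((w - 1.0) * v) / np.std(w * v))      # large variance reduction
```

The ratio printed last is the key point: the deviational estimator's noise scales with the drift itself, mirroring the paper's observation that the estimator variance becomes independent of the characteristic speed as it decreases.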
Dynamical origin of non-thermal states in galactic filaments
NASA Astrophysics Data System (ADS)
Di Cintio, Pierfrancesco; Gupta, Shamik; Casetti, Lapo
2018-03-01
Observations strongly suggest that filaments in galactic molecular clouds are in a non-thermal state. As a simple model of a filament, we study a two-dimensional system of self-gravitating point particles by means of numerical simulations of the dynamics, with various methods: direct N-body integration of the equations of motion, particle-in-cell simulations, and a recently developed numerical scheme that includes multiparticle collisions in a particle-in-cell approach. Studying the collapse of Gaussian overdensities, we find that after the damping of virial oscillations the system settles in a non-thermal steady state whose radial density profile is similar to the observed ones, thus suggesting a dynamical origin of the non-thermal states observed in real filaments. Moreover, for sufficiently cold collapses, the density profiles are anticorrelated with the kinetic temperature, i.e. exhibit temperature inversion, again a feature that has been found in some observations of filaments. The same happens in the state reached after a strong perturbation of an initially isothermal cylinder. Finally, we discuss our results in the light of recent findings in other contexts (including non-astrophysical ones) and argue that the same kind of non-thermal states may be observed in any physical system with long-range interactions.
Efficient Conservative Reformulation Schemes for Lithium Intercalation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urisanga, PC; Rife, D; De, S
Porous electrode theory coupled with transport and reaction mechanisms is a widely used technique to model Li-ion batteries, employing an appropriate discretization or approximation for solid phase diffusion within electrode particles. One of the major difficulties in simulating Li-ion battery models is the need to account for solid phase diffusion in a second radial dimension r, which increases the computation time/cost to a great extent. Various methods that reduce the computational cost have been introduced to treat this phenomenon, but most of them do not guarantee mass conservation. The aim of this paper is to introduce an inherently mass conserving yet computationally efficient method for solid phase diffusion based on Lobatto IIIA quadrature. This paper also presents coupling of the new solid phase reformulation scheme with a macro-homogeneous porous electrode theory based pseudo-2D model for a Li-ion battery.
Mitigate the tent-induced perturbation in ignition capsules by supersonic radiation propagation
NASA Astrophysics Data System (ADS)
Dai, Zhensheng; Gu, Jianfa; Zheng, Wudi
2017-10-01
In the inertial confinement fusion (ICF) scheme, to trap the alpha particle products of the D-T reaction, the capsules need to be imploded and compressed with high symmetry. In the laser indirect drive scheme, the capsules are held at the center of high-Z hohlraums by thin membranes (tents). However, the tents are recognized as one of the most important contributors to hot spot asymmetries, areal density perturbations, and reduced performance. To improve the capsule implosion performance, various alternatives such as micro-scale rods, a larger fill-tube, and a low-density foam layer around the capsule have been presented. Our simulations show that the radiation propagates supersonically in the low-density foam layer and starts to ablate the capsule before the perturbations induced by the tents reach the ablation fronts. The tent-induced perturbations are remarkably weakened while propagating in the blow-off plasma.
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to (a) realization of tensor representations of numerical schemes for direct simulation; (b) representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); (c) computing operations in the projections of these coordinate systems, with direct and inverse transformations between them. Particular attention is paid to the use of the hardware and software of modern computer systems.
NASA Astrophysics Data System (ADS)
Zhao, Xue-Yan; Xie, Bai-Song; Wu, Hai-Cheng; Zhang, Shan; Hong, Xue-Ren; Aimidula, Aimierding
2012-03-01
An optimized alternative scheme for electron injection and acceleration in the wake bubble driven by an ultraintense laser pulse is presented. In this scheme, a dense-plasma wall with an inner diameter matching the expected bubble size is placed along the laser propagation direction. Meanwhile, a dense-plasma block is attached transversely to the inner side of the wall at a certain position. Particle-in-cell simulations are performed, which demonstrate that the block plays an important role in the first electron injection and acceleration. The result shows that a collimated electron bunch with a total number of about 4.04×10⁸ μm⁻¹ can be generated and accelerated stably to a 1.61 GeV peak energy with 2.6% energy spread. By statistically tracing and sorting the source electrons, the block is found to contribute about 50% of the accelerated electron bunch.
Adaptive AOA-aided TOA self-positioning for mobile wireless sensor networks.
Wen, Chih-Yu; Chan, Fu-Kai
2010-01-01
Location-awareness is crucial and becoming increasingly important to many applications in wireless sensor networks. This paper presents a network-based positioning system and outlines recent work in which we have developed an efficient, principled approach to localize a mobile sensor using time of arrival (TOA) and angle of arrival (AOA) information from multiple seeds in the line-of-sight scenario. By receiving the periodic broadcasts from the seeds, the mobile target sensors can obtain adequate observations and localize themselves automatically. The proposed positioning scheme performs location estimation in three phases: (I) AOA-aided TOA measurement, (II) geometrical positioning with a particle filter, and (III) adaptive fuzzy control. Based on the distance measurements and the initial position estimate, the adaptive fuzzy control scheme is applied to solve the localization adjustment problem. The simulations show that the proposed approach provides adaptive flexibility and robust improvement in position estimation.
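The geometry behind phases (I)-(II) can be sketched as follows; this is a minimal illustration of how a range plus a bearing yield a position fix, with plain averaging standing in for the paper's particle filter, and all names are illustrative. Each seed's TOA measurement gives a range and its AOA measurement gives a bearing, so every seed produces one candidate position for the target.

```python
import math

def aoa_toa_fix(seed_xy, toa_range, aoa_bearing):
    """One seed's position fix: travel toa_range from the seed along the
    measured bearing (radians, measured from the +x axis)."""
    sx, sy = seed_xy
    return (sx + toa_range * math.cos(aoa_bearing),
            sy + toa_range * math.sin(aoa_bearing))

def localize(measurements):
    """Fuse the per-seed fixes by averaging (a stand-in for the particle
    filter used in the actual scheme)."""
    fixes = [aoa_toa_fix(s, r, b) for s, r, b in measurements]
    n = len(fixes)
    return (sum(x for x, _ in fixes) / n, sum(y for _, y in fixes) / n)

# target at (3, 4); three seeds with exact range/bearing measurements
target = (3.0, 4.0)
seeds = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
meas = []
for sx, sy in seeds:
    r = math.hypot(target[0] - sx, target[1] - sy)
    b = math.atan2(target[1] - sy, target[0] - sx)
    meas.append(((sx, sy), r, b))
print(localize(meas))   # recovers (3.0, 4.0) up to round-off
```

With noisy measurements the per-seed fixes scatter around the true position, which is why a particle filter (and the subsequent fuzzy-control adjustment) is used instead of a plain mean.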
Teaching the Conceptual Scheme "The Particle Nature of Matter" in the Elementary School.
ERIC Educational Resources Information Center
Pella, Milton O.; And Others
Conclusions of an extensive project aimed to prepare lessons and associated materials related to teaching concepts included in the scheme "The Particle Nature of Matter" for grades two through six are presented. The hypothesis formulated for the project was that children in elementary schools can learn theoretical concepts related to the particle…
NASA Astrophysics Data System (ADS)
Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.
2005-09-01
Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
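The drift of charged particles in an electric field, item (iii) above, is commonly realized in kinetic Monte Carlo by tilting the hop rates with half the work done over one hop; the sketch below illustrates that standard construction and is not the simulator's actual implementation (all names and values are illustrative). The tilted rates reproduce the Einstein relation v = D*(qE/kT) in the small-field limit.

```python
import math

def drift_and_diffusion(nu0, a, qE_over_kT):
    """Biased-hop model for a charged particle on a 1-D lattice (spacing a,
    attempt frequency nu0): rates are tilted by half the work per hop."""
    rp = nu0 * math.exp(+0.5 * a * qE_over_kT)   # rate of +a hops
    rm = nu0 * math.exp(-0.5 * a * qE_over_kT)   # rate of -a hops
    v = a * (rp - rm)                # mean drift velocity
    D = 0.5 * a * a * (rp + rm)      # diffusion coefficient
    return v, D

# illustrative numbers: ~10^13 1/s attempt frequency, ~2.35e-8 cm hop length
v, D = drift_and_diffusion(nu0=1.0e13, a=2.35e-8, qE_over_kT=1.0e3)
print(v / D)   # approaches qE/kT for small tilt (Einstein relation)
```

Because detailed balance is built into the tilted rates, the equilibrium carrier distribution in the field is recovered automatically, which is one reason this form is popular in atomistic KMC codes.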
Effects of ramp reset pulses on the address discharge in a shadow mask plasma display panel
NASA Astrophysics Data System (ADS)
Yang, Lanlan; Tu, Yan; Zhang, Xiong; Jiang, Youyan; Zhang, Jian; Wang, Baoping
2007-05-01
A two-dimensional self-consistent numerical simulation model is used to analyse the effects of the ramp reset pulses on the address discharge in a shadow mask plasma display panel (SM-PDP). Some basic parameters such as the slope of the ramp pulse and the terminal voltage of the ramp reset period are varied to investigate their effects. The simulation results illustrate that the wall voltage is mainly decided by the terminal voltage and the firing voltage at the end of the ramp reset period. Moreover, the variation of the ramp slope will also bring a few modifications to the wall voltage. The priming particles in the beginning of the addressing period are related to the slope of the ramping down voltage pulse. The simulation results can help us optimize the driving scheme of the SM-PDP.
Simulations of single-particle imaging of hydrated proteins with x-ray free-electron lasers
NASA Astrophysics Data System (ADS)
Fortmann-Grote, C.; Bielecki, J.; Jurek, Z.; Santra, R.; Ziaja-Motyka, B.; Mancuso, A. P.
2017-08-01
We employ start-to-end simulations to model coherent diffractive imaging of single biomolecules using x-ray free electron lasers. This technique is expected to yield new structural information about biologically relevant macromolecules thanks to the ability to study the isolated sample in its natural environment as opposed to crystallized or cryogenic samples. The effect of the solvent on the diffraction pattern and interpretability of the data is an open question. We present first results of calculations where the solvent is taken into account explicitly. They were performed with a molecular dynamics scheme for a sample consisting of a protein and a hydration layer of varying thickness. Through R-factor analysis of the simulated diffraction patterns from hydrated samples, we show that the scattering background from realistic hydration layers of up to 3 Å thickness presents no obstacle for the resolution of molecular structures at the sub-nm level.
NASA Astrophysics Data System (ADS)
Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan
2018-05-01
A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmospheric Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate by a series of simulations with a simplified standalone radiative transfer model, atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that Baum-Yang scheme yields generally weaker effects of ice cloud on temperature profiles both in shortwave and longwave spectrum. CAM5 simulations indicate that Baum-Yang scheme in place of Mitchell/Fu scheme tends to cool the upper atmosphere and strengthen the thermodynamic instability in low- and mid-latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, reduced downward longwave flux to surface in Baum-Yang scheme mitigates ice-albedo feedback in the Arctic as well as water vapor and cloud feedbacks in low- and mid-latitudes, resulting in an overall temperature decrease by 3.0/1.4 °C globally compared with Mitchell/Fu schemes. Radiative effect and climate feedback of the three ice cloud optical schemes documented in this study can be referred for future improvements on ice cloud simulation in CAM5.
NASA Astrophysics Data System (ADS)
Wang, H.; Kravitz, B.; Rasch, P. J.; Morrison, H.; Solomon, A.
2014-12-01
Previous process-oriented modeling studies have highlighted the dependence of the effectiveness of cloud brightening by aerosols on cloud regimes in the warm marine boundary layer. Cloud microphysical processes in clouds that contain ice, and hence the mechanisms that drive aerosol-cloud interactions, are more complicated than in warm clouds. Interactions between ice particles and liquid drops add additional levels of complexity to aerosol effects. A cloud-resolving model is used to study aerosol-cloud interactions in the Arctic triggered by strong aerosol emissions, through either geoengineering injection or concentrated sources such as shipping and fires. An updated cloud microphysical scheme with prognostic aerosol and cloud particle numbers is employed. Model simulations are performed in pure supercooled liquid and mixed-phase clouds, separately, with or without an injection of aerosols into either a clean or a more polluted Arctic boundary layer. Vertical mixing and cloud scavenging of particles injected from the surface are still quite efficient in the less turbulent cold environment. Overall, the injection of aerosols into the Arctic boundary layer can delay the collapse of the boundary layer and increase low-cloud albedo. The pure liquid clouds are more susceptible to the increase in aerosol number concentration than the mixed-phase clouds. Rain production processes are more effectively suppressed by aerosol injection, whereas ice precipitation (snow) is affected less; thus the effectiveness of brightening mixed-phase clouds is lower than for liquid-only clouds. Aerosol injection into a clean boundary layer results in a greater cloud albedo increase than injection into a polluted one, consistent with current knowledge about aerosol-cloud interactions. Unlike previous studies investigating warm clouds, the impact of dynamical feedback due to precipitation changes is small. 
According to these results, which are dependent upon the representation of ice nucleation processes in the employed microphysical scheme, Arctic geoengineering/shipping could have substantial local radiative effects, but is unlikely to be effective as the sole means of counterbalancing warming due to climate change.
Multi-resolution Delta-plus-SPH with tensile instability control: Towards high Reynolds number flows
NASA Astrophysics Data System (ADS)
Sun, P. N.; Colagrossi, A.; Marrone, S.; Antuono, M.; Zhang, A. M.
2018-03-01
It is well known that the use of SPH models in simulating flows at high Reynolds numbers is limited by the onset of tensile instability in fluid regions characterized by high vorticity and negative pressure. In order to overcome this issue, the δ+-SPH scheme is modified by implementing a Tensile Instability Control (TIC). The latter consists of switching the momentum equation to a non-conservative formulation in the unstable flow regions. The loss of conservation properties is shown to induce small errors, provided that the particle distribution is regular. The latter condition can be ensured thanks to the implementation of a Particle Shifting Technique (PST). The novel variant of the δ+-SPH is proved to be effective in preventing the onset of tensile instability. Several challenging benchmark tests involving flows past bodies at large Reynolds numbers have been used. Among these, a simulation of a deforming foil resembling a fish-like swimming body is presented as a practical application of the δ+-SPH model in biological fluid mechanics.
De Nicola, Antonio; Kawakatsu, Toshihiro; Milano, Giuseppe
2014-12-09
A procedure based on Molecular Dynamics (MD) simulations employing soft potentials derived from self-consistent field (SCF) theory (named MD-SCF), able to generate well-relaxed all-atom structures of polymer melts, is proposed. All-atom structures having structural correlations indistinguishable from those obtained by long MD relaxations have been obtained for poly(methyl methacrylate) (PMMA) and poly(ethylene oxide) (PEO) melts. The proposed procedure leads to computational costs that depend mainly on system size rather than on chain length. Several advantages of the proposed procedure over current coarse-graining/reverse-mapping strategies are apparent. No parametrization is needed to generate relaxed structures of different polymers at different scales or resolutions. There is no need for special algorithms or back-mapping schemes to change the resolution of the models. This characteristic makes the procedure general and its extension to other polymer architectures straightforward. A similar procedure can be easily extended to the generation of all-atom structures of block copolymer melts and polymer nanocomposites.
Exact charge and energy conservation in implicit PIC with mapped computational meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Barnes, D. C.
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov–Poisson formulation), ours is based on a nonlinearly converged Vlasov–Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant–Friedrichs–Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton–Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
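The field–particle iteration at the heart of such a scheme can be illustrated with a stripped-down sketch. The code below is a hypothetical simplification, not the authors' implementation: one species, nearest-grid-point deposition, normalized units, and plain Picard (fixed-point) iteration in place of the Jacobian-free Newton–Krylov solver and particle sub-stepping described above.

```python
import numpy as np

def implicit_pic_step(x, v, E, dt, L, q=-1.0, qm=-1.0, tol=1e-10, max_iter=500):
    """One time-centered Vlasov-Ampere step, iterated to self-consistency.

    x, v : particle positions/velocities; E : grid electric field (periodic box
    of length L). q is the macro-particle charge, qm = q/m.
    """
    ng = E.size
    dx = L / ng
    E_new = E.copy()
    for _ in range(max_iter):
        E_half = 0.5 * (E + E_new)                  # time-centered field
        v_new = v.copy()
        for _ in range(5):                          # inner particle fixed point
            x_half = (x + 0.25 * dt * (v + v_new)) % L
            cell = (x_half / dx).astype(int) % ng   # nearest-grid-point cell
            v_new = v + dt * qm * E_half[cell]
        # deposit the time-centered current density (NGP weighting)
        J = np.zeros(ng)
        np.add.at(J, cell, q * 0.5 * (v + v_new) / dx)
        # Ampere's law dE/dt = -J closes the field update
        E_next = E - dt * J
        if np.max(np.abs(E_next - E_new)) < tol:    # nonlinear convergence
            E_new = E_next
            break
        E_new = E_next
    x_new = (x + 0.5 * dt * (v + v_new)) % L
    return x_new, v_new, E_new
```

Because field and particles are converged together, conservation errors shrink to the iteration tolerance rather than accumulating with the time step, which is the property the paper establishes rigorously.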
Exploration of thermal counterflow in He II using particle tracking velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mastracci, Brian; Guo, Wei
Flow visualization using particle image velocimetry (PIV) and particularly particle tracking velocimetry (PTV) has been applied to thermal counterflow in He II for nearly two decades now, but the results remain difficult to interpret because tracer particle motion can be influenced by both the normal fluid and superfluid components of He II as well as the quantized vortex tangle. For instance, in one early experiment it was observed (using PTV) that tracer particles move at the normal fluid velocity v_n, while in another it was observed (using PIV) that particles move at v_n/2. Besides the different visualization methods, the range of applied heat flux investigated by these experiments differed by an order of magnitude. To resolve this apparent discrepancy and explore the statistics of particle motion in thermal counterflow, we apply the PTV method to a wide range of heat flux at a number of different fluid temperatures. In our analysis, we introduce a scheme for analyzing the velocity of particles presumably moving with the normal fluid separately from those presumably influenced by the quantized vortex tangle. Our results show that for lower heat flux there are two distinct peaks in the streamwise particle velocity probability density function (PDF), with one centered at the normal fluid velocity v_n (named G2 for convenience) while the other is centered near v_n/2 (G1). For higher heat flux there is a single peak centered near v_n/2 (G3). Using our separation scheme, we show quantitatively that there is no size difference between the particles contributing to G1 and G2. We also show that nonclassical features of the transverse particle velocity PDF arise entirely from G1, while the corresponding PDF for G2 exhibits the classical Gaussian form. The G2 transverse velocity fluctuation, backed up by second sound attenuation in decaying counterflow, suggests that large-scale turbulence in the normal fluid is absent from the two-peak region. We offer a brief discussion of the physical mechanisms that may be responsible for our observations, revealing that G1 velocity fluctuations may be linked to fluctuations of quantized vortex line velocity, and suggest a number of numerical simulations that may reveal the underlying physics in detail.
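As a toy illustration of the separation idea (not the authors' actual scheme, which involves additional criteria), one can classify each tracer by whether its streamwise velocity lies closer to v_n or to v_n/2 and then accumulate statistics for each population separately:

```python
import numpy as np

def separate_populations(u_stream, v_n):
    """Assign tracers to G1 (near v_n/2) or G2 (near v_n) by nearest center."""
    closer_to_g1 = np.abs(u_stream - 0.5 * v_n) < np.abs(u_stream - v_n)
    g1 = u_stream[closer_to_g1]    # presumably vortex-influenced tracers
    g2 = u_stream[~closer_to_g1]   # presumably normal-fluid tracers
    return g1, g2
```

Velocity PDFs built from each group separately are what distinguish the Gaussian (G2) from the nonclassical (G1) behavior quoted above.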
NASA Astrophysics Data System (ADS)
Paukert, M.; Hoose, C.; Simmel, M.
2017-03-01
In model studies of aerosol-dependent immersion freezing in clouds, a common assumption is that each ice nucleating aerosol particle corresponds to exactly one cloud droplet. In contrast, the immersion freezing of larger drops—"rain"—is usually represented by a liquid volume-dependent approach, making the parameterizations of rain freezing independent of specific aerosol types and concentrations. This may lead to inconsistencies when aerosol effects on clouds and precipitation are to be investigated, since raindrops consist of the cloud droplets—and corresponding aerosol particles—that have been involved in drop–drop collisions. Here we introduce an extension to a two-moment microphysical scheme in order to account explicitly for particle accumulation in raindrops by tracking the rates of self-collection, autoconversion, and accretion. This provides a direct link between ice nuclei and the primary formation of large precipitating ice particles. A new parameterization scheme of drop freezing is presented to consider multiple ice nuclei within one drop and effective drop cooling rates. In our test cases of deep convective clouds, we find that at altitudes which are most relevant for immersion freezing, the majority of potential ice nuclei have been converted from cloud droplets into raindrops. Compared to the standard treatment of freezing in our model, the less efficient mineral dust-based freezing results in higher rainwater contents in the convective core, affecting both rain and hail precipitation. The aerosol-dependent treatment of rain freezing can reverse the signs of simulated precipitation sensitivities to ice nuclei perturbations.
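The bookkeeping that links immersed aerosol to rain can be sketched in a few lines. This is a hypothetical simplification of the tracking described above, not the paper's two-moment scheme: the fraction of cloud droplets converted to rain by autoconversion and accretion in a time step is assumed to carry the same fraction of the immersed aerosol with it.

```python
def transfer_aerosol(n_aero_cloud, n_cloud, dn_auto, dn_accr):
    """Move immersed-aerosol number from the cloud to the rain category.

    n_aero_cloud : aerosol number residing in cloud droplets
    n_cloud      : cloud droplet number concentration
    dn_auto, dn_accr : droplet numbers converted to rain this step
    Returns (aerosol left in cloud droplets, aerosol moved into rain).
    """
    if n_cloud <= 0.0:
        return n_aero_cloud, 0.0
    frac = min(1.0, (dn_auto + dn_accr) / n_cloud)  # converted droplet fraction
    moved = frac * n_aero_cloud
    return n_aero_cloud - moved, moved
```

The rain-borne aerosol number is then available to the drop-freezing parameterization, providing the direct ice-nuclei link mentioned above.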
Montero-Chacón, Francisco; Cifuentes, Héctor; Medina, Fernando
2017-02-21
This work presents a lattice-particle model for the analysis of steel fiber-reinforced concrete (SFRC). In this approach, fibers are explicitly modeled and connected to the concrete matrix lattice via interface elements. The interface behavior was calibrated by means of pullout tests and a range for the bond properties is proposed. The model was validated with analytical and experimental results under uniaxial tension and compression, demonstrating the ability of the model to correctly describe the effect of fiber volume fraction and distribution on fracture properties of SFRC. The lattice-particle model was integrated into a hierarchical homogenization-based scheme in which macroscopic material parameters are obtained from mesoscale simulations. Moreover, a representative volume element (RVE) analysis was carried out and the results show that such an RVE does exist in the post-peak regime and until localization takes place. Finally, the multiscale upscaling strategy was successfully validated with three-point bending tests.
NASA Astrophysics Data System (ADS)
Sater, Julien
The theory of Artificial Boundary Conditions described by Antoine et al. [2,4-6] for the Schrödinger equation is applied to the Klein-Gordon (KG) equation in two dimensions (2-D) for spinless particles subject to electromagnetic fields. We begin by providing definitions for a basic understanding of the theory of operators, differential geometry, and wave front sets needed to discuss the factorization theorem due to Nirenberg and Hörmander [14, 16]. The laser-free Klein-Gordon equation in 1-D is then discussed, followed by the case including electrodynamic potentials, and concluding with the KG equation in 2-D space with electrodynamic potentials. We then consider numerical simulations of the laser-particle KG equation, including a brief analysis of a finite difference scheme. The conclusion integrates a discussion of the numerical results, the successful completion of the objective set forth, a statement of the open questions encountered, and suggestions of subjects for further research.
Importance biasing scheme implemented in the PRIZMA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has extensive capabilities for describing geometry, sources, and material composition, and for obtaining parameters specified by the user. It can calculate the paths of particle cascades (including neutrons, photons, electrons, positrons, and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, radiation shielding and detection problems). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.
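The generic mechanism behind such a scheme is splitting and Russian roulette at importance boundaries; the sketch below shows that mechanism in isolation (it is not the PRIZMA algorithm itself). A particle crossing into a region of higher importance is split into several lower-weight copies; one crossing into a lower-importance region is rouletted, surviving with a boosted weight. Expected total weight is conserved either way.

```python
import random

def adjust_population(weights, imp_old, imp_new, rng=random):
    """Split/roulette particle weights at an importance boundary."""
    ratio = imp_new / imp_old
    survivors = []
    for w in weights:
        if ratio >= 1.0:
            n = int(ratio)
            if rng.random() < ratio - n:    # stochastic rounding of the split
                n += 1
            survivors.extend([w / ratio] * n)
        elif rng.random() < ratio:          # Russian roulette survival
            survivors.append(w / ratio)
    return survivors
```

Concentrating low-weight particles in important regions (e.g. near a detector) reduces the variance of the tallied functional for a fixed computational cost.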
One-way entangled-photon autocompensating quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walton, Zachary D.; Abouraddy, Ayman F.; Sergienko, Alexander V.
2003-06-01
A quantum cryptography implementation is presented that uses entanglement to combine one-way operation with an autocompensating feature that has hitherto only been available in implementations that require the signal to make a round trip between the users. Using the concept of advanced waves, it is shown that this proposed implementation is related to the round-trip implementation in the same way that Ekert's two-particle scheme is related to the original one-particle scheme of Bennett and Brassard. The practical advantages and disadvantages of the proposed implementation are discussed in the context of existing schemes.
On the effect of galactic outflows in cosmological simulations of disc galaxies
NASA Astrophysics Data System (ADS)
Valentini, Milena; Murante, Giuseppe; Borgani, Stefano; Monaco, Pierluigi; Bressan, Alessandro; Beck, Alexander M.
2017-09-01
We investigate the impact of galactic outflow modelling on the formation and evolution of a disc galaxy, by performing a suite of cosmological simulations with zoomed-in initial conditions (ICs) of a Milky Way-sized halo. We verify how sensitive the general properties of the simulated galaxy are to the way in which stellar-feedback-triggered outflows are implemented, keeping ICs, simulation code and star formation (SF) model all fixed. We present simulations that are based on a version of the gadget3 code where our sub-resolution model is coupled with an advanced implementation of smoothed particle hydrodynamics that ensures a more accurate fluid sampling and an improved description of gas mixing and hydrodynamical instabilities. We quantify the strong interplay between the adopted hydrodynamic scheme and the sub-resolution model describing SF and feedback. We consider four different galactic outflow models, including the one introduced by Dalla Vecchia & Schaye (2012) and a scheme that is inspired by the Springel & Hernquist (2003) model. We find that the sub-resolution prescriptions adopted to generate galactic outflows are the main shaping factor of the stellar disc component at low redshift. The key requirement for a feedback model to succeed in producing a disc-dominated galaxy is the ability to regulate the high-redshift SF (responsible for the formation of the bulge component), the cosmological infall of gas from the large-scale environment, and gas fall-back within the galactic radius at low redshift, in order to avoid an excessively high SF rate at z = 0.
Evaluation of a new microphysical aerosol module in the ECMWF Integrated Forecasting System
NASA Astrophysics Data System (ADS)
Woodhouse, Matthew; Mann, Graham; Carslaw, Ken; Morcrette, Jean-Jacques; Schulz, Michael; Kinne, Stefan; Boucher, Olivier
2013-04-01
The Monitoring Atmospheric Composition and Climate II (MACC-II) project will provide a system for monitoring and predicting atmospheric composition. As part of the first phase of MACC, the GLOMAP-mode microphysical aerosol scheme (Mann et al., 2010, GMD) was incorporated within the ECMWF Integrated Forecasting System (IFS). The two-moment modal GLOMAP-mode scheme includes new particle formation, condensation, coagulation, cloud-processing, and wet and dry deposition. GLOMAP-mode is already incorporated as a module within the TOMCAT chemistry transport model and within the UK Met Office HadGEM3 general circulation model. The microphysical, process-based GLOMAP-mode scheme allows an improved representation of aerosol size and composition and can simulate aerosol evolution in the troposphere and stratosphere. The new aerosol forecasting and re-analysis system (known as IFS-GLOMAP) will also provide improved boundary conditions for regional air quality forecasts, and will benefit from assimilation of observed aerosol optical depths in near real time. Presented here is an evaluation of the performance of the IFS-GLOMAP system in comparison to in situ aerosol mass and number measurements, and remotely-sensed aerosol optical depth measurements. Future development will provide a fully-coupled chemistry-aerosol scheme, and the capability to resolve nitrate aerosol.
NASA Astrophysics Data System (ADS)
Guimberteau, M.; Ducharne, A.; Ciais, P.; Boisier, J. P.; Peng, S.; De Weirdt, M.; Verbeeck, H.
2014-06-01
This study analyzes the performance of the two soil hydrology schemes of the land surface model ORCHIDEE in estimating Amazonian hydrology and phenology for five major sub-basins (Xingu, Tapajós, Madeira, Solimões and Negro), during the 29-year period 1980-2008. A simple 2-layer scheme with a bucket topped by an evaporative layer is compared to an 11-layer diffusion scheme. The soil schemes are coupled with a river routing module and a process model of plant physiology, phenology and carbon dynamics. The simulated water budget and vegetation functioning components are compared with several data sets at sub-basin scale. The use of the 11-layer soil diffusion scheme does not significantly change the Amazonian water budget simulation when compared to the 2-layer soil scheme (+3.1 and -3.0% in evapotranspiration and river discharge, respectively). However, the higher water-holding capacity of the soil and the physically based representation of runoff and drainage in the 11-layer soil diffusion scheme result in more dynamic soil water storage variation and improved simulation of the total terrestrial water storage when compared to GRACE satellite estimates. The greater soil water storage within the 11-layer scheme also results in increased dry-season evapotranspiration (+0.5 mm d⁻¹, +17%) and improves river discharge simulation in the southeastern sub-basins such as the Xingu. Evapotranspiration over this sub-basin is sustained during the whole dry season with the 11-layer soil diffusion scheme, whereas the 2-layer scheme limits it after only 2 dry months. Lower plant drought stress simulated by the 11-layer soil diffusion scheme leads to better simulation of the seasonal cycle of photosynthesis (GPP) when compared to a data-driven GPP model based on eddy covariance and satellite greenness measurements.
A dry-season length between 4 and 7 months over the entire Amazon Basin is found to be critical in distinguishing differences in hydrological feedbacks between the soil and the vegetation cover simulated by the two soil schemes. On average, the multilayer soil diffusion scheme provides little improvement in simulated hydrology over the wet tropical Amazonian sub-basins, but a more significant improvement is found over the drier sub-basins. The use of a multilayer soil diffusion scheme might become critical for assessments of future hydrological changes, especially in southern regions of the Amazon Basin where longer dry seasons and more severe droughts are expected in the next century.
Computational scheme for pH-dependent binding free energy calculation with explicit solvent.
Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R
2016-01-01
We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, its effect has generally been neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant-pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state, or of releasing a single protonation state to multiple states, the pH-dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol⁻¹. We also discuss the characteristics of the three long-range interaction calculation methods for constant-pH simulations.
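The BAR step of the analysis is standard and compact enough to sketch. The code below solves the usual BAR implicit equation for the free energy difference between two states from forward and reverse work samples; it is a generic estimator under the assumption of equal sample sizes, not the paper's full EDS/HREM constant-pH pipeline.

```python
import numpy as np

def bar_free_energy(w_fwd, w_rev, beta=1.0, lo=-50.0, hi=50.0, tol=1e-9):
    """Solve sum f(beta*(w_fwd - dF)) = sum f(beta*(w_rev + dF)) by bisection,
    where f(x) = 1/(1 + exp(x)) is the Fermi function."""
    def fermi(x):
        return 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))
    def imbalance(dF):   # increasing in dF; its root is the BAR estimate
        return fermi(beta * (w_fwd - dF)).sum() - fermi(beta * (w_rev + dF)).sum()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A quick self-check uses Crooks-consistent Gaussian work distributions, for which the true free energy difference is known analytically.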
A quantum-mechanics molecular-mechanics scheme for extended systems
NASA Astrophysics Data System (ADS)
Hunt, Diego; Sanchez, Veronica M.; Scherlis, Damián A.
2016-08-01
We introduce and discuss a hybrid quantum-mechanics molecular-mechanics (QM-MM) approach for Car-Parrinello DFT simulations with pseudopotentials and a planewave basis, designed for the treatment of periodic systems. In this implementation the MM atoms are considered as additional QM ions having fractional charges of either sign, which provides conceptual and computational simplicity by exploiting the machinery already existing in planewave codes to deal with electrostatics in periodic boundary conditions. With this strategy, both the QM and MM regions are contained in the same supercell, which determines the periodicity for the whole system. Thus, while this method is not meant to compete with non-periodic QM-MM schemes able to handle extremely large but finite MM regions, it is shown that for periodic systems of a few hundred atoms, our approach provides substantial savings in computational time by treating a fraction of the particles classically. The performance and accuracy of the method are assessed through the study of energetic, structural, and dynamical aspects of the water dimer and of the aqueous bulk phase. Finally, the QM-MM scheme is applied to the computation of the vibrational spectra of water layers adsorbed at the TiO2 anatase (1 0 1) solid-liquid interface. This investigation suggests that the inclusion of a second monolayer of H2O molecules is sufficient to induce, on the first adsorbed layer, vibrational dynamics similar to those taking place in the presence of an aqueous environment. The present QM-MM scheme appears to be a very interesting tool for efficiently performing molecular dynamics simulations of complex condensed matter systems, from solutions to nanoconfined fluids to different kinds of interfaces.
Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths
NASA Astrophysics Data System (ADS)
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2017-10-01
Current modelling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding: the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the meter range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for laser pulse propagation. Then only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)². This makes wide-ranging parameter studies possible and suits the λ0 << λp regime. We present the 3D version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. Further, a discussion and characterization of the validity of the PGC solver for injection schemes at plasma scale lengths, such as down-ramp injection, magnetic injection, and ionization injection, through parametric studies, full PIC simulations, and theoretical scalings, is presented. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.
Massive parallel 3D PIC simulation of negative ion extraction
NASA Astrophysics Data System (ADS)
Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu
2017-09-01
The 3D PIC-MCC code ONIX is dedicated to modeling negative hydrogen/deuterium ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low-pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computational performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), which are imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulate the sheath in front of the plasma grid.
Simulating galactic dust grain evolution on a moving mesh
NASA Astrophysics Data System (ADS)
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
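Stiff drag is the piece that motivates a (semi-)implicit integrator. As a minimal sketch under simplifying assumptions (first order, frozen gas velocity, no back-reaction on the gas; not the actual second-order AREPO coupling), the linear drag law dv/dt = -(v - v_gas)/t_stop can be integrated exactly over a step, which stays stable even when dt >> t_stop:

```python
import numpy as np

def drag_update(v_dust, v_gas, dt, t_stop):
    """Relax dust velocity toward a (frozen) gas velocity over one step."""
    decay = np.exp(-dt / t_stop)        # exact integrating factor
    return v_gas + (v_dust - v_gas) * decay
```

An explicit update would require dt < t_stop for stability; the exponential form removes that restriction, which matters for small, well-coupled grains.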
NASA Astrophysics Data System (ADS)
Fiore, Andrew M.; Swan, James W.
2018-01-01
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. 
The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard-sphere dispersions illustrate the method and are used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units, capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue, is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805).
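The superposition sampling idea described above can be illustrated with a small toy calculation (a sketch only: dense stand-in matrices replace the actual real-space and wave-space Ewald sums, and `make_spd` is a hypothetical helper). If both contributions to the mobility are independently positive-definite, summing samples drawn from each reproduces the total covariance without ever factorizing the full tensor:

```python
import numpy as np

def make_spd(n, seed):
    # Random symmetric positive-definite matrix: a stand-in for the
    # real-space or wave-space part of the mobility tensor.
    A = np.random.default_rng(seed).normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

n = 6
M_real = make_spd(n, 1)   # stand-in for the real-space Ewald sum
M_wave = make_spd(n, 2)   # stand-in for the wave-space Ewald sum
M = M_real + M_wave       # total mobility = covariance of Brownian steps

# Factor each positive-definite contribution separately ...
L_r = np.linalg.cholesky(M_real)
L_w = np.linalg.cholesky(M_wave)

def brownian_step(rng):
    # ... and superpose two independent Gaussian samples: the covariance
    # of the sum is M_real + M_wave = M, as required.
    return L_r @ rng.normal(size=n) + L_w @ rng.normal(size=n)

dx = brownian_step(np.random.default_rng(0))
```

In the actual method the two samples are generated by iterative square-root methods in real space and FFT-based sampling in wave space, so no dense Cholesky factorization is formed; the toy above only demonstrates the covariance identity that makes the superposition valid.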
Parallel discrete-event simulation schemes with heterogeneous processing elements.
Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung
2014-07-01
To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model corresponds to the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme), and the Family model to the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered: in the first kind, the computing capacities of the PEs do not differ greatly, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on complex networks shows synchronizability and scalability regardless of the kind of PEs. The EW scheme never shows synchronizability for a random configuration of PEs of the first kind; however, by regularizing the arrangement of such PEs, the EW scheme can be made synchronizable. In contrast, the EW scheme never shows synchronizability for any configuration of PEs of the second kind.
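The connection between PDES and growing-interface models can be illustrated with a toy conservative update rule (a sketch: a ring of PEs whose local virtual times form the "interface", with PE-dependent mean increments standing in for nonidentical computing capacities):

```python
import numpy as np

def pdes_utilization(n_pe=1000, steps=200, capacities=None, seed=0):
    """Toy conservative PDES on a ring: PE i may advance its local
    virtual time only if it does not lead either neighbor. The set of
    local times then evolves like a growing interface; the fraction of
    PEs able to advance is the utilization, and the spread of the times
    is the interface width that controls synchronizability."""
    rng = np.random.default_rng(seed)
    if capacities is None:
        capacities = np.ones(n_pe)          # identical PEs
    tau = np.zeros(n_pe)
    util = []
    for _ in range(steps):
        left = np.roll(tau, 1)
        right = np.roll(tau, -1)
        can_advance = (tau <= left) & (tau <= right)
        # exponential time increments with PE-dependent mean (capacity)
        tau = tau + can_advance * rng.exponential(capacities)
        util.append(can_advance.mean())
    # average utilization over the second half, plus the final width
    return np.mean(util[steps // 2:]), tau.std()

u, w = pdes_utilization()
```

Passing a broad `capacities` array emulates the "extremely widespread" second kind of PE distribution studied in the abstract.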
Detecting Aerosol Effect on Deep Precipitation Systems: A Modeling Study
NASA Astrophysics Data System (ADS)
Li, X.; Tao, W.; Khain, A.; Kummerow, C.; Simpson, J.
2006-05-01
Urban areas produce high concentrations of anthropogenic aerosols. These aerosols are generally hygroscopic and may serve as cloud condensation nuclei (CCN). This study focuses on the aerosol indirect effect on deep convective systems over land. Such systems contribute the majority of summertime rainfall and are important for the local hydrological cycle and for weather forecasting. In a companion presentation (Tao et al.) in this session, the mechanisms of aerosol-cloud-precipitation interactions in deep convective systems are explored using cloud-resolving model simulations. Here those model results are analyzed to guide the detection of the impact of aerosols acting as CCN on summertime deep convection with currently available observation methods. The two-dimensional Goddard Cumulus Ensemble (GCE) model with an explicit microphysical scheme is used to simulate the aerosol effect on deep precipitation systems. The model explicitly simulates the size distributions of aerosol particles, as well as cloud droplets, rain, ice crystals, snow, graupel, and hail. Two case studies are analyzed: a midlatitude summertime squall line in Oklahoma and a sea-breeze convection case in Florida. It is shown that increasing the CCN number concentration does not affect the rainfall structure or rain duration in either case. The total surface rainfall is reduced in the squall-line case but remains essentially the same in the sea-breeze case. For the long-lived squall system, which has a significant stratiform rain component, the probability density function (PDF) of surface rainfall is more sensitive to changes in the initial CCN concentration than the total surface rainfall is. The possibility of detecting the aerosol indirect effect in deep precipitation systems from space is also studied in this presentation.
The hydrometeor fields from the GCE model simulations are used as inputs to a microwave radiative transfer model. Brightness temperatures (Tb) at the higher frequencies (35 GHz and 85 GHz) are found to be quite sensitive to variations in CCN concentration, because high-frequency brightness temperatures respond to large, ice-phase particles. In a clean environment, deep convection produces larger cloud particles; when these are transported above the freezing level by strong updrafts, they form larger precipitable ice particles (snow, graupel and hail) than in the polluted-environment simulations. These larger ice particles result in significantly colder high-frequency brightness temperatures in the clean-scenario simulations.
NASA Astrophysics Data System (ADS)
Maiti, Amitesh; McGrother, Simon
2004-01-01
Dissipative particle dynamics (DPD) is a mesoscale modeling method for simulating equilibrium and dynamical properties of polymers in solution. The basic idea has been around for several decades in the form of bead-spring models. A few years ago, Groot and Warren [J. Chem. Phys. 107, 4423 (1997)] established an important link between DPD and the Flory-Huggins χ-parameter theory for polymer solutions. We revisit the Groot-Warren theory and investigate the DPD interaction parameters as a function of bead size. In particular, we show a consistent scheme of computing the interfacial tension in a segregated binary mixture. Results for three systems chosen for illustration are in excellent agreement with experimental results. This opens the door for determining DPD interactions using interfacial tension as a fitting parameter.
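The Groot-Warren link between the Flory-Huggins χ-parameter and the DPD repulsion can be sketched as a one-line mapping (standard DPD reduced units assumed; the 3.27 coefficient is the fit reported for bead density ρ = 3):

```python
def dpd_repulsion(chi, rho=3.0, kBT=1.0):
    """Map a Flory-Huggins chi-parameter to a DPD cross-repulsion a_ij,
    following Groot & Warren (1997). At bead density rho = 3 the fit is
    a_ij ~ a_ii + 3.27*chi, where the like-bead repulsion a_ii = 75*kBT/rho
    is chosen to reproduce the compressibility of water."""
    if abs(rho - 3.0) > 1e-12:
        raise ValueError("the 3.27 coefficient is fitted for rho = 3")
    a_ii = 75.0 * kBT / rho   # = 25 at rho = 3, kBT = 1
    return a_ii + 3.27 * chi
```

Immiscible beads (large χ) thus get a larger cross-repulsion, which drives the segregation whose interfacial tension the abstract uses as a fitting parameter.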
Experimental realization of underdense plasma photocathode wakefield acceleration at FACET
NASA Astrophysics Data System (ADS)
Scherkl, Paul
2017-10-01
Novel electron beam sources based on compact plasma accelerator concepts are currently maturing into the driving technology for next-generation high-energy physics and light source facilities. Electron beams of ultra-high brightness in particular could pave the way for major advances in both scientific and commercial applications, but their generation remains tremendously challenging. This presentation outlines the experimental demonstration of the world's first bright electron beam source based on spatiotemporally synchronized laser pulses injecting electrons into particle-driven plasma wakefields at FACET. Two distinct modes of operation - laser-triggered density downramp injection (``Plasma Torch'') and underdense plasma photocathode acceleration (``Trojan Horse'') - and their intermediate transitions are characterized and contrasted. Extensive particle-in-cell simulations substantiate the presentation of the experimental results. In combination with novel techniques to minimize the beam energy spread, the acceleration scheme presented here promises ultra-high beam quality and brightness.
NASA Astrophysics Data System (ADS)
Tariku, Tebikachew Betru; Gan, Thian Yew
2018-06-01
Regional climate models (RCMs) have been used to simulate rainfall at the relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the Weather Research and Forecasting (WRF) RCM in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of physical parameterization schemes for cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) processes. Using the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of the NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations were compared with satellite data combined with ground observations and with NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, and short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated T2 and LWRAD more accurately (correlation coefficients >0.8 and low root-mean-square errors) than SWRAD and rainfall for the NRB. Further, the simulated rainfall is more sensitive to the PBL, Cu and MP schemes than to the other schemes; for example, WRF simulated less biased rainfall with Kain-Fritsch cumulus combined with the MYJ rather than the YSU PBL scheme. The simulation of T2 is more sensitive to the LSM and Ra schemes than to the Cu, PBL and MP schemes; SWRAD is more sensitive to MP and Ra than to Cu, LSM and PBL; and LWRAD is more sensitive to LSM, Ra and PBL than to Cu and MP. In summary, the following combination of schemes simulated the most representative regional climate of the NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave and Dudhia shortwave radiation, and the Noah LSM.
This configuration of WRF coupled to the Noah LSM has also been shown to simulate a representative regional climate of the NRB over 1980-2001, a period that includes both wet and dry years.
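The preferred configuration above can be written as a WRF `namelist.input` physics block. The option indices below follow the conventional WRF namelist codes for these schemes (a sketch; the exact values should be checked against the WRF version in use, and `sf_sfclay_physics = 2` is the Eta surface layer usually paired with MYJ):

```
&physics
 mp_physics         = 3,   ! WSM3 microphysics
 cu_physics         = 1,   ! Kain-Fritsch cumulus
 bl_pbl_physics     = 2,   ! MYJ PBL
 sf_sfclay_physics  = 2,   ! Eta similarity surface layer (pairs with MYJ)
 sf_surface_physics = 2,   ! Noah land-surface model
 ra_lw_physics      = 1,   ! RRTM longwave
 ra_sw_physics      = 1,   ! Dudhia shortwave
/
```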
Sanjeevi, Sathish K P; Zarghami, Ahad; Padding, Johan T
2018-04-01
Various curved no-slip boundary conditions available in the literature improve the accuracy of lattice Boltzmann simulations over the traditional staircase approximation of curved geometries. Usually, the unknown distribution functions emerging from solid nodes are computed from the known distribution functions using interpolation or extrapolation schemes. With such curved boundary schemes there is a mass loss or gain at each time step, especially apparent at high Reynolds numbers; this is called mass leakage. The issue becomes severe in periodic flows, where accumulated mass leakage affects the computed flow fields over time. In this paper, we examine the mass leakage of the best-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.
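The simplest kind of forced mass conservation scheme, a global rescaling of the populations, can be sketched as follows (a toy illustration of the general idea, not the paper's exact correction schemes; the leaked populations here are emulated by a uniform scale factor):

```python
import numpy as np

def enforce_global_mass(f, m0):
    """Forced global mass conservation for an LBM population array f
    (shape: nodes x velocities): uniformly rescale the distributions so
    the total mass returns to its reference value m0. This counteracts
    the per-step mass leakage of interpolated curved-boundary schemes."""
    return f * (m0 / f.sum())

rng = np.random.default_rng(0)
f = rng.uniform(0.5, 1.5, size=(100, 9))   # toy D2Q9-like populations
m0 = f.sum()                               # reference (initial) mass
f_leaked = f * 0.999                       # emulate one step of leakage
f_fixed = enforce_global_mass(f_leaked, m0)
```

A per-step relative leakage of 0.1% compounds to several percent over a few thousand steps, which is why periodic flows are the worst case.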
On the properties of energy stable flux reconstruction schemes for implicit large eddy simulation
NASA Astrophysics Data System (ADS)
Vermeire, B. C.; Vincent, P. E.
2016-12-01
We begin by investigating the stability, order of accuracy, and dispersion and dissipation characteristics of the extended range of energy stable flux reconstruction (E-ESFR) schemes in the context of implicit large eddy simulation (ILES). We proceed to demonstrate that subsets of the E-ESFR schemes are more stable than collocation nodal discontinuous Galerkin methods recovered with the flux reconstruction approach (FRDG) for marginally resolved ILES of the Taylor-Green vortex. These schemes are shown to have reduced dissipation and dispersion errors relative to FRDG schemes of the same polynomial degree and, simultaneously, to have increased Courant-Friedrichs-Lewy (CFL) limits. Finally, we simulate turbulent flow over an SD7003 aerofoil using two of the most stable E-ESFR schemes identified by the aforementioned Taylor-Green vortex experiments. Results demonstrate that subsets of the E-ESFR schemes appear more stable than the commonly used FRDG method, have increased CFL limits, and are suitable for ILES of complex turbulent flows on unstructured grids.
Investigation of the particle-core structure of odd-mass nuclei in the NpNn scheme
NASA Astrophysics Data System (ADS)
Bucurescu, D.; Cata, G.; Cutoiu, D.; Dragulescu, E.; Ivasu, M.; Zamfir, N. V.; Gizon, A.; Gizon, J.
1989-10-01
The NpNn scheme is applied to data related to collective band structures determined by the unique parity shell model orbitals in odd-A nuclei from the mass regions A≌80-100 and A≌130. Simple systematics are obtained which give a synthetic picture of the evolution of the particle-core coupling in these nuclear regions.
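A minimal sketch of the counting behind the NpNn scheme (assuming the simplest convention, valence particles or holes measured from the nearest magic number; detailed applications sometimes substitute subshell closures such as Z = 64):

```python
MAGIC = [2, 8, 20, 28, 50, 82, 126]

def valence_number(n):
    """Number of valence particles or holes: the distance from nucleon
    number n to the nearest magic (closed-shell) number."""
    return min(abs(n - m) for m in MAGIC)

def np_nn(Z, N):
    """N_p * N_n, the product used to parameterize the evolution of
    collective structure (Casten's NpNn scheme)."""
    return valence_number(Z) * valence_number(N)
```

For example, a nucleus with Z = 38 and N = 60 has 10 valence protons (above Z = 28) and 10 valence neutrons (above N = 50), giving NpNn = 100, while doubly magic nuclei give NpNn = 0.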
SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Tian, Z; Pompos, A
2016-06-15
Purpose: Monte Carlo (MC) simulation is considered the most accurate method for calculating absorbed dose and the fundamental physical quantities related to biological effects in carbon ion therapy, but its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, for parallel processing platforms, aiming at accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supports transport simulation in voxelized geometry with kinetic energies up to 450 MeV/u. A Class II condensed-history algorithm is employed for charged-particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons are not transported; their energy is deposited locally. Energy straggling and multiple scattering are modeled. Production of secondary charged particles from nuclear interactions is implemented based on cross-section and yield data from Geant4, and these particles are transported via the same condensed-history scheme. goCMC supports scoring various quantities of interest, e.g., physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range differences were 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose differences were 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20-100 seconds to simulate 10^7 carbon ions on an AMD Radeon GPU card; the corresponding CPU time for Geant4 with the same setup was 60-100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
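For illustration, the electronic stopping power that such a condensed-history code tabulates can be sketched with the plain Bethe formula (a toy version only: no shell, Barkas or density-effect corrections, a fixed bare charge, and default constants for a carbon ion in water are all assumptions here, not goCMC's actual implementation):

```python
import math

def bethe_stopping_power(E_per_u_MeV, z=6, Z_over_A=0.555, I_eV=75.0):
    """Electronic mass stopping power (MeV cm^2/g) of a bare heavy ion
    from the uncorrected Bethe formula. Defaults approximate a carbon
    ion (z = 6) in water (Z/A ~ 0.555, mean excitation energy I ~ 75 eV)."""
    K = 0.307075          # 4*pi*N_A*r_e^2*m_e*c^2, MeV cm^2/mol
    me_c2 = 0.510999      # electron rest energy, MeV
    u_c2 = 931.494        # atomic mass unit, MeV
    gamma = 1.0 + E_per_u_MeV / u_c2
    beta2 = 1.0 - 1.0 / gamma**2
    I = I_eV * 1e-6       # convert eV -> MeV
    # maximum energy transfer to a free electron (heavy-projectile limit)
    Tmax = 2.0 * me_c2 * beta2 * gamma**2
    arg = 2.0 * me_c2 * beta2 * gamma**2 * Tmax / I**2
    return K * z**2 * Z_over_A / beta2 * (0.5 * math.log(arg) - beta2)
```

The 1/β² prefactor is why stopping power falls with energy over the therapeutic range, and the z² scaling is why a carbon ion deposits roughly 36 times the dose per unit path of a proton at the same velocity.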
NASA Astrophysics Data System (ADS)
Aumont, B.; Camredon, M.; Isaacman-VanWertz, G. A.; Karam, C.; Valorso, R.; Madronich, S.; Kroll, J. H.
2016-12-01
Gas-phase oxidation of VOCs is a gradual process leading to the formation of multifunctional organic compounds, i.e., typically species with a higher oxidation state, high water solubility and low volatility. These species contribute to the formation of secondary organic aerosols (SOA) via multiphase processes involving a myriad of organic species that evolve through thousands of reactions and gas/particle mass exchanges. Explicit chemical mechanisms reflect the understanding of these multigenerational oxidation steps. Such mechanisms rely directly on elementary reactions to describe the chemical evolution and track the identity of organic carbon through the various phases down to the ultimate oxidation products. The development, assessment and improvement of such explicit schemes is a key issue, as major uncertainties remain in the chemical pathways involved in the atmospheric oxidation of organic matter. An array of mass spectrometric techniques (CIMS, PTR-MS, AMS) was recently used to track the composition of organic species during α-pinene oxidation in the MIT environmental chamber, providing an experimental database to evaluate and improve explicit mechanisms. In this study, the GECKO-A tool (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to generate fully explicit oxidation schemes for α-pinene multiphase oxidation, simulating the MIT experiment. The ability of the GECKO-A chemical scheme to explain the organic molecular composition in the gas and condensed phases is explored. First results of this model/observation comparison at the molecular level will be presented.
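The gas/particle mass exchange applied to each of the thousands of products can be sketched with a standard absorptive-partitioning relation (a sketch of the general approach, not GECKO-A's exact implementation; units for both concentrations are assumed to be µg/m³):

```python
def particle_fraction(c_star, c_oa):
    """Equilibrium particle-phase fraction of a species from Pankow-type
    absorptive partitioning: F_p = 1 / (1 + C*/C_OA), where C* is the
    species' effective saturation concentration and C_OA is the total
    organic aerosol mass available to absorb it."""
    return 1.0 / (1.0 + c_star / c_oa)
```

Multifunctional late-generation products with low C* partition almost entirely to the particle phase (e.g. C* = 0.01 µg/m³ at C_OA = 10 µg/m³ gives F_p > 0.99), while volatile early-generation products stay in the gas phase, which is how multigenerational oxidation shifts carbon into SOA.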
NASA Astrophysics Data System (ADS)
Wu, Hui-Chun; Sheng, Zheng-Ming; Zhang, Jie
2008-04-01
We propose a scheme to generate single-cycle powerful terahertz (THz) pulses by ultrashort intense laser pulses obliquely incident on an underdense plasma slab a few THz wavelengths in thickness. THz waves are radiated by a transient net current driven by the laser ponderomotive force in the plasma slab. Analysis and particle-in-cell simulations show that such a THz source is capable of providing powers from megawatts to gigawatts, field strengths from MV/cm to GV/cm, and a broad tunability range, which is potentially useful for nonlinear and high-field THz science and applications.
A Concept for Power Cycling the Electronics of CALICE-AHCAL with the Train Structure of ILC
NASA Astrophysics Data System (ADS)
Göttlicher, Peter; The CALICE Collaboration
Particle flow algorithm calorimetry requires high-granularity, three-dimensional readout. The tight power budget of 40 μW/channel is met by enabling the readout ASIC currents only during beam delivery, corresponding to a 1% duty cycle. EMI noise caused by the current switching needs to be minimized by the power system, and this paper presents ideas, simulations and first measurements for minimizing such disturbances. A careful design of the circuits, printed circuit boards and grounding scheme, together with the use of floating supplies, allows current loops to be closed locally, voltages to be stabilized, and currents in the metal structures to be minimized.
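The power-pulsing arithmetic behind this scheme can be sketched as follows (illustrative numbers; only the 40 μW/channel budget and the ~1% duty cycle come from the text, the on-current and supply voltage are hypothetical):

```python
def average_power_uW(i_on_mA, v_supply_V, duty=0.01, i_off_mA=0.0):
    """Average per-channel power (uW) for a power-pulsed readout ASIC:
    the chip draws i_on only during the beam train (duty cycle ~1% for
    the ILC bunch-train structure) and i_off otherwise."""
    p_on = i_on_mA * 1e-3 * v_supply_V * 1e6    # instantaneous power on, uW
    p_off = i_off_mA * 1e-3 * v_supply_V * 1e6  # leakage power off, uW
    return duty * p_on + (1.0 - duty) * p_off

# e.g. a hypothetical 1 mA draw at 3.3 V switched at 1% duty cycle
p = average_power_uW(1.0, 3.3)   # -> 33 uW average, inside the 40 uW budget
```

The same calculation shows why residual off-state current matters: even 0.1 mA of leakage at 3.3 V adds ~330 µW × 0.99 and would break the budget, hence the emphasis on fully disabling the ASIC currents between trains.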
Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.
2012-01-01
Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32 cubed to 192 cubed, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.
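The nonlinear weights referred to above are the classic Jiang-Shu construction; a minimal sketch of the fifth-order WENO reconstruction at a cell interface (uniform grid, left-biased stencil) is:

```python
def weno5_reconstruct(f0, f1, f2, f3, f4, eps=1e-6):
    """Fifth-order WENO (Jiang-Shu) reconstruction of the value at the
    right interface of cell i from the five cell averages
    f0..f4 = f[i-2..i+2]. Smooth data recovers the optimal linear
    weights (1/10, 6/10, 3/10); near discontinuities the weights shift
    toward the smoothest substencil."""
    # smoothness indicators for the three 3-point substencils
    b0 = 13/12*(f0 - 2*f1 + f2)**2 + 0.25*(f0 - 4*f1 + 3*f2)**2
    b1 = 13/12*(f1 - 2*f2 + f3)**2 + 0.25*(f1 - f3)**2
    b2 = 13/12*(f2 - 2*f3 + f4)**2 + 0.25*(3*f2 - 4*f3 + f4)**2
    # linear (optimal) weights -> normalized nonlinear weights
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    w0, w1, w2 = a0/s, a1/s, a2/s
    # three third-order candidate reconstructions
    p0 = (2*f0 - 7*f1 + 11*f2) / 6
    p1 = (-f1 + 5*f2 + 2*f3) / 6
    p2 = (2*f2 + 5*f3 - f4) / 6
    return w0*p0 + w1*p1 + w2*p2
```

The extra dissipation the study observes in the fifth-order scheme on coarse grids comes from these weights departing from their optimal values on marginally resolved turbulent fields, which the seventh- and ninth-order variants mitigate.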