Simulation of deterministic energy-balance particle agglomeration in turbulent liquid-solid flows
NASA Astrophysics Data System (ADS)
Njobuenwu, Derrick O.; Fairweather, Michael
2017-08-01
An efficient technique to simulate turbulent particle-laden flow at high mass loadings within the four-way coupled simulation regime is presented. The technique implements large-eddy simulation, discrete particle simulation, a deterministic treatment of inter-particle collisions, and an energy-balanced particle agglomeration model. The algorithm to detect inter-particle collisions is such that the computational costs scale linearly with the number of particles present in the computational domain. On detection of a collision, particle agglomeration is tested based on the pre-collision kinetic energy, restitution coefficient, and van der Waals interactions. The performance of the technique developed is tested by performing parametric studies on the influence of the restitution coefficient (en = 0.2, 0.4, 0.6, and 0.8), particle size (dp = 60, 120, 200, and 316 μm), Reynolds number (Reτ = 150, 300, and 590), and particle concentration (αp = 5.0 × 10^-4, 1.0 × 10^-3, and 5.0 × 10^-3) on particle-particle interaction events (collision and agglomeration). The results demonstrate that the collision frequency shows a linear dependency on the restitution coefficient, while the agglomeration rate shows an inverse dependence. Collisions among smaller particles are more frequent and efficient in forming agglomerates than those of coarser particles. The particle-particle interaction events show a strong dependency on the shear Reynolds number Reτ, while increasing the particle concentration effectively enhances particle collision and agglomeration whilst having only a minor influence on the agglomeration rate. Overall, the sensitivity of the particle-particle interaction events to the selected simulation parameters is found to influence the population and distribution of the primary particles and agglomerates formed.
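The energy-balance sticking criterion described above can be sketched as follows: a detected collision produces an agglomerate when the rebound kinetic energy left after restitution losses cannot overcome the van der Waals adhesion well at contact. This is a minimal illustration, not the authors' exact model; the adhesion expression for equal spheres (A·dp/(24·z0)) and the minimum-separation value z0 are common textbook assumptions, and all names are hypothetical.

```python
def will_agglomerate(m1, m2, v_rel, e_n, hamaker_a, d_p, z0=1.65e-10):
    """Energy-balance agglomeration test for a binary collision (SI units).

    The pair sticks when the kinetic energy of the reduced-mass system
    after restitution (e_n * v_rel) is below the van der Waals adhesion
    energy at contact separation z0 (illustrative expression for equal
    spheres of diameter d_p).
    """
    m_red = m1 * m2 / (m1 + m2)                   # reduced mass of the pair
    ke_rebound = 0.5 * m_red * (e_n * v_rel) ** 2  # post-collision kinetic energy
    e_vdw = hamaker_a * d_p / (24.0 * z0)          # assumed adhesion well depth
    return ke_rebound < e_vdw
```

A slow, inelastic collision of small particles sticks, while a fast, elastic one rebounds, consistent with the trends reported in the abstract.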
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus provide a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
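The "low-weight merging" idea can be illustrated with a minimal sketch: to insert a freshly nucleated simulation particle at constant particle count, the two lowest-weight Monte Carlo particles are merged into one, conserving total statistical weight and weighted mass. The function name and the (weight, size) tuple representation are assumptions for illustration; the paper's scheme may differ in detail.

```python
def merge_lowest_weights(particles):
    """Merge the two lowest-weight weighted Monte Carlo particles.

    particles: list of (weight, size) tuples. The merged particle carries
    the summed weight and the mass-weighted mean size, so total weight and
    total weighted mass are conserved. The freed slot can then hold a newly
    nucleated particle without growing the ensemble.
    """
    particles = sorted(particles, key=lambda p: p[0])
    (w1, x1), (w2, x2) = particles[0], particles[1]
    w = w1 + w2
    x = (w1 * x1 + w2 * x2) / w   # mass-weighted mean size
    return [(w, x)] + particles[2:]
```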
Monte Carlo simulations of particle acceleration at oblique shocks: Including cross-field diffusion
NASA Technical Reports Server (NTRS)
Baring, M. G.; Ellison, D. C.; Jones, F. C.
1995-01-01
The Monte Carlo technique of simulating diffusive particle acceleration at shocks has made spectral predictions that compare extremely well with particle distributions observed at the quasi-parallel region of the earth's bow shock. The current extension of this work to compare simulation predictions with particle spectra at oblique interplanetary shocks has required the inclusion of significant cross-field diffusion (strong scattering) in the simulation technique, since oblique shocks are intrinsically inefficient in the limit of weak scattering. In this paper, we present results from the method we have developed for the inclusion of cross-field diffusion in our simulations, namely model predictions of particle spectra downstream of oblique subluminal shocks. While the high-energy spectral index is independent of the shock obliquity and the strength of the scattering, the latter is observed to profoundly influence the efficiency of injection of cosmic rays into the acceleration process.
NASA Astrophysics Data System (ADS)
Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott
2017-12-01
Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely resolved (e.g., molecular dynamics) and coarse-grained (e.g., continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 084115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics. An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.
NASA Astrophysics Data System (ADS)
Gulliver, Eric A.
The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross sections and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but realistic looking mixture microstructures otherwise. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
Control chart analysis showed Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent quality high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
Machine learning for autonomous crystal structure identification.
Reinhart, Wesley F; Long, Andrew W; Howard, Michael P; Ferguson, Andrew L; Panagiotopoulos, Athanassios Z
2017-07-21
We present a machine learning technique to discover and distinguish relevant ordered structures from molecular simulation snapshots or particle tracking data. Unlike other popular methods for structural identification, our technique requires no a priori description of the target structures. Instead, we use nonlinear manifold learning to infer structural relationships between particles according to the topology of their local environment. This graph-based approach yields unbiased structural information which allows us to quantify the crystalline character of particles near defects, grain boundaries, and interfaces. We demonstrate the method by classifying particles in a simulation of colloidal crystallization, and show that our method identifies structural features that are missed by standard techniques.
Advanced Techniques for Simulating the Behavior of Sand
NASA Astrophysics Data System (ADS)
Clothier, M.; Bailey, M.
2009-12-01
Computer graphics and visualization techniques continue to provide untapped research opportunities, particularly when working with earth science disciplines. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs, we are developing new techniques for simulating sand. In addition, through collaboration with the Oregon Space Grant, we've been communicating with the Jet Propulsion Laboratory (JPL) to exchange ideas and gain feedback on our work. More specifically, JPL's DARTS Laboratory specializes in planetary vehicle simulation, such as the Mars rovers. This simulation utilizes a virtual "sand box" to test how planetary rovers respond to different terrains while traversing them. Unfortunately, this simulation is unable to fully mimic the harsh, sandy environments of those found on Mars. Ideally, these simulations should allow a rover to interact with the sand beneath it, particularly for different sand granularities and densities. In particular, a rover may become stuck in sand due to a lack of friction between the sand and its wheels. In fact, in May 2009, the Spirit rover became stuck in the Martian sand, which has provided additional motivation for this research. In order to develop a new sand simulation model, high performance computing will play a very important role in this work. More specifically, graphics processing units (GPUs) are useful due to their ability to run general purpose algorithms and to perform massively parallel computations. In prior research, simulating vast quantities of sand has been difficult to compute in real-time due to the computational complexity of many colliding particles. With the use of GPUs, however, each particle collision will be parallelized, allowing for a dramatic performance increase. In addition, spatial partitioning will also provide a speed boost by limiting the number of particle collision calculations.
However, since the goal of this research is to simulate the look and behavior of sand, this work will go beyond simple particle collision. In particular, we can continue to use our parallel algorithms not only on single particles but on particle “clumps” that consist of multiple combined particles. Since sand is typically not spherical in nature, these particle “clumps” help to simulate the coarse nature of sand. In a simulation environment, multiple combined particles could be used to simulate the polygonal and granular nature of sand grains. Thus, a diversity of sand particles can be generated. The interaction between these particles can then be parallelized using GPU hardware. As such, this research will investigate different graphics and physics techniques and determine the tradeoffs in performance and visual quality for sand simulation. An enhanced sand model through the use of high performance computing and GPUs has great potential to impact research for both earth and space scientists. Interaction with JPL has provided an opportunity for us to refine our simulation techniques that can ultimately be used for their vehicle simulator. As an added benefit of this work, advancements in simulating sand can also benefit scientists here on earth, especially in regard to understanding landslides and debris flows.
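The spatial-partitioning speedup mentioned above can be sketched with a uniform hash grid: each particle is binned by cell, and collision tests are restricted to pairs in the same or adjacent cells, avoiding the naive check of all N² pairs. This is a generic CPU sketch with hypothetical function names; the actual work targets GPU parallelization.

```python
from collections import defaultdict
from itertools import product

def build_cell_list(positions, cell_size):
    """Hash each particle index into a uniform 3D grid cell."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(i)
    return grid

def candidate_pairs(positions, cell_size):
    """Collision candidates: only pairs in the same or adjacent cells.

    cell_size should be at least the largest particle diameter so that no
    touching pair spans more than one cell boundary.
    """
    grid = build_cell_list(positions, cell_size)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:
                        pairs.add((i, j))
    return pairs
```

Only nearby particles survive the broad phase; an exact sphere-overlap (or clump-overlap) test then runs on the much smaller candidate set.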
Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics
NASA Astrophysics Data System (ADS)
Benítez-Llambay, Alejandro
2017-12-01
Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
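A minimal sketch of the SPH interpolation that such visualization relies on: a field value at a point is a kernel-weighted sum over nearby particles, A(r) ≈ Σ_j (m_j/ρ_j) A_j W(|r − r_j|, h). The 2D cubic-spline kernel and the function names below are illustrative assumptions, not Py-SPHViewer's API.

```python
import math

def w_cubic(r, h):
    """2D cubic-spline SPH kernel with normalisation 10/(7*pi*h^2),
    compactly supported on r < 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_field(x, y, particles, h):
    """Interpolate a particle-carried quantity A at (x, y):
    A(x, y) = sum_j (m_j / rho_j) * A_j * W(|r - r_j|, h).
    particles: iterable of (xj, yj, mass, density, A) tuples."""
    total = 0.0
    for xj, yj, m, rho, a in particles:
        total += m / rho * a * w_cubic(math.hypot(x - xj, y - yj), h)
    return total
```

Evaluating `sph_field` on a pixel grid yields exactly the kind of smooth projected image the package produces.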
Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows
NASA Astrophysics Data System (ADS)
Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.
2017-11-01
In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model (TFM). Two problems are simulated: particles in frozen homogeneous isotropic turbulence (HIT), and cluster-induced turbulence (CIT). The convergence of the methods under grid refinement is found to depend on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results exhibit dependence on the techniques used to extract statistics for the particle phase. For HIT, converging both EE methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges when trying to extract converged, parameter-independent statistics due to the presence of shocks in the particle phase. This work was supported by the National Science Foundation and the National Energy Technology Laboratory.
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
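The coarse-graining step can be illustrated with a simple parcel scaling: replacing w real particles by one statistical parcel whose diameter grows as w^(1/3) conserves the total solids volume while cutting the tracked-particle count by a factor of w. The function below is a hypothetical sketch of that bookkeeping, not the authors' implementation.

```python
def coarse_grain(d_p, n_real, w):
    """Coarse-grain n_real real particles of diameter d_p into parcels.

    Each parcel statistically represents w real particles; scaling the
    parcel diameter as w**(1/3) * d_p keeps the total solids volume
    n * d**3 unchanged. Returns (parcel count, parcel diameter).
    """
    n_parcel = n_real / w
    d_parcel = w ** (1.0 / 3.0) * d_p
    return n_parcel, d_parcel
```

With w = 8, eight million 100 μm particles become one million 200 μm parcels carrying the same solids volume.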
Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH
NASA Astrophysics Data System (ADS)
Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.
2018-01-01
Smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, in most simulations, uniform particle distributions are used and multi-resolution, which can obviously improve the local accuracy and the overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied and it allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum, and energy is ensured. Based on the error analysis, the splitting technique is designed to allow optimal accuracy at the interface between the coarse and refined particles, and this is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated by five basic cases, which demonstrate that the present SPH model with a particle splitting technique is of high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
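The mother-to-four-daughters split can be sketched as follows: daughters are placed symmetrically around the mother, each carrying a quarter of the mass and the mother's velocity, so total mass, momentum, and centre of mass are conserved by construction. The placement offset (eps) and daughter smoothing-length ratio (alpha) are illustrative parameters, not the values derived from the paper's error analysis.

```python
def split_particle(mass, pos, vel, h, eps=0.35, alpha=0.6):
    """Split a mother SPH particle into four daughter particles (2D).

    Daughters sit on the corners of a square of half-width eps*h centred
    on the mother; each gets mass/4, the mother's velocity, and smoothing
    length alpha*h. The symmetric placement conserves mass, momentum,
    and the centre of mass exactly.
    """
    x, y = pos
    offsets = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    return [
        {"m": mass / 4.0,
         "pos": (x + sx * eps * h, y + sy * eps * h),
         "vel": vel,
         "h": alpha * h}
        for sx, sy in offsets
    ]
```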
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
Höfler, K; Schwarzer, S
2000-06-01
Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)] we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular workstations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results both in the viscous and inertial regime and we find very satisfactory agreement.
Petsev, Nikolai Dimitrov; Leal, L. Gary; Shell, M. Scott
2017-12-21
Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely-resolved (e.g. molecular dynamics) and coarse-grained (e.g. continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 084115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics (SDPD). An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid-particle flows, including an Euler-Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
NASA Astrophysics Data System (ADS)
Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.
2017-07-01
We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.
Smoothed-particle hydrodynamics and nonequilibrium molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoover, W. G.; Hoover, C. G.
1993-08-01
Gingold, Lucy, and Monaghan invented a grid-free version of continuum mechanics, "smoothed-particle hydrodynamics," in 1977. It is a likely contributor to "hybrid" simulations combining atomistic and continuum simulations. We describe applications of this particle-based continuum technique from the closely-related standpoint of nonequilibrium molecular dynamics. We compare chaotic Lyapunov spectra for atomistic solids and fluids with those which characterize a two-dimensional smoothed-particle fluid system.
New methods to detect particle velocity and mass flux in arc-heated ablation/erosion facilities
NASA Technical Reports Server (NTRS)
Brayton, D. B.; Bomar, B. W.; Seibel, B. L.; Elrod, P. D.
1980-01-01
Arc-heated flow facilities with injected particles are used to simulate the erosive and ablative/erosive environments encountered by spacecraft re-entry through fog, clouds, thermo-nuclear explosions, etc. Two newly developed particle diagnostic techniques used to calibrate these facilities are discussed. One technique measures particle velocity and is based on the detection of thermal radiation and/or chemiluminescence from the hot seed particles in a model ablation/erosion facility. The second technique measures a local particle rate, which is proportional to local particle mass flux, in a dust erosion facility by photodetecting and counting the interruptions of a focused laser beam by individual particles.
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate an approximately 50-fold increase in algorithm efficiency.
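The binning idea can be sketched with a standard bounding-and-thinning construction: particles are grouped into bins by removal rate, each bin is sampled with its maximum rate, and an accept-reject step thins candidates back to their true per-particle probability rate·dt. The binning and thinning details here are illustrative assumptions, not the paper's exact Binned Algorithm.

```python
import random

def binned_removal_step(particles, rate_fn, dt, nbins=4, rng=random):
    """One stochastic removal step using rate bins with accept-reject.

    Each particle is removed with probability rate_fn(p) * dt, but the
    expensive per-particle rate only needs to be compared against the
    bin's bounding rate r_max, which is what makes binning cheap when
    rates within a bin are similar.
    """
    rates = [rate_fn(p) for p in particles]
    lo, hi = min(rates), max(rates) + 1e-300
    width = (hi - lo) / nbins or 1.0
    bins = [[] for _ in range(nbins)]
    for p, r in zip(particles, rates):
        k = min(int((r - lo) / width), nbins - 1)
        bins[k].append((p, r))
    survivors = []
    for members in bins:
        if not members:
            continue
        r_max = max(r for _, r in members)   # bounding rate for this bin
        for p, r in members:
            # candidate with the bounding probability, then thin to the true rate
            if rng.random() < r_max * dt and rng.random() < r / r_max:
                continue                      # particle removed
            survivors.append(p)
    return survivors
```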
NASA Astrophysics Data System (ADS)
Smith, Lyndon N.; Smith, Melvyn L.
2000-10-01
Particulate materials undergo processing in many industries, and therefore there are significant commercial motivators for attaining improvements in the flow and packing behavior of powders. This can be achieved by modeling the effects of particle size, friction, and most importantly, particle shape or morphology. The method presented here for simulating powders employs a random number generator to construct a model of a random particle by combining a sphere with a number of smaller spheres. The resulting 3D model particle has a nodular type of morphology, which is similar to that exhibited by the atomized powders that are used in the bulk of powder metallurgy (PM) manufacture. The irregularity of the model particles is dependent upon vision system data gathered from microscopic analysis of real powder particles. A methodology is proposed whereby randomly generated model particles of various sizes and irregularities can be combined in a random packing simulation. The proposed Monte Carlo technique would allow incorporation of the effects of gravity, wall friction, and inter-particle friction. The improvements in simulation realism that this method is expected to provide would prove useful for controlling powder production, and for predicting die fill behavior during the production of PM parts.
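The sphere-plus-smaller-spheres construction can be sketched as follows: a base sphere carries randomly oriented nodules whose centres sit on its surface, giving the nodular morphology described above. The orientation sampling and the nodule-size range are illustrative assumptions (the thesis calibrates irregularity against vision-system data), and all names are hypothetical.

```python
import math
import random

def make_nodular_particle(base_radius, n_nodules, nodule_scale=0.4, rng=random):
    """Build a model powder particle as a list of (centre, radius) spheres.

    One base sphere at the origin plus n_nodules smaller spheres whose
    centres are distributed uniformly over the base surface, with radii
    drawn between 0.5 and 1.0 times nodule_scale * base_radius.
    """
    spheres = [((0.0, 0.0, 0.0), base_radius)]
    for _ in range(n_nodules):
        theta = math.acos(2.0 * rng.random() - 1.0)   # uniform over the sphere
        phi = 2.0 * math.pi * rng.random()
        r_nod = nodule_scale * base_radius * (0.5 + rng.random() / 2.0)
        centre = (base_radius * math.sin(theta) * math.cos(phi),
                  base_radius * math.sin(theta) * math.sin(phi),
                  base_radius * math.cos(theta))
        spheres.append((centre, r_nod))
    return spheres
```

Many such particles, drawn with different sizes and nodule counts, would feed the proposed random packing simulation.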
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy requires long computation times. A variance reduction technique (VRT) was implemented in the MC code to shorten this duration. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history count. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT. The validated MC model simulation was then repeated with electron range rejection controlled by global electron cut-off energies of 1, 2, and 5 MeV, using 20 × 10^7 particle histories. The 5 MeV range rejection produced the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was used in the particle-history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7 histories. In this study, with the 5 MeV electron cut-off and 10 × 10^7 particle histories, the simulation was four times faster than the non-VRT calculation, with only 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation times while preserving accuracy.
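Electron range rejection, the VRT parameter optimised in this abstract, reduces during transport to a simple termination test: if an electron's energy is below the global cut-off and its residual range cannot carry it to any region of interest, the history is killed and the energy deposited locally. The sketch below illustrates the condition only; the `csda_range` lookup is an assumed user-supplied function, not EGSnrc's internal implementation.

```python
def range_rejection(energy_mev, dist_to_boundary_cm, e_cutoff_mev, csda_range):
    """Return True if the electron history can be terminated (range rejection).

    energy_mev          : current kinetic energy of the electron [MeV]
    dist_to_boundary_cm : distance to the nearest region of interest [cm]
    e_cutoff_mev        : global electron cut-off energy (e.g. 1, 2, or 5 MeV)
    csda_range          : assumed lookup of continuous-slowing-down range [cm]
    """
    return (energy_mev < e_cutoff_mev
            and csda_range(energy_mev) < dist_to_boundary_cm)
```

Raising the cut-off (e.g. from 1 to 5 MeV) lets more histories satisfy the test earlier, which is why the 5 MeV setting gave the largest speedup.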
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed hybrid particle-field simulation technique in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
Particle simulation of plasmas and stellar systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajima, T.; Clark, A.; Craddock, G.G.
1985-04-01
A computational technique is introduced which allows the student and researcher an opportunity to observe the physical behavior of a class of many-body systems. A series of examples is offered which illustrates the diversity of problems that may be studied using particle simulation. These simulations were in fact assigned as homework in a course on computational physics.
NASA Astrophysics Data System (ADS)
Raymond, Samuel J.; Jones, Bruce; Williams, John R.
2018-01-01
A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, so no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, since SPH is a purely particle-based method while MPM combines particles with a mesh, this coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transform information from one method to another.
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
Simulation of particle size distributions in Polar Mesospheric Clouds from Microphysical Models
NASA Astrophysics Data System (ADS)
Thomas, G. E.; Merkel, A.; Bardeen, C.; Rusch, D. W.; Lumpe, J. D.
2009-12-01
The size distribution of ice particles is perhaps the most important observable aspect of microphysical processes in Polar Mesospheric Cloud (PMC) formation and evolution. A conventional technique to derive such information is from optical observation of scattering, either passive solar scattering from photometric or spectrometric techniques, or active backscattering by lidar. We present simulated size distributions from two state-of-the-art models using CARMA sectional microphysics: WACCM/CARMA, in which CARMA is interactively coupled with WACCM3 (Bardeen et al, 2009), and stand-alone CARMA forced by WACCM3 meteorology (Merkel et al, this meeting). Both models provide well-resolved size distributions of ice particles as a function of height, location and time for realistic high-latitude summertime conditions. In this paper we present calculations of the UV scattered brightness at multiple scattering angles as viewed by the AIM Cloud Imaging and Particle Size (CIPS) satellite experiment. These simulations are then considered discretely-sampled “data” for the scattering phase function, which are inverted using a technique (Lumpe et al, this meeting) to retrieve particle size information. We employ a T-matrix scattering code which applies to a wide range of non-sphericity of the ice particles, using the conventional idealized prolate/oblate spheroidal shape. This end-to-end test of the relatively new scattering phase function technique provides insight into both the retrieval accuracy and the information content in passive remote sensing of PMC.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If that algorithm is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect value compared with theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
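The unequal-weight pairing problem described above is commonly handled with a rejection step: both partners are scattered as in an equal-weight collision, but each particle accepts its post-collision velocity only with probability proportional to its partner's weight. The sketch below shows this common variant as an illustration; it is not necessarily the exact statistics of Miller and Combi, and the `scatter` callback (returning equal-weight post-collision velocities) is an assumption.

```python
import random

def collide_weighted_pair(v1, w1, v2, w2, scatter, rng=random):
    """Binary collision between particles of unequal statistical weight (sketch).

    v1, v2  : pre-collision velocities
    w1, w2  : statistical weights (particles per computational particle)
    scatter : assumed callback returning equal-weight post-collision velocities
    """
    w_max = max(w1, w2)
    v1_new, v2_new = scatter(v1, v2)
    # Each particle accepts its update with probability w_other / w_max,
    # so momentum exchange is correct on average despite unequal weights.
    if rng.random() < w2 / w_max:
        v1 = v1_new
    if rng.random() < w1 / w_max:
        v2 = v2_new
    return v1, v2
```

For equal weights both acceptance probabilities are 1 and the scheme reduces to the unweighted Takizuka-Abe update.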
NASA Astrophysics Data System (ADS)
Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao
2017-10-01
UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle in cell (PIC), fluid, and Fokker-Planck models, along with their variants and hybrid methods. Through C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations via BLAS. A three-dimensional particle in cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic situations, respectively, are presented to demonstrate the validity and performance of the UPSF code.
Boda, Dezső; Gillespie, Dirk
2012-03-13
We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
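The flux-continuity side of the NP+LEMC iteration rests on the discrete Nernst-Planck flux between neighbouring volume elements, J = -(D/kT) c dμ/dx. A minimal sketch in reduced units follows; taking the face concentration as the arithmetic average of the two adjacent elements is an assumption for illustration, and the paper's actual discretization may differ.

```python
def np_flux(c, mu, D, dx, kT=1.0):
    """Discrete Nernst-Planck flux between adjacent volume elements (sketch).

    c  : concentrations in each volume element
    mu : electrochemical potentials in each volume element (from LEMC)
    D  : diffusion coefficient, dx : element spacing, kT : thermal energy

    Returns one flux value per interior face. In the NP+LEMC iteration the
    mu profile would be adjusted until these face fluxes are all equal
    (flux continuity in steady state).
    """
    return [-(D / kT) * 0.5 * (c[i] + c[i + 1]) * (mu[i + 1] - mu[i]) / dx
            for i in range(len(c) - 1)]
```

A uniform concentration with a linearly rising potential gives a uniform negative flux, i.e. steady transport down the electrochemical-potential gradient.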
Layout-aware simulation of soft errors in sub-100 nm integrated circuits
NASA Astrophysics Data System (ADS)
Balbekov, A.; Gorbunov, M.; Bobkov, S.
2016-12-01
Single Event Transient (SET) caused by charged particle traveling through the sensitive volume of integral circuit (IC) may lead to different errors in digital circuits in some cases. In technologies below 180 nm, a single particle can affect multiple devices causing multiple SET. This fact adds the complexity to fault tolerant devices design, because the schematic design techniques become useless without their layout consideration. The most common layout mitigation technique is a spatial separation of sensitive nodes of hardened circuits. Spatial separation decreases the circuit performance and increases power consumption. Spacing should thus be reasonable and its scaling follows the device dimensions' scaling trend. This paper presents the development of the SET simulation approach comprised of SPICE simulation with "double exponent" current source as SET model. The technique uses layout in GDSII format to locate nearby devices that can be affected by a single particle and that can share the generated charge. The developed software tool automatizes multiple simulations and gathers the produced data to present it as the sensitivity map. The examples of conducted simulations of fault tolerant cells and their sensitivity maps are presented in this paper.
SPH Numerical Modeling for the Wave-Thin Structure Interaction
NASA Astrophysics Data System (ADS)
Ren, Xi-feng; Sun, Zhao-chen; Wang, Xing-gang; Liang, Shu-xiu
2018-04-01
In this paper, a numerical model of 2D weakly compressible smoothed particle hydrodynamics (WCSPH) is developed to simulate the interaction between waves and thin structures. A new color domain particle (CDP) technique is proposed to overcome the difficulties of applying the ghost-particle method to solid boundaries on thin structures. The new technique can handle zero-thickness structures. To apply this enforcing technique, the computational fluid domain is divided into sub-domains, i.e., boundary domains and internal domains. A color value is assigned to each particle, encoding the domains to which the particle belongs and with whose particles it can interact. A particle near a thin boundary is thus prevented from interacting with particles on the other side of the structure. This makes it possible to model thin structures, or structures of negligible thickness. The proposed WCSPH module is validated for a still water tank, divided by a thin plate at the middle section, with different water levels in the sub-domains, and is applied to simulate the interaction between regular waves and a perforated vertical plate. Finally, the computation is carried out for the interaction of waves with a submerged twin-horizontal plate. It is shown that the numerical results agree well with experimental data in terms of the pressure distribution, pressure time series, and wave transmission.
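The color idea can be made concrete with a bitmask: a minimal sketch, assuming each particle's color is a set of domain bits and that plate particles carry the bits of both neighbouring fluid sub-domains. Two fluid particles on opposite sides of a zero-thickness plate share no bits, so their kernel contributions are skipped even when they lie within smoothing range of each other. The bit layout here is hypothetical, not the paper's encoding.

```python
# Hypothetical domain bits for a tank split by a zero-thickness plate.
FLUID_LEFT, FLUID_RIGHT, PLATE = 0b001, 0b010, 0b100

def can_interact(color_a, color_b):
    """Color-domain-particle (CDP) interaction test (illustrative sketch).

    Two particles contribute to each other's SPH sums only if their color
    bitmasks share at least one domain bit.
    """
    return (color_a & color_b) != 0
```

In an SPH loop this test would be applied per neighbour pair before evaluating the kernel, so particles across the thin structure are excluded without any ghost-particle construction.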
Modeling and simulation of dust behaviors behind a moving vehicle
NASA Astrophysics Data System (ADS)
Wang, Jingfang
Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test the simulation in different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle motion controlled by users from the GUI. I have also tested the factors that influence the physical behaviors and graphical appearance of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane workstation. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive-over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques.
Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-looking dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation; for example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both the time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculation needed for convergence of the numerical integration of the fluid dynamics, which usually takes about 4-5 minutes to reach steady state.
A method to reproduce alpha-particle spectra measured with semiconductor detectors.
Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín
2010-01-01
A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra.
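The final convolution step described in the abstract, combining the simulated energy-loss distribution with the spectrometer's response, can be sketched directly with NumPy. The array shapes and normalization below are illustrative, not the paper's actual binning.

```python
import numpy as np

def reproduce_spectrum(mc_distribution, detector_response):
    """Convolve a Monte Carlo energy-loss distribution with the detector
    response to obtain the measured alpha-particle spectrum (sketch).

    mc_distribution   : simulated distribution from source/entrance-window tracks
    detector_response : theoretical spectrometer contribution (e.g. noise kernel)
    """
    spectrum = np.convolve(mc_distribution, detector_response, mode="full")
    return spectrum / spectrum.sum()   # normalize to unit area
```

With a delta-like response the output is just the normalized Monte Carlo distribution; a broader response kernel reproduces the characteristic low-energy tailing of measured alpha peaks.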
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free-surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing the free-surface flow of an Oldroyd-B fluid are considered, and approximated by an improved SPH scheme. This includes a correction of the kernel gradient and the introduction of Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, simulations of an impacting drop, the injection molding of a C-shaped cavity, and extrudate swell are conducted. The numerical results obtained are compared with those simulated by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress method) for removing the tensile instability is further performed. All numerical results agree well with the available data.
NASA Astrophysics Data System (ADS)
Bolhuis, Peter
Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.
Augmenting Sand Simulation Environments through Subdivision and Particle Refinement
NASA Astrophysics Data System (ADS)
Clothier, M.; Bailey, M.
2012-12-01
Recent advances in computer graphics and parallel processing hardware have provided disciplines with new methods to evaluate and visualize data. These advances have proven useful for earth and planetary scientists as many researchers are using this hardware to process large amounts of data for analysis. As such, this has provided opportunities for collaboration between computer graphics and the earth sciences. Through collaboration with the Oregon Space Grant and IGERT Ecosystem Informatics programs, we are investigating techniques for simulating the behavior of sand. We are also collaborating with the Jet Propulsion Laboratory's (JPL) DARTS Lab to exchange ideas and gain feedback on our research. The DARTS Lab specializes in simulation of planetary vehicles, such as the Mars rovers. Their simulations utilize a virtual "sand box" to test how a planetary vehicle responds to different environments. Our research builds upon this idea to create a sand simulation framework so that planetary environments, such as the harsh, sandy regions on Mars, are more fully realized. More specifically, we are focusing our research on the interaction between a planetary vehicle, such as a rover, and the sand beneath it, providing further insight into its performance. Unfortunately, this can be a computationally complex problem, especially if trying to represent the enormous quantities of sand particles interacting with each other. However, through the use of high-performance computing, we have developed a technique to subdivide regions of actively interacting sand across a large landscape. Similar to a Level of Detail (LOD) technique, we only subdivide regions of a landscape where sand particles are actively interacting with another object. While the sand is within this subdivision window and moves closer to the surface of the interacting object, the sand region subdivides into smaller regions until individual sand particles are left at the surface. 
As an example, let's say there is a planetary rover interacting with our sand simulation environment. Sand that is actively interacting with a rover wheel will be represented as individual particles, whereas sand further under the surface will be represented by larger regions of sand. This technique allows many particles to be represented without the full computational complexity of tracking each one individually. In developing this method, we have further generalized these subdivision regions into any volumetric area suitable for use in the simulation. This is a further improvement of our method, as it allows for more compact subdivision regions. This helps to fine-tune the simulation so that more emphasis can be placed on regions of actively interacting sand. We feel that through the generalization of our technique, our research can provide other opportunities within the earth and planetary sciences. Through collaboration with our academic colleagues, we continue to refine our technique and look for other opportunities to utilize our research.
Numerical analysis of wet separation of particles by density differences
NASA Astrophysics Data System (ADS)
Markauskas, D.; Kruggel-Emden, H.
2017-07-01
Wet particle separation is widely used in mineral processing and plastic recycling to separate mixtures of particulate materials into further usable fractions due to density differences. This work presents efforts aiming to numerically analyze the wet separation of particles with different densities. In the current study the discrete element method (DEM) is used for the solid phase while the smoothed particle hydrodynamics (SPH) is used for modeling of the liquid phase. The two phases are coupled by the use of a volume averaging technique. In the current study, simulations of spherical particle separation were performed. In these simulations, a set of generated particles with two different densities is dropped into a rectangular container filled with liquid. The results of simulations with two different mixtures of particles demonstrated how separation depends on the densities of particles.
Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates
NASA Technical Reports Server (NTRS)
Walton, Otis R.; Johnson, Scott M.
2010-01-01
The charge-spot technique for modeling the electrostatic forces acting between charged fine particles entails treating electric charges on individual particles as small sets of discrete point charges, located near their surfaces. This is in contrast to existing models, which assume a single charge per particle. The charge-spot technique more accurately describes the forces, torques, and moments that act on triboelectrically charged particles, especially image-charge forces acting near conducting surfaces. The discrete element method (DEM) simulation uses a truncation range to limit the number of near-neighbor charge spots via a shifted and truncated Coulomb interaction potential. The model can be readily adapted to account for induced dipoles in uncharged particles (and thus dielectrophoretic forces) by allowing two charge spots of opposite signs to be created in response to an external electric field. To account for virtual overlap during contacts, the model can be set to automatically scale down the effective charge in proportion to the amount of virtual overlap of the charge spots. This can be accomplished by mimicking the behavior of two real overlapping spherical charge clouds, or with other approximate forms. The charge-spot method much more closely resembles the real non-uniform surface charge distributions that result from tribocharging than simpler approaches, which just assign a single total charge to a particle. With the charge-spot model, a single particle may have a zero net charge but still have both positive and negative charge spots, which could produce substantial forces on the particle when it is close to other charges, when it is in an external electric field, or when near a conducting surface. Since the charge-spot model can contain any number of charges per particle, it can be used with only one or two charge spots per particle to simulate charging from solar wind bombardment, or with several charge spots to simulate triboelectric charging. 
Adhesive image-charge forces acting on charged particles touching conducting surfaces can be up to 50 times stronger if the charge is located in discrete spots on the particle surface instead of being distributed uniformly over the surface of the particle, as is assumed by most other models. Besides being useful in modeling particulates in space and distant objects, this modeling technique is useful for electrophotography (used in copiers) and in simulating the effects of static charge in the pulmonary delivery of fine dry powders.
NASA Astrophysics Data System (ADS)
Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.
2016-02-01
Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general a single possible path is not of interest but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the ability to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
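The binning-and-merging step that keeps the trajectory count bounded can be sketched as follows: per assimilation cycle, each (bin, probability) entry is advected through every ensemble member, and arrivals in the same destination bin are merged. The integer bin indexing and the equal weighting of ensemble members are assumptions of this sketch, not details from the paper.

```python
from collections import defaultdict

def probability_map_cycle(prob_by_bin, ensemble_flows):
    """One assimilation cycle of probability-weighted particle mapping (sketch).

    prob_by_bin    : dict mapping a bin index to the probability that the
                     traced particle currently occupies that bin
    ensemble_flows : assumed callbacks, each advecting a bin index to its
                     destination bin over one cycle (one per ensemble member)
    """
    out = defaultdict(float)
    w = 1.0 / len(ensemble_flows)      # equal likelihood per ensemble member
    for b, p in prob_by_bin.items():
        for flow in ensemble_flows:
            out[flow(b)] += p * w      # coincident arrivals merge into one entry
    return dict(out)
```

Because merged bins are the input to the next cycle, the map size stays bounded by the number of bins instead of growing exponentially with cycles, which is what makes interactive probability mapping feasible.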
Two-way coupled SPH and particle level set fluid simulation.
Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald
2008-01-01
Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two way mixing between dense SPH volumes and grid-based liquid representations.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100× to 1000×) and even faster on GPU (up to ~10^5×). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.
NASA Astrophysics Data System (ADS)
Bi, L.
2016-12-01
Atmospheric remote sensing based on the Lidar technique fundamentally relies on knowledge of the backscattering of light by particulate matter in the atmosphere. This talk starts with a review of the current capabilities of electromagnetic wave scattering simulations to determine the backscattering optical properties of irregular particles, such as the backscatter and depolarization ratios. This will be followed by a discussion of possible pitfalls in the relevant simulations. The talk will then be concluded with reports on the latest advancements in computational techniques. In addition, we summarize the laws governing the backscattering optical properties of aerosols with respect to particle geometries, particle sizes, and mixing rules. These advancements will be applied to the analysis of Lidar observation data to reveal the state and possible microphysical processes of various aerosols.
Kanarska, Yuliya; Walton, Otis
2015-11-30
Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle-particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.
Plasma Modeling with Speed-Limited Particle-in-Cell Techniques
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Werner, G. R.; Cary, J. R.; Stoltz, P. H.
2017-10-01
Speed-limited particle-in-cell (SLPIC) modeling is a new particle simulation technique for modeling systems wherein numerical constraints, e.g. limitations on timestep size required for numerical stability, are significantly more restrictive than is needed to model slower kinetic processes of interest. SLPIC imposes artificial speed-limiting behavior on fast particles whose kinetics do not play meaningful roles in the system dynamics, thus enabling larger simulation timesteps and more rapid modeling of such plasma discharges. The use of SLPIC methods to model plasma sheath formation and the free expansion of plasma into vacuum will be demonstrated. Wallclock times for these simulations, relative to conventional PIC, are reduced by a factor of 2.5 for the plasma expansion problem and by over 6 for the sheath formation problem; additional speedup is likely possible. Physical quantities of interest are shown to be correct for these benchmark problems. Additional SLPIC applications will also be discussed. Supported by US DoE SBIR Phase I/II Award DE-SC0015762.
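As a rough illustration of the speed-limiting idea described above, the sketch below scales the displacement of any particle faster than a chosen limit so that no particle moves farther than v_max·dt per step. This is a hypothetical minimal helper, not the published SLPIC algorithm, which additionally reweights particles so that the slow kinetics of interest are unaffected.

```python
def slpic_push(x, v, dt, v_max):
    """Advance 1D particle positions with an artificial speed limit.

    Particles faster than v_max have their displacement scaled by
    beta = v_max/|v| < 1, so no particle outruns the distance v_max*dt
    in one step, allowing a larger stable timestep. (Sketch only; the
    real method compensates statistically for the slowed particles.)
    """
    x_new, betas = [], []
    for xi, vi in zip(x, v):
        speed = abs(vi)
        beta = min(1.0, v_max / speed) if speed > 0 else 1.0
        x_new.append(xi + beta * vi * dt)
        betas.append(beta)
    return x_new, betas
```

A fast particle (speed 10, limit 2) is throttled to an effective speed of 2, while slow particles advance unmodified.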
Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2018-05-01
A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the range of the plasma parameter from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with initial Maxwell-Boltzmann (MB) distribution are found to stay in the MB distribution during the whole simulation time, and the mass of ions does not significantly affect the relaxation time of electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.
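A particle-particle model, unlike particle-in-cell, evaluates the Coulomb interaction between every pair directly. The brute-force O(N²) pair sum can be sketched as below; the Plummer-style softening parameter is an assumption of this sketch (to tame the r → 0 singularity), not necessarily the paper's regularization.

```python
def coulomb_accels(pos, charge, mass, k=1.0, soft=1e-3):
    """Brute-force particle-particle Coulomb accelerations in 3D, O(N^2).

    `soft` is a Plummer-style softening length added in quadrature to
    the pair distance to avoid divergence at zero separation.
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][c] - pos[j][c] for c in range(3)]
            r2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2 + soft ** 2
            f = k * charge[i] * charge[j] / r2 ** 1.5  # (force magnitude)/r
            for c in range(3):
                acc[i][c] += f * d[c] / mass[i]   # action on i ...
                acc[j][c] -= f * d[c] / mass[j]   # ... equal and opposite on j
    return acc
```

This per-step O(N²) cost is exactly why GPU parallelization, as mentioned above, is what makes the particle-particle model feasible.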
A Novel Approach to Visualizing Dark Matter Simulations.
Kaehler, R; Hahn, O; Abel, T
2012-12-01
In recent decades, cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle-based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point-based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (smoothed particle hydrodynamics) codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular, they preserve caustics, regions of high density that emerge when several streams of dark matter particles share the same location in space, indicating the formation of structures such as sheets, filaments, and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
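The tessellation-based density estimate described above reduces, per cell, to mass over tetrahedron volume. A minimal sketch of that per-cell computation (generic geometry code, not the paper's GPU implementation):

```python
def tet_volume(p0, p1, p2, p3):
    """Volume of a tetrahedron from its four vertices: |det(edge matrix)|/6."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) / 6.0

def tet_density(cell_mass, p0, p1, p2, p3):
    """Density carried by one tessellation cell. As gravity warps the
    mesh, shrinking cells drive the density up, which is how caustics
    show up as near-singular densities in the rendering."""
    return cell_mass / tet_volume(p0, p1, p2, p3)
```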
Radiation dominated acoustophoresis driven by surface acoustic waves.
Guo, Jinhong; Kang, Yuejun; Ai, Ye
2015-10-01
Acoustophoresis-based particle manipulation in microfluidics has gained increasing attention in recent years. Despite the fact that experimental studies have been extensively performed to demonstrate this technique for various microfluidic applications, numerical simulation of acoustophoresis driven by surface acoustic waves (SAWs) has still been largely unexplored. In this work, a numerical model taking into account the acoustic-piezoelectric interaction was developed to simulate the generation of a standing surface acoustic wave (SSAW) field and predict the acoustic pressure field in the liquid. Acoustic radiation dominated particle tracing was performed to simulate acoustophoresis of particles with different sizes undergoing a SSAW field. A microfluidic device composed of two interdigital transducers (IDTs) for SAW generation and a microfluidic channel was fabricated for experimental validation. Numerical simulations could well capture the focusing phenomenon of particles to the pressure nodes in the experimental observation. Further comparison of particle trajectories demonstrated considerable quantitative agreement between numerical simulations and experimental results, with the applied voltage as a fitting parameter. Particle switching was also demonstrated using the fabricated device, which could be further developed into an active particle sorting device.
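The size dependence of the focusing above follows from the standard small-particle radiation force expression for a 1D standing wave (a textbook result in the Gor'kov/Bruus form, offered for context; the paper instead solves the full piezo-acoustic problem numerically):

```python
import math

def contrast_factor(kappa_ratio, rho_ratio):
    """Acoustic contrast factor Phi = f1/3 + f2/2, with
    f1 = 1 - kappa_p/kappa_f and f2 = 2*(rho_p/rho_f - 1)/(2*rho_p/rho_f + 1).
    Its sign determines whether particles collect at pressure nodes or antinodes."""
    f1 = 1.0 - kappa_ratio
    f2 = 2.0 * (rho_ratio - 1.0) / (2.0 * rho_ratio + 1.0)
    return f1 / 3.0 + f2 / 2.0

def radiation_force(phi, k, a, e_ac, y):
    """Time-averaged 1D radiation force F = 4*pi*Phi*k*a^3*E_ac*sin(2ky).
    The a^3 scaling is why larger particles focus to the nodes faster."""
    return 4.0 * math.pi * phi * k * a ** 3 * e_ac * math.sin(2.0 * k * y)
```

A neutrally buoyant particle with the fluid's compressibility has zero contrast factor and feels no radiation force.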
NASA Astrophysics Data System (ADS)
Casse, F.; van Marle, A. J.; Marcowith, A.
2018-01-01
We present simulations of magnetized astrophysical shocks taking into account the interplay between the thermal plasma of the shock and supra-thermal particles. Such interaction is depicted by combining a grid-based magneto-hydrodynamics description of the thermal fluid with particle-in-cell techniques devoted to the dynamics of supra-thermal particles. This approach, which incorporates the use of adaptive mesh refinement features, is potentially a key to simulate astrophysical systems on spatial scales that are beyond the reach of pure particle-in-cell simulations. We consider non-relativistic super-Alfvénic shocks with various magnetic field obliquities. We recover all the features from previous studies when the magnetic field is parallel to the normal to the shock. In contrast with previous particle-in-cell and hybrid simulations, we find that particle acceleration and magnetic field amplification also occur when the magnetic field is oblique to the normal to the shock, but on larger timescales than in the parallel case. We show that in our oblique shock simulations the streaming of supra-thermal particles induces a corrugation of the shock front. Such oscillations of both the shock front and the magnetic field then locally help the particles to enter the upstream region, to initiate a non-resonant streaming instability, and finally to induce diffuse particle acceleration.
Determination of meteor parameters using laboratory simulation techniques
NASA Technical Reports Server (NTRS)
Friichtenicht, J. F.; Becker, D. G.
1973-01-01
Atmospheric entry of meteoritic bodies is conveniently and accurately simulated in the laboratory by techniques which employ the charging and electrostatic acceleration of macroscopic solid particles. Velocities from below 10 to above 50 km/s are achieved for particle materials which are elemental meteoroid constituents or mineral compounds with characteristics similar to those of meteoritic stone. The velocity, mass, and kinetic energy of each particle are measured nondestructively, after which the particle enters a target gas region. Because of the small particle size, free molecule flow is obtained. At typical operating pressures (0.1 to 0.5 torr), complete particle ablation occurs over distances of 25 to 50 cm; the spatial extent of the atmospheric interaction phenomena is correspondingly small. Procedures have been developed for measuring the spectrum of light from luminous trails and the values of fundamental quantities defined in meteor theory. It is shown that laboratory values for iron are in excellent agreement with those for 9 to 11 km/s artificial meteors produced by rocket injection of iron bodies into the atmosphere.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Partial molar enthalpies and reaction enthalpies from equilibrium molecular dynamics simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnell, Sondre K.; Department of Chemical and Biomolecular Engineering, University of California, Berkeley, California 94720; Department of Chemistry, Faculty of Natural Science and Technology, Norwegian University of Science and Technology, 4791 Trondheim
2014-10-14
We present a new molecular simulation technique for determining partial molar enthalpies in mixtures of gases and liquids from single simulations, without relying on particle insertions, deletions, or identity changes. The method can also be applied to systems with chemical reactions. We demonstrate our method for binary mixtures of Weeks-Chandler-Andersen particles by comparing with conventional simulation techniques, as well as for a simple model that mimics a chemical reaction. The method considers small subsystems inside a large reservoir (i.e., the simulation box), and uses the construction of Hill to compute properties in the thermodynamic limit from small-scale fluctuations. Results obtained with the new method are in excellent agreement with those from previous methods. Especially for modeling chemical reactions, our method can be a valuable tool for determining reaction enthalpies directly from a single MD simulation.
Hybrid modeling method for a DEP based particle manipulation.
Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad
2013-01-30
In this paper, a new modeling approach for dielectrophoresis (DEP)-based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the updated information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight, and electrical properties. The first results obtained are consistent with experimental results.
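For context, the time-averaged DEP force on a small spherical particle is commonly modeled via the Clausius-Mossotti factor (a standard textbook expression, not necessarily the exact force model inside the ANSYS/MATLAB loop above):

```python
import math

def clausius_mossotti(eps_p, eps_m):
    """Clausius-Mossotti factor K = (ep - em)/(ep + 2*em) for real
    permittivities; its sign decides positive vs. negative DEP
    (attraction to, or repulsion from, high-field regions)."""
    return (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

def dep_force(eps_m, radius, cm_factor, grad_e2):
    """Time-averaged DEP force magnitude F = 2*pi*em*r^3*K*grad(|E|^2)."""
    return 2.0 * math.pi * eps_m * radius ** 3 * cm_factor * grad_e2
```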
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, Gennady; /Fermilab
CST Particle Studio combines electromagnetic field simulation, multi-particle tracking, adequate post-processing, and an advanced probabilistic emission model, which is the most important new capability in multipactor simulation. The emission model includes in the simulation the stochastic properties of emission and adds primary electron elastic and inelastic reflection from the surfaces. Simulations of multipactor in coaxial waveguides have been performed to study the effects of these innovations on the multipactor threshold and the range over which multipactor can occur. The results, compared with available previous experiments and simulations, as well as the technique of multipactor simulation with CST PS, are presented and discussed.
NASA Astrophysics Data System (ADS)
van Marle, Allard Jan; Casse, Fabien; Marcowith, Alexandre
2018-01-01
We present simulations of magnetized astrophysical shocks taking into account the interplay between the thermal plasma of the shock and suprathermal particles. Such interaction is depicted by combining a grid-based magnetohydrodynamics description of the thermal fluid with particle-in-cell techniques devoted to the dynamics of suprathermal particles. This approach, which incorporates the use of adaptive mesh refinement features, is potentially a key to simulating astrophysical systems on spatial scales that are beyond the reach of pure particle-in-cell simulations. We consider in this study non-relativistic shocks with various Alfvénic Mach numbers and magnetic field obliquities. We recover all the features of both magnetic field amplification and particle acceleration from previous studies when the magnetic field is parallel to the normal to the shock. In contrast with previous hybrid particle-in-cell simulations, we find that particle acceleration and magnetic field amplification also occur when the magnetic field is oblique to the normal to the shock, but on larger time-scales than in the parallel case. We show that in our simulations the suprathermal particles experience acceleration through a pre-heating process similar to shock drift acceleration, leading to a corrugation of the shock front. Such oscillations of the shock front and the magnetic field locally help the particles to enter the upstream region, to initiate a non-resonant streaming instability, and finally to induce diffuse particle acceleration.
A hybrid method with deviational particles for spatial inhomogeneous plasma
NASA Astrophysics Data System (ADS)
Yan, Bokai
2016-03-01
In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid-based fluid solver and a deviation part simulated by numerical particles. These particles, termed deviational particles, may carry either positive or negative weights. We combine the Monte Carlo method proposed in [31], a particle-in-cell method, and a macro-micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and is significantly more efficient than a PIC-DSMC method near the fluid regime.
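The Maxwellian-plus-deviation split can be sketched as follows, with signed particle weights reconstructing the departure from equilibrium. The (velocity, signed weight) list is an assumed representation for illustration, not the paper's data structure:

```python
import math

def maxwellian(v, n, u, t):
    """1D Maxwellian with density n, bulk velocity u, temperature t (kB = m = 1)."""
    return n / math.sqrt(2.0 * math.pi * t) * math.exp(-(v - u) ** 2 / (2.0 * t))

def reconstruct_f(v, n, u, t, dev_particles, bin_width):
    """Hybrid estimate f(v) ~ M(v) + (sum of signed deviational weights
    within bin_width of v) / bin_width. Negative weights let the
    particles represent depletion below the Maxwellian."""
    dev = sum(w for vp, w in dev_particles if abs(vp - v) < 0.5 * bin_width)
    return maxwellian(v, n, u, t) + dev / bin_width
```

Near the fluid regime, few deviational particles are needed because the deviation part is small, which is the source of the method's efficiency gain.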
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
NASA Astrophysics Data System (ADS)
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty severely limits the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
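The implicit part of such a splitting scheme can be sketched as a backward-Euler step on pairwise drag, solved by Gauss-Seidel sweeps over the neighbor pairs. The 1D velocities, unit masses, and constant drag coefficient xi below are simplifying assumptions; real lubrication forces diverge with the gap distance:

```python
def implicit_drag_step(v_old, pairs, xi, dt, tol=1e-12, max_sweeps=10000):
    """Solve v[i] = v_old[i] - dt*xi*sum_{j~i}(v[i] - v[j]) by sweeping
    over particles until the update stalls. The implicit form is stable
    however stiff xi is, unlike an explicit update, which is the point
    of splitting the lubrication forces off from the SPH forces."""
    n = len(v_old)
    nbrs = [[] for _ in range(n)]
    for a, b in pairs:
        nbrs[a].append(b)
        nbrs[b].append(a)
    v = list(v_old)  # initial guess: old velocities
    for _ in range(max_sweeps):
        delta = 0.0
        for i in range(n):
            new = (v_old[i] + dt * xi * sum(v[j] for j in nbrs[i])) \
                  / (1.0 + dt * xi * len(nbrs[i]))
            delta = max(delta, abs(new - v[i]))
            v[i] = new
        if delta < tol:
            break
    return v
```

For a single pair approaching head-on, the relative velocity is damped while total momentum is conserved, no matter how large xi*dt is.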
Fast Simulation of Electromagnetic Showers in the ATLAS Calorimeter: Frozen Showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barberio, E.; /Melbourne U.; Boudreau, J.
2011-11-29
One of the most time-consuming processes in simulating pp interactions in the ATLAS detector at the LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up the event simulation, several parametrisation methods are available in ATLAS. In this paper we present a short description of the frozen shower technique, together with some recent benchmarks and a comparison with full simulation. The expected high rate of proton-proton collisions in the ATLAS detector at the LHC requires large samples of simulated (Monte Carlo) events to study various physics processes. A detailed simulation of particle reactions ('full simulation') in the ATLAS detector is based on GEANT4 and is very accurate. However, due to the complexity of the detector, the high particle multiplicity, and GEANT4 itself, the average CPU time spent to simulate a typical QCD event in a pp collision is 20 minutes or more on modern computers. During detector simulation, most of the time is spent in the calorimeters (up to 70%), the bulk of which is required for electromagnetic particles in the electromagnetic (EM) part of the calorimeters. This is the motivation for fast simulation approaches, which reduce the simulation time without affecting the accuracy. Several fast simulation methods available within the ATLAS simulation framework (the standard Athena-based simulation program) are discussed here, with the focus on the novel frozen shower library (FS) technique. The results obtained with FS are presented as well.
Sevink, G J A; Schmid, F; Kawakatsu, T; Milano, G
2017-02-22
We have extended an existing hybrid MD-SCF simulation technique that employs a coarsening step to enhance the computational efficiency of evaluating non-bonded particle interactions. This technique is conceptually equivalent to the single chain in mean-field (SCMF) method in polymer physics, in the sense that non-bonded interactions are derived from the non-ideal chemical potential in self-consistent field (SCF) theory, after a particle-to-field projection. In contrast to SCMF, however, MD-SCF evolves particle coordinates by the usual Newton's equation of motion. Since collisions are seriously affected by the softening of non-bonded interactions that originates from their evaluation at the coarser continuum level, we have devised a way to reinsert the effect of collisions on the structural evolution. Merging MD-SCF with multi-particle collision dynamics (MPCD), we mimic particle collisions at the level of computational cells and at the same time properly account for the momentum transfer that is important for a realistic system evolution. The resulting hybrid MD-SCF/MPCD method was validated for a particular coarse-grained model of phospholipids in aqueous solution, against reference full-particle simulations and the original MD-SCF model. We additionally implemented and tested an alternative and more isotropic finite difference gradient. Our results show that efficiency is improved by merging MD-SCF with MPCD, as properly accounting for hydrodynamic interactions considerably speeds up the phase separation dynamics, with negligible additional computational costs compared to efficient MD-SCF. This new method enables realistic simulations of large-scale systems that are needed to investigate the applications of self-assembled structures of lipids in nanotechnologies.
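The MPCD collision step that reinserts momentum transfer can be sketched in 2D. Unit masses and a random ± rotation per cell are simplifying assumptions here; the actual method uses 3D cells with random rotation axes and a random grid shift for Galilean invariance:

```python
import math
import random

def mpcd_collision_2d(vels, cells, alpha=0.5 * math.pi, rng=random.random):
    """One SRD-style collision step: rotate each particle's velocity
    about its cell's centre-of-mass velocity by +/- alpha. This
    conserves momentum and kinetic energy per cell while redistributing
    them among the members, mimicking collisions. `cells` is a list of
    member-particle-index lists, one per collision cell."""
    new = [list(v) for v in vels]
    for members in cells:
        if not members:
            continue
        vcm = [sum(vels[i][c] for i in members) / len(members) for c in (0, 1)]
        sign = 1.0 if rng() < 0.5 else -1.0   # random rotation sense per cell
        ca, sa = math.cos(alpha), sign * math.sin(alpha)
        for i in members:
            dx, dy = vels[i][0] - vcm[0], vels[i][1] - vcm[1]
            new[i][0] = vcm[0] + ca * dx - sa * dy
            new[i][1] = vcm[1] + sa * dx + ca * dy
    return new
```

Because only cell-level quantities enter, the cost per step is linear in the particle number, which is why MPCD adds hydrodynamics at negligible extra cost.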
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine detection, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technology such as GPU rendering, CUDA parallel computing, and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods, which can deal with large-scale and multi-dimensional marine data in different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping, and texture animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized using 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm, and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved using WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a convincing marine environment simulation, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D-rendering engine. It is integrated with the marine visualization methods mentioned above, and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil droplets, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. Such an application can provide valuable reference and decision-making information for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning, and emergency response.
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
NASA Astrophysics Data System (ADS)
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
An elementary singularity-free Rotational Brownian Dynamics algorithm for anisotropic particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede
2015-03-21
Brownian Dynamics is the designated technique to simulate the collective dynamics of colloidal particles suspended in a solution, e.g., the self-assembly of patchy particles. Simulating the rotational dynamics of anisotropic particles by a first-order Langevin equation, however, gives rise to a number of complications, ranging from singularities when using a set of three rotational coordinates to subtle metric and drift corrections. Here, we derive and numerically validate a quaternion-based Rotational Brownian Dynamics algorithm that handles these complications in a simple and elegant way. The extension to hydrodynamic interactions is also discussed.
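The singularity-free idea can be sketched by applying each rotational increment as an axis-angle quaternion rather than updating Euler angles. The sketch below assumes kT = 1 and an isotropic rotational diffusion coefficient; the paper handles anisotropic particles with full diffusion tensors:

```python
import math
import random

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rbd_rotation_step(q, torque, d_r, dt, rng=random.gauss):
    """One rotational Brownian Dynamics step (sketch). The angular
    displacement is deterministic drift D_r*torque*dt plus Gaussian
    noise of variance 2*D_r*dt per axis, applied as an axis-angle
    quaternion increment, so no Euler-angle singularity can occur."""
    dphi = [d_r * torque[i] * dt + rng(0.0, math.sqrt(2.0 * d_r * dt))
            for i in range(3)]
    angle = math.sqrt(sum(c * c for c in dphi))
    if angle < 1e-14:
        return q
    s = math.sin(0.5 * angle) / angle
    dq = (math.cos(0.5 * angle), dphi[0] * s, dphi[1] * s, dphi[2] * s)
    qn = quat_mul(dq, q)
    norm = math.sqrt(sum(c * c for c in qn))  # renormalize against drift
    return tuple(c / norm for c in qn)
```

Passing a deterministic `rng` (e.g. one returning zero) turns the step into a pure drift rotation, which is convenient for testing.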
NASA Astrophysics Data System (ADS)
Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David
2017-10-01
We report on particle-in-cell developments of the study of the Compact Fusion Reactor. Millisecond, two and three-dimensional simulations (cubic meter volume) of confinement and neutral beam heating of the magnetic confinement device requires accurate representation of the complex orbits, near perfect energy conservation, and significant computational power. In order to determine initial plasma fill and neutral beam heating, these simulations include ionization, elastic and charge exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms including a two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, as well as second-order cloud-in-cell techniques. The hybrid algorithm, ignoring electron inertial effects, is two orders of magnitude faster than kinetic but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.
Monte Carlo analysis of tagged neutron beams for cargo container inspection.
Pesente, S; Lunardon, M; Nebbia, G; Viesti, G; Sudac, D; Valkovic, V
2007-12-01
Fast neutrons produced via D+T reactions and tagged by the associated particle technique have been recently proposed to inspect cargo containers. The general characteristics of this technique are studied with Monte Carlo simulations by determining the properties of the tagged neutron beams as a function of the relevant design parameters (energy and size of the deuteron beam, geometry of the charged particle detector). Results from simulations, validated by experiments, show that the broadening of the correlation between the alpha-particle and the neutron, induced by kinematical as well as geometrical (beam and detector size) effects, is important and limits the dimension of the minimum voxel to be inspected. Moreover, the effect of the container filling is explored. The material filling produces a sizeable loss of correlation between alpha-particles and neutrons due to scattering and absorption. Conditions in inspecting cargo containers are discussed.
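The associated particle technique fixes the neutron's direction (back-to-back with the alpha in the D+T centre of mass) and uses timing to fix the depth of the interrogated voxel. A one-dimensional sketch of that localization, with hypothetical helper names and the geometry reduced to a line:

```python
import math

M_N_MEV = 939.565      # neutron rest mass, MeV/c^2
C_CM_NS = 29.9792458   # speed of light, cm/ns

def neutron_speed(e_kin_mev=14.1):
    """Classical speed of a 14.1 MeV D+T neutron, about 5.2 cm/ns
    (the non-relativistic approximation is adequate at this energy)."""
    return C_CM_NS * math.sqrt(2.0 * e_kin_mev / M_N_MEV)

def voxel_depth(t_alpha_gamma_ns, gamma_path_cm, e_kin_mev=14.1):
    """Depth of the interrogated voxel along the tagged-neutron
    direction, from the measured alpha-gamma time difference:
        t = d/v_n + gamma_path/c  ->  d = v_n * (t - gamma_path/c)."""
    v_n = neutron_speed(e_kin_mev)
    return v_n * (t_alpha_gamma_ns - gamma_path_cm / C_CM_NS)
```

The kinematical and geometrical broadening discussed above translates directly into uncertainty on `t_alpha_gamma_ns` and on the neutron direction, which is what limits the minimum voxel size.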
NASA Astrophysics Data System (ADS)
Hassan, M. A.; Mahmoodian, Reza; Hamdi, M.
2014-01-01
A modified smoothed particle hydrodynamics (MSPH) computational technique was utilized to simulate molten particle motion and infiltration speed on multiple analysis scales. The radial velocity and velocity gradient of molten alumina, the iron infiltration in the TiC product, and the solidification rate were predicted by MSPH during simulation of the centrifugal self-propagating high-temperature synthesis (SHS)-assisted coating process. The effects of particle size and temperature on the infiltration and solidification of iron and alumina were mainly investigated. The obtained results were validated against experimental microstructure evidence. The simulation model successfully describes the magnitude of iron and alumina diffusion in a centrifugal thermite SHS and Ti + C hybrid reaction under centrifugal acceleration.
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Yang, Xiao-Feng; Wang, Cai-Feng; Li, Hua-Bing
2009-07-01
The rolling massage is one of the most important manipulations in Chinese massage and is expected to help treat many diseases. Here, the effect of the rolling massage on a pair of particles moving in blood vessels is studied by lattice Boltzmann simulation. The simulated results show that the motion of each particle is considerably modified by the rolling massage and depends on the relative rolling velocity, the rolling depth, and the distance between the particle position and the rolling position. The translational average velocities of both particles increase almost linearly as the rolling velocity increases, and obey the same law. The increment of the average relative angular velocity of the leading particle is smaller than that of the trailing one. These results are helpful for understanding the mechanism of the massage and for further developing rolling techniques.
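To make the lattice Boltzmann machinery behind such simulations concrete, the sketch below performs a single BGK collision step at one D2Q9 lattice node in plain Python. The relaxation time tau and the perturbed populations are arbitrary illustrative values, not parameters from the study; the point is that the collision relaxes the distributions toward a local equilibrium while conserving mass and momentum exactly.

```python
# D2Q9 lattice velocities and weights (standard lattice units, cs^2 = 1/3)
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9]*4 + [1/36]*4

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    usq = ux*ux + uy*uy
    feq = []
    for (cx, cy), w in zip(C, W):
        cu = cx*ux + cy*uy
        feq.append(w * rho * (1 + 3*cu + 4.5*cu*cu - 1.5*usq))
    return feq

def bgk_collide(f, tau=0.8):
    """Relax distributions toward equilibrium; conserves mass and momentum."""
    rho = sum(f)
    ux = sum(fi*cx for fi, (cx, _) in zip(f, C)) / rho
    uy = sum(fi*cy for fi, (_, cy) in zip(f, C)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi - (fi - fqi)/tau for fi, fqi in zip(f, feq)]

f0 = equilibrium(1.0, 0.05, 0.0)
f0[1] += 0.01                      # perturb one population away from equilibrium
f1 = bgk_collide(f0)
```

A full solver would follow each collision with a streaming step that shifts populations to neighboring nodes; only the local collision is shown here.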
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered, governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
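A rough benchmark in the spirit of this evaluation can be sketched with the Python standard library; zlib and lzma stand in here for the Blosc and XZ Utils codecs the paper actually evaluates (FPZIP and ZFP are lossy and have no stdlib analogue). The synthetic float32 "particle positions" and codec settings are illustrative assumptions.

```python
import lzma
import random
import struct
import time
import zlib

random.seed(42)
n = 20000
# Pack float32 particle positions, as a cosmological snapshot might store them
raw = b"".join(struct.pack("<f", random.uniform(0.0, 100.0)) for _ in range(n))

results = {}
for name, compress in [("zlib", lambda d: zlib.compress(d, 6)),
                       ("lzma", lambda d: lzma.compress(d, preset=6))]:
    t0 = time.perf_counter()
    comp = compress(raw)
    dt = time.perf_counter() - t0
    # Ratio > 1 means the codec shrank the data; random mantissas compress poorly
    results[name] = {"ratio": len(raw) / len(comp), "seconds": dt}

# Lossless round-trip check
assert zlib.decompress(zlib.compress(raw)) == raw
```

Measuring both the ratio and the wall-clock time per codec mirrors the paper's shift from compression rate alone to run-time and throughput as first-class criteria.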
NASA Astrophysics Data System (ADS)
Deyglun, Clément; Carasco, Cédric; Pérot, Bertrand
2014-06-01
The detection of Special Nuclear Materials (SNM) by neutron interrogation is extensively studied by Monte Carlo simulation at the Nuclear Measurement Laboratory of CEA Cadarache (French Alternative Energies and Atomic Energy Commission). The active inspection system is based on the Associated Particle Technique (APT). Fissions induced by tagged neutrons (i.e. correlated to an alpha particle in the DT neutron generator) in SNM produce high multiplicity coincidences which are detected with fast plastic scintillators. At least three particles are detected in a short time window following the alpha detection, whereas nonnuclear materials mainly produce single events, or pairs due to (n,2n) and (n,n'γ) reactions. To study the performances of an industrial cargo container inspection system, Monte Carlo simulations are performed with the MCNP-PoliMi transport code, which records for each neutron history the relevant information: reaction types, position and time of interactions, energy deposits, secondary particles, etc. The output files are post-processed with a specific tool developed with ROOT data analysis software. Particles not correlated with an alpha particle (random background), counting statistics, and time-energy resolutions of the data acquisition system are taken into account in the numerical model. Various matrix compositions, suspicious items, SNM shielding and positions inside the container, are simulated to assess the performances and limitations of an industrial system.
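The tagged-neutron coincidence logic described above can be sketched as follows: each alpha detection opens a short time window, and the number of scintillator hits inside that window gives the event multiplicity, with multiplicity of three or more flagging fission-like events. The 50 ns window and all event times below are invented for illustration and are not values from the CEA study.

```python
def classify_events(alpha_times, detections, window=50e-9):
    """Return the particle multiplicity observed after each alpha trigger."""
    detections = sorted(detections)
    multiplicities = []
    for t_alpha in alpha_times:
        hits = [t for t in detections if t_alpha <= t <= t_alpha + window]
        multiplicities.append(len(hits))
    return multiplicities

alphas = [0.0, 1e-6, 2e-6]
hits = [10e-9, 20e-9, 30e-9,        # triple after first alpha -> fission-like
        1e-6 + 15e-9,               # single after second alpha
        5e-7]                       # uncorrelated background, outside all windows
mult = classify_events(alphas, hits)
fission_like = sum(1 for m in mult if m >= 3)
```

A real analysis would also subtract random coincidences and fold in detector time resolution, as the abstract notes; this sketch shows only the windowing and multiplicity count.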
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations. Parallel computing is therefore important for handling such huge computational costs. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used.
Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
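The core idea of slice-grid load balancing can be sketched in a few lines: move the boundaries of spatial slices so that each rank holds a roughly equal share of the particle workload, rather than an equal share of space. For brevity this sketch rebalances directly from sorted particle positions instead of using the paper's Newton-like iteration on measured execution times; the particle distribution is invented.

```python
import bisect

def balance_slices(xs, nranks):
    """Return slice boundaries giving each rank ~equal particle counts."""
    xs = sorted(xs)
    n = len(xs)
    cuts = []
    for r in range(1, nranks):
        k = r * n // nranks          # index of the first particle of rank r
        cuts.append(0.5 * (xs[k - 1] + xs[k]))
    return cuts

def slice_counts(xs, cuts):
    """Count how many particles fall into each slice."""
    counts = [0] * (len(cuts) + 1)
    for x in xs:
        counts[bisect.bisect_right(cuts, x)] += 1
    return counts

# Clustered positions: equal-width slices would be badly imbalanced here
xs = [0.01 * i for i in range(90)] + [5.0 + 0.5 * i for i in range(10)]
cuts = balance_slices(xs, 4)
counts = slice_counts(xs, cuts)
```

In a dynamic simulation this rebalancing would be repeated as particles migrate, with per-rank timing imbalances (not just counts) driving the boundary updates.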
Numerical and experimental approaches to simulate soil clogging in porous media
NASA Astrophysics Data System (ADS)
Kanarska, Yuliya; LLNL Team
2012-11-01
Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is to use adequate granular filters in the transition areas where important hydraulic gradients can appear. In cases of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and dam safety will be ensured. A finite element numerical solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration. The numerical approach was validated through comparison of numerical simulations with the experimental results of base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted flow and pressure decay due to particle clogging. The base soil particle distribution was almost identical to that measured in the laboratory experiment. To gain a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various factors such as the particle size ratio, the amplitude of the hydraulic gradient, particle concentration, and contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. The Department of Homeland Security Science and Technology Directorate provided funding for this research.
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Fan, Li-Juan; Yang, Xiao-Feng; Chen, Yan-Yan
2008-09-01
The rolling massage manipulation is a classic Chinese massage, which is expected to help treat many diseases. Here, the effect of the rolling massage on particle motion in blood vessels is studied by lattice Boltzmann simulation. The simulation results show that the particle's motion depends on the rolling velocity and on the distance between the particle position and the rolling position. The average translational and angular velocities of the particle increase almost linearly as the rolling velocity increases. The result is helpful for understanding the mechanism of the massage and for developing rolling techniques.
The new ATLAS Fast Calorimeter Simulation
NASA Astrophysics Data System (ADS)
Schaarschmidt, J.; ATLAS Collaboration
2017-10-01
Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory requirements compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development and its integration into the ATLAS simulation infrastructure is ongoing.
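The principal component analysis step mentioned above can be illustrated with a toy example: find the leading principal component of correlated "shower" vectors so the response can be stored in a few coefficients. The two-dimensional data and the correlation direction are invented for illustration; real calorimeter parameterizations work in far higher dimensions.

```python
import math
import random

random.seed(0)
# Toy "showers": 2D energy deposits strongly correlated along direction (1, 2)
data = []
for _ in range(500):
    t = random.gauss(0, 1)
    data.append([1.0 * t + random.gauss(0, 0.1),
                 2.0 * t + random.gauss(0, 0.1)])

# Center the data
means = [sum(col) / len(data) for col in zip(*data)]
x = [[v - m for v, m in zip(row, means)] for row in data]

# 2x2 sample covariance matrix
def cov(i, j):
    return sum(r[i] * r[j] for r in x) / (len(x) - 1)
c = [[cov(0, 0), cov(0, 1)], [cov(1, 0), cov(1, 1)]]

# Power iteration for the leading eigenvector (the principal component)
v = [1.0, 0.0]
for _ in range(100):
    w = [c[0][0]*v[0] + c[0][1]*v[1], c[1][0]*v[0] + c[1][1]*v[1]]
    norm = math.hypot(w[0], w[1])
    v = [w[0] / norm, w[1] / norm]
```

Projecting each shower onto this component compresses it to a single coefficient with small reconstruction error, which is the memory-saving idea behind the PCA stage.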
NASA Technical Reports Server (NTRS)
Zamel, James M.; Petach, Michael; Gat, Nahum; Kropp, Jack; Luong, Christina; Wolff, Michael
1993-01-01
This report delineates the Option portion of the Phase A Gas-Grain Simulation Facility study. The conceptual design of a Gas-Grain Simulation Experiment Module (GGSEM) for the Space Shuttle Middeck is discussed. In addition, a laboratory breadboard was developed during this study to develop a key function for the GGSEM and the GGSF, specifically, a solid particle cloud generating device. The breadboard design and test results are discussed and recommendations for further studies are included. The GGSEM is intended to fly on board a low-Earth-orbit (LEO) manned platform. It will be used to perform a subset of the experiments planned for the GGSF for Space Station Freedom, as it can partially accommodate a number of the science experiments. The outcome of the experiments performed will provide an increased understanding of the operational requirements for the GGSF. The GGSEM will also act as a platform to accomplish technology development and proof-of-principle experiments for GGSF hardware, and to verify concepts and designs of hardware for GGSF. The GGSEM will allow assembled subsystems to be tested to verify facility-level operation. The technology development that can be accommodated by the GGSEM includes: GGSF sample generation techniques, GGSF on-line diagnostics techniques, sample collection techniques, performance of various types of sensors for environmental monitoring, and some off-line diagnostics. Advantages and disadvantages of several LEO platforms available for GGSEM applications are identified and discussed. Several of the anticipated GGSF experiments require the deagglomeration and dispensing of dry solid particles into an experiment chamber. During the GGSF Phase A study, various techniques and devices available for the solid particle aerosol generator were reviewed. As a result of this review, solid particle deagglomeration and dispensing were identified as key undeveloped technologies in the GGSF design.
A laboratory breadboard version of a solid particle generation system was developed and characterization tests performed. The breadboard hardware emulates the functions of the GGSF solid particle cloud generator in a ground laboratory environment, but with some modifications, can be used on other platforms.
NASA Technical Reports Server (NTRS)
Ellison, D. C.; Jones, F. C.; Eichler, D.
1981-01-01
A collisionless quasi-parallel shock is simulated by Monte Carlo techniques. Scattering of particles of all velocities, from thermal to high energy, is assumed to occur such that the mean free path is directly proportional to velocity times the mass-to-charge ratio, and inversely proportional to the plasma density. The shock profile and velocity spectra are obtained, showing preferential acceleration of high-A/Z particles relative to protons. The inclusion of the back pressure of the scattering particles on the inflowing plasma produces a smoothing of the shock profile, which implies that the spectra are steeper than for a discontinuous shock.
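The assumed scattering law above translates directly into a Monte Carlo sampling rule: draw exponentially distributed free paths whose mean scales as velocity times A/Z over density. The sketch below, with an arbitrary normalization lam0, shows why higher-A/Z species scatter less frequently, which underlies their preferential acceleration.

```python
import random

random.seed(1)

def mean_free_path(v, a_over_z, density, lam0=1.0):
    """Mean free path ~ v * (A/Z) / n, per the model's scattering assumption."""
    return lam0 * v * a_over_z / density

def sample_path(v, a_over_z, density):
    """Draw an exponentially distributed free path with that mean."""
    lam = mean_free_path(v, a_over_z, density)
    return random.expovariate(1.0 / lam)

# Same speed and density: He-like species (A/Z = 2) travel twice as far
# between scatterings as protons (A/Z = 1), on average.
paths_p  = [sample_path(1.0, 1.0, 1.0) for _ in range(20000)]
paths_he = [sample_path(1.0, 2.0, 1.0) for _ in range(20000)]
mean_p = sum(paths_p) / len(paths_p)
mean_he = sum(paths_he) / len(paths_he)
```

In the full simulation these free-path draws are combined with pitch-angle scattering and the converging shock flow to build up the accelerated spectra.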
Single Charged Particle Identification in Nuclear Emulsion Using Multiple Coulomb Scattering Method
NASA Astrophysics Data System (ADS)
Tint, Khin T.; Endo, Yoko; Hoshino, Kaoru; Ito, Hiroki; Itonaga, Kazunori; Kinbara, Shinji; Kobayashi, Hidetaka; Mishina, Akihiro; Soe, Myint K.; Yoshida, Junya; Nakazawa, Kazuma
Development of a particle identification technique for singly charged particles such as the Ξ- hyperon, proton, and K- and π- mesons is ongoing, based on measuring multiple Coulomb scattering in nuclear emulsion. We generated several thousand tracks of singly charged particles in nuclear emulsion stacks with a GEANT4 simulation and obtained the second differences using the constant sagitta method. We found that discrimination of the Ξ- hyperon from π- mesons is well achieved, although discrimination from K- mesons and protons is somewhat difficult. The second differences of real Ξ- hyperon and π- meson tracks were also confirmed to be consistent with the simulation results.
A novel coupling of noise reduction algorithms for particle flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.
2016-09-15
Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
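The wavelet-thresholding stage of the WAVinPOD idea can be sketched with a one-level Haar transform: decompose a noisy 1D field, soft-threshold the detail coefficients (which are noise-dominated for a smooth signal), and reconstruct. The paper applies thresholding to POD temporal coefficients; here it is applied directly to a signal for brevity, and the threshold value and noise level are illustrative assumptions.

```python
import math
import random

random.seed(3)
s2 = math.sqrt(2.0)

def haar(signal):
    """One level of the orthonormal Haar transform (approx, detail)."""
    a = [(signal[2*i] + signal[2*i+1]) / s2 for i in range(len(signal)//2)]
    d = [(signal[2*i] - signal[2*i+1]) / s2 for i in range(len(signal)//2)]
    return a, d

def inverse_haar(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s2, (ai - di) / s2]
    return out

def soft(x, t):
    """Soft thresholding: shrink toward zero by t, clip at zero."""
    return math.copysign(max(abs(x) - t, 0.0), x)

clean = [math.sin(2 * math.pi * i / 64) for i in range(64)]
noisy = [c + random.gauss(0, 0.2) for c in clean]
a, d = haar(noisy)
d = [soft(di, 0.3) for di in d]        # shrink noise-dominated details
denoised = inverse_haar(a, d)

err_noisy    = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_denoised = sum((n - c) ** 2 for n, c in zip(denoised, clean))
```

Thresholding in an orthonormal basis is what gives the "sparse representation" benefit the abstract describes: smooth structure concentrates in few large coefficients, while noise spreads thinly and is removed.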
Computer modeling of test particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Decker, Robert B.
1988-01-01
This evaluation of the basic techniques and illustrative results of numerical codes for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
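The kind of test-particle orbit integration described above is commonly built on the Boris push, a standard leapfrog-style update for charged particles in electromagnetic fields; whether this particular code used it is not stated, so the sketch below is a generic illustration. Field values, charge-to-mass ratio, and time step are arbitrary. A key property shown here is that with no electric field the Boris rotation conserves particle speed exactly.

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def boris_push(v, e_field, b_field, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick."""
    v_minus = [vi + 0.5 * qm * dt * ei for vi, ei in zip(v, e_field)]
    t = [0.5 * qm * dt * bi for bi in b_field]
    t2 = sum(ti * ti for ti in t)
    s = [2 * ti / (1 + t2) for ti in t]
    v_prime = [vm + c for vm, c in zip(v_minus, cross(v_minus, t))]
    v_plus = [vm + c for vm, c in zip(v_minus, cross(v_prime, s))]
    return [vp + 0.5 * qm * dt * ei for vp, ei in zip(v_plus, e_field)]

# Gyration in B = z-hat with E = 0: the speed should be conserved
v = [1.0, 0.0, 0.0]
for _ in range(1000):
    v = boris_push(v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], qm=1.0, dt=0.1)
speed = math.sqrt(sum(vi * vi for vi in v))
```

A shock-acceleration code would combine such a pusher with prescribed upstream/downstream fields and a scattering model, accumulating energy gain over many shock crossings.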
NASA Technical Reports Server (NTRS)
Sulkanen, Martin E.; Borovsky, Joseph E.
1992-01-01
Relativistic plasma double layers are studied through the solution of the one-dimensional, unmagnetized, steady-state Poisson-Vlasov equations and by means of one-dimensional, unmagnetized, particle-in-cell simulations. The thickness vs. potential-drop scaling law is extended to relativistic potential drops and relativistic plasma temperatures. The transition in the scaling law for 'strong' double layers suggested by the analytical two-beam models of Carlqvist (1982) is confirmed, and causality problems of standard double-layer simulation techniques applied to relativistic plasma systems are discussed.
NASA Astrophysics Data System (ADS)
Wang, Qing; Zhao, Xinyu; Ihme, Matthias
2017-11-01
Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was originally proposed as the closure for the SGS dispersion in a counterflow spray studied numerically using a finite difference method on a structured mesh, with a presumed form of the LES filter. In the present study, this technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES are compared against experiments and direct numerical simulations (DNS).
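The deconvolution idea behind RDM can be illustrated with the classic van Cittert iteration: given a filtered field f = G*u, iterate u_{k+1} = u_k + (f - G*u_k) to approximately invert the filter G. The three-point box filter, iteration count, and test signal below are illustrative choices, not values from the paper.

```python
import math

def box_filter(u):
    """Three-point box filter with periodic boundaries (plays the role of G)."""
    n = len(u)
    return [(u[(i - 1) % n] + u[i] + u[(i + 1) % n]) / 3.0 for i in range(n)]

def van_cittert(filtered, iterations=5):
    """Approximate deconvolution: u <- u + (f - G*u)."""
    u = list(filtered)
    for _ in range(iterations):
        gu = box_filter(u)
        u = [ui + (fi - gi) for ui, fi, gi in zip(u, filtered, gu)]
    return u

truth = [math.sin(2 * math.pi * i / 32) for i in range(32)]
filtered = box_filter(truth)
recovered = van_cittert(filtered)

err_filtered  = sum((f - t) ** 2 for f, t in zip(filtered, truth))
err_recovered = sum((r - t) ** 2 for r, t in zip(recovered, truth))
```

In an SGS dispersion closure, the deconvolved velocity field (with regularization to avoid amplifying unresolved scales) supplies the fluctuating velocity seen by the particles, which a plainly filtered field underestimates.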
Modeling of ion acceleration through drift and diffusion at interplanetary shocks
NASA Technical Reports Server (NTRS)
Decker, R. B.; Vlahos, L.
1986-01-01
A test particle simulation designed to model ion acceleration through drift and diffusion at interplanetary shocks is described. The technique consists of integrating along exact particle orbits in a system where the angle between the shock normal and mean upstream magnetic field, the level of magnetic fluctuations, and the energy of injected particles can assume a range of values. The technique makes it possible to study time-dependent shock acceleration under conditions not amenable to analytical techniques. To illustrate the capability of the numerical model, proton acceleration was considered under conditions appropriate for interplanetary shocks at 1 AU, including large-amplitude transverse magnetic fluctuations derived from power spectra of both ambient and shock-associated MHD waves.
Molecular Dynamics Simulations of Carbon Nanotubes in Water
NASA Technical Reports Server (NTRS)
Walther, J. H.; Jaffe, R.; Halicioglu, T.; Koumoutsakos, P.
2000-01-01
We study the hydrophobic/hydrophilic behavior of carbon nanotubes using molecular dynamics simulations. The energetics of the carbon-water interface are mainly dispersive, but in the present study they are augmented with a carbon quadrupole term acting on the charge sites of the water. The simulations indicate that this contribution is negligible in terms of modifying the structural properties of water at the interface. Simulations of two carbon nanotubes in water display a wetting and drying of the interface between the nanotubes depending on their initial spacing. Thus, initial tube spacings of 7 and 8 Å resulted in a drying of the interface, whereas spacings of 9 Å or more remained wet during the course of the simulation. Finally, we present a novel particle-particle-particle-mesh algorithm for long-range potentials which allows for general (curvilinear) meshes and "black-box" fast solvers by adopting an influence matrix technique.
Freezing Transition Studies Through Constrained Cell Model Simulation
NASA Astrophysics Data System (ADS)
Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.
2014-10-01
In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle in a single cell. This model is a special case of a more general cell model which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
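Single-histogram reweighting, the workhorse technique named above, can be sketched as follows: energies sampled at inverse temperature beta0 are reweighted by exp(-(beta1-beta0)E) to estimate averages at a nearby beta1 without rerunning the simulation. The toy "energy" distribution below (exponential, so the reweighted answer is known exactly) is a deliberate simplification, not the paper's model.

```python
import math
import random

random.seed(7)
beta0, beta1 = 1.0, 1.1

# Toy canonical samples: density ~ exp(-beta0 * E), so <E>_beta0 = 1/beta0
energies = [random.expovariate(beta0) for _ in range(200000)]

# Reweight to beta1:  <E>_beta1 = sum(E * w) / sum(w),  w = exp(-(beta1-beta0)E)
w = [math.exp(-(beta1 - beta0) * e) for e in energies]
e_beta1 = sum(e * wi for e, wi in zip(energies, w)) / sum(w)

# For this toy density the exact reweighted average is 1/beta1
exact = 1.0 / beta1
```

The same machinery underlies the thermodynamic integration along the coexistence line: each simulated state point's histogram is reweighted to bracket the coexistence condition before stepping to the next point.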
Modeling and Simulation of Cardiogenic Embolic Particle Transport to the Brain
NASA Astrophysics Data System (ADS)
Mukherjee, Debanjan; Jani, Neel; Shadden, Shawn C.
2015-11-01
Emboli are aggregates of cells, proteins, or fatty material which travel along arteries distal to the point of their origin and can block blood flow to the brain, causing stroke. This is a prominent mechanism of stroke, accounting for about a third of all cases, with the heart being a major source of these emboli. This work presents our investigations toward developing numerical simulation frameworks for modeling the transport of embolic particles originating from the heart along the major arteries supplying the brain. The simulations are based on combining a discrete particle method with image-based computational fluid dynamics. Simulations of unsteady, pulsatile hemodynamics and embolic particle transport within patient-specific geometries, with physiological boundary conditions, are presented. The analysis focuses on elucidating the distribution of particles, the transport of particles in the head across the major cerebral arteries connected at the Circle of Willis, the role of hemodynamic variables on the particle trajectories, and the effect of considering one-way vs. two-way coupling methods for the particle-fluid momentum exchange. These investigations are aimed at advancing our understanding of embolic stroke using computational fluid dynamics techniques. This research was supported by the American Heart Association grant titled ``Embolic Stroke: Anatomic and Physiologic Insights from Image-Based CFD.''
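One-way coupling, the simpler of the two momentum-exchange options compared above, can be sketched in a few lines: the fluid velocity drives the particle through a Stokes-drag term, but the particle does not act back on the fluid. The uniform carrier velocity, particle relaxation time tau_p, and time step are illustrative values, not quantities from the study.

```python
def step(x, v, u_fluid, tau_p, dt):
    """Explicit Euler update of particle position and velocity (1D)."""
    a = (u_fluid - v) / tau_p          # Stokes drag acceleration
    return x + v * dt, v + a * dt

x, v = 0.0, 0.0                         # particle starts at rest
u_fluid, tau_p, dt = 1.0, 0.1, 0.01
for _ in range(500):                    # 5 time units of physical time
    x, v = step(x, v, u_fluid, tau_p, dt)
```

The particle velocity relaxes exponentially toward the fluid velocity on the timescale tau_p; two-way coupling would additionally subtract the drag force from the fluid momentum equation, which matters as particle loading grows.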
NASA Astrophysics Data System (ADS)
Krimi, Abdelkader; Rezoug, Mehdi; Khelladi, Sofiane; Nogueira, Xesús; Deligant, Michael; Ramírez, Luis
2018-04-01
In this work, a consistent Smoothed Particle Hydrodynamics (SPH) model for simulating interfacial multiphase fluid flows is proposed. A modification to the Continuum Stress Surface formulation (CSS) [1] to enhance stability near the fluid interface is developed in the framework of the SPH method. A non-conservative first-order consistency operator is used to compute the divergence of the surface stress tensor. This formulation retains all the advantages of the one proposed by Adami et al. [2] and, in addition, it can be applied to simulations with more than two fluid phases. Moreover, the generalized wall boundary conditions [3] are modified so as to be well adapted to multiphase fluid flows with different densities and viscosities, allowing the application of the method to wall-bounded multiphase flows. We also present a particle redistribution strategy, an extension of the damping technique presented in [3], to smooth the initial transient phase of gravitational multiphase fluid flow simulations. Several computational tests are investigated to show the accuracy, convergence, and applicability of the proposed SPH interfacial multiphase model.
NASA Astrophysics Data System (ADS)
Mutabaruka, Patrick; Kamrin, Ken
2018-04-01
A numerical method for particle-laden fluids interacting with a deformable solid domain and mobile rigid parts is proposed and implemented in a full engineering system. The fluid domain is modeled with a lattice Boltzmann representation, the particles and rigid parts are modeled with a discrete element representation, and the deformable solid domain is modeled using a Lagrangian mesh. Since each of these methods is separately a mature tool, the main contribution of this work is the development of coupling and model-reduction approaches that efficiently simulate coupled problems of this nature, as arise in various geological and engineering applications. The lattice Boltzmann method incorporates a large eddy simulation technique using the Smagorinsky turbulence model. The discrete element method incorporates spherical and polyhedral particles for stiff contact interactions. A neo-Hookean hyperelastic model is used for the deformable solid. We provide a detailed description of how to couple the three solvers within a unified algorithm. The technique we propose for rubber modeling/coupling exploits a simplification that avoids solving a finite-element problem at each time step. We also developed a technique to reduce the domain size of the full system by replacing certain zones with quasi-analytic solutions, which act as effective boundary conditions for the lattice Boltzmann method. The major ingredients of the routine are separately validated. To demonstrate the coupled method in full, we simulate slurry flows in two kinds of piston valve geometries. The dynamics of the valve and slurry are studied and reported over a large range of input parameters.
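The Smagorinsky model mentioned above computes an eddy viscosity nu_t = (Cs*dx)^2 |S| from the resolved strain rate. The sketch below evaluates it on a tiny 2D grid with central differences; the Smagorinsky constant Cs and grid spacing are typical illustrative values, not parameters from the paper. For a pure shear flow u = y, |S| = 1, so nu_t reduces to (Cs*dx)^2 at every interior point.

```python
import math

def smagorinsky_nu_t(u, v, dx, cs=0.16):
    """Eddy viscosity (Cs*dx)^2 * |S| at interior points, |S| = sqrt(2 S:S)."""
    n = len(u)
    nu = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            dudx = (u[i + 1][j] - u[i - 1][j]) / (2 * dx)
            dudy = (u[i][j + 1] - u[i][j - 1]) / (2 * dx)
            dvdx = (v[i + 1][j] - v[i - 1][j]) / (2 * dx)
            dvdy = (v[i][j + 1] - v[i][j - 1]) / (2 * dx)
            s11, s22 = dudx, dvdy
            s12 = 0.5 * (dudy + dvdx)
            s_mag = math.sqrt(2 * (s11**2 + s22**2 + 2 * s12**2))
            nu[i][j] = (cs * dx) ** 2 * s_mag
    return nu

n, dx = 8, 0.1
u = [[j * dx for j in range(n)] for i in range(n)]   # simple shear u = y
v = [[0.0] * n for _ in range(n)]
nu = smagorinsky_nu_t(u, v, dx)
```

In a lattice Boltzmann LES this eddy viscosity is typically folded into a locally adjusted relaxation time rather than added as an explicit diffusion term.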
Automated detection and analysis of particle beams in laser-plasma accelerator simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela Mayumi; Geddes, C.G.; Cormier-Michel, E.
Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the processes involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Such accelerators offer high accelerating gradients, potentially reducing the size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread in momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense X-ray to terahertz radiation sources, and are used in many applications including medical radiotherapy and imaging. To extract information from a simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the adequate parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive data sets over many time steps using several plots, a routine that is infeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1 GB and 100 GB in size, but simulations in 3D are of the order of TBs. The increase in the number of datasets and dataset sizes leads to a need for automatic routines to recognize particle patterns as particle bunches (beams of electrons) for subsequent analysis.
Because of the growth in dataset size, the application of machine learning techniques to scientific data mining is increasingly considered. In plasma simulations, Bagherjeiran et al. presented a comprehensive report on applying graph-based techniques for orbit classification. They used the KAM classifier to label points and components in single and multiple orbits. Love et al. conducted an image space analysis of coherent structures in plasma simulations, using a number of segmentation and region-growing techniques to isolate regions of interest in orbit plots. Both approaches analyzed particle accelerator data, targeting the system dynamics in terms of particle orbits. However, they did not address particle dynamics as a function of time, nor did they inspect the behavior of bunches of particles. Ruebel et al. addressed the visual analysis of massive laser wakefield acceleration (LWFA) simulation data using interactive procedures to query the data. Sophisticated visualization tools were provided to inspect the data manually. Ruebel et al. integrated these tools into the visualization and analysis system VisIt, in addition to utilizing efficient data management based on HDF5, H5Part, and the index/query tool FastBit. In later work, Ruebel et al. proposed automatic beam path analysis using a suite of methods to classify particles in simulation data and to analyze their temporal evolution. To enable researchers to accurately define particle beams, the method computes a set of measures based on the path of particles relative to the distance of the particles to a beam. To achieve good performance, this framework uses an analysis pipeline designed to quickly reduce the amount of data that needs to be considered in the actual path distance computation. As part of this process, region-growing methods are utilized to detect particle bunches at single time steps.
Efficient data reduction is essential to enable automated analysis of large data sets as described in the next section, where data reduction methods are tailored to the particular requirements of our clustering analysis. Previously, we have described the application of a set of algorithms to automate the data analysis and classification of particle beams in LWFA simulation data, identifying locations with a high density of high-energy particles. These algorithms detected high-density locations (nodes) in each time step, i.e. maximum points of the particle distribution over a single spatial variable. Each node was correlated to a node in previous or later time steps by linking these nodes according to a pruned minimum spanning tree (PMST). We call the PMST representation a 'lifetime diagram': a graphical tool showing the temporal evolution of dense particle groups in the longitudinal direction across the time series. Electron bunch compactness was described by another processing step, designed to partition each time step, using fuzzy clustering, into a fixed number of clusters.
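The node-detection-and-linking step described above can be sketched in a few lines. The peak finder, bin counts, and jump threshold below are illustrative assumptions, not the authors' implementation; a true PMST would build a full spanning tree before pruning, whereas this sketch uses greedy nearest-neighbour linking with a pruned maximum jump length for brevity.

```python
import numpy as np

def density_nodes(x, bins=64, lo=0.0, hi=1.0):
    """Locations of local maxima of a 1D particle histogram (candidate bunches)."""
    h, edges = np.histogram(x, bins=bins, range=(lo, hi))
    c = 0.5 * (edges[:-1] + edges[1:])
    return np.array([c[i] for i in range(1, bins - 1)
                     if h[i] >= h[i - 1] and h[i] > h[i + 1]])

def link_nodes(nodes_per_step, max_jump=0.08):
    """Link each node to the nearest node of the previous time step; edges
    longer than max_jump are pruned, approximating the PMST lifetime diagram."""
    edges = []
    for t in range(1, len(nodes_per_step)):
        prev, cur = nodes_per_step[t - 1], nodes_per_step[t]
        for j, xc in enumerate(cur):
            if prev.size == 0:
                continue
            i = int(np.argmin(np.abs(prev - xc)))
            if abs(prev[i] - xc) <= max_jump:
                edges.append((t - 1, i, t, j))
    return edges

# Synthetic test: one Gaussian bunch drifting in x over 10 time steps
rng = np.random.default_rng(1)
steps = [rng.normal(0.2 + 0.03 * t, 0.01, 2000) for t in range(10)]
nodes = [density_nodes(s) for s in steps]
edges = link_nodes(nodes)
```

On this synthetic drifting bunch, the linked edges trace a single chain through the time series, which is exactly the kind of feature a lifetime diagram visualizes.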
NASA Astrophysics Data System (ADS)
Hartman, John; Kirby, Brian
2017-03-01
Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
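The bias that dedrifting introduces can be seen in a small synthetic experiment. The sketch below (all parameters invented for illustration) subtracts the ensemble-mean displacement at each step, which removes the drift but also 1/N of each particle's own Brownian motion, and then applies the leading-order N/(N-1) correction. The paper's decorrelation analysis is more general; this only demonstrates the core variance bias.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, D, dt = 200, 500, 1.0, 1.0        # particles, steps, diffusivity, time step
true_step_msd = 2 * D * dt              # 1D Brownian motion: <dx^2> = 2*D*dt

# Per-step Brownian displacements plus a common (non-Brownian) drift
steps = rng.normal(0.0, np.sqrt(true_step_msd), size=(T, N))
drift = 0.5 * np.sin(np.arange(T) / 50.0)[:, None]   # arbitrary shared flow
observed = steps + drift

# Dedrifting: subtract the ensemble-mean displacement at each time step.
# This removes the drift but also removes 1/N of each particle's own motion,
# biasing the per-step MSD low by a factor (1 - 1/N).
dedrifted = observed - observed.mean(axis=1, keepdims=True)
msd_dedrifted = (dedrifted**2).mean()

# Leading-order correction in the spirit of decorrelation analysis
msd_corrected = msd_dedrifted * N / (N - 1)
```

For dilute samples (small N) the uncorrected bias is substantial, which is consistent with the dynamic-range limitation described in the abstract.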
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times on the fly by a statistical learning technique, multi-level Gaussian process regression; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations.
More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
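A minimal two-level version of the fill-in idea can be sketched as follows, assuming a cheap but biased "coarse" field available everywhere and "fine" data lost on part of the domain (as after a processor failure). The RBF kernel, length scale, and additive-discrepancy model are illustrative assumptions, not the multi-level formulation of the paper.

```python
import numpy as np

def rbf(a, b, ell=0.3, sig=1.0):
    """Squared-exponential covariance between two sets of 1D locations."""
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell)**2)

# Model: fine = coarse + GP(discrepancy), trained where fine data survived
x_all = np.linspace(0, 1, 50)
fine = np.sin(2 * np.pi * x_all)            # "truth" (fine solver output)
coarse = 0.8 * np.sin(2 * np.pi * x_all)    # biased, cheap auxiliary field

ok = x_all < 0.6                            # fine data survived only here
x_tr, y_tr = x_all[ok], (fine - coarse)[ok] # observed discrepancy

K = rbf(x_tr, x_tr) + 1e-6 * np.eye(x_tr.size)   # jitter for stability
Ks = rbf(x_all, x_tr)
mean_disc = Ks @ np.linalg.solve(K, y_tr)        # GP posterior mean
filled = coarse + mean_disc                      # reconstructed fine field
```

The GP reproduces the fine field where data exist and falls back smoothly toward the coarse field in the gap, which is the qualitative behavior the multi-level scheme exploits.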
NASA Astrophysics Data System (ADS)
Yoon, J. S.; Culligan, P. J.; Germaine, J. T.
2003-12-01
Subsurface colloid behavior has recently drawn attention because colloids are suspected of enhancing contaminant transport in groundwater systems. To better understand the processes by which colloids move through the subsurface, and in particular the vadose zone, a new technique that enables real-time visualization of colloid particles as they move through a porous medium has been developed. This visualization technique involves the use of laser-induced fluorescent particles and digital image processing to directly observe particles moving through a porous medium consisting of soda-lime glass beads and water in a transparent experimental box of 10.0 cm × 27.9 cm × 2.38 cm. Colloid particles are simulated using commercially available micron-sized particles that fluoresce under argon-ion laser light. The fluorescent light given off from the particles is captured through a camera filter, which lets through only the emitted wavelength of the colloid particles. The intensity of the emitted light is proportional to the colloid particle concentration. The images of colloid movement are captured by a MagnaFire digital camera, a cooled CCD camera produced by Optronics. This camera enables real-time capture of images to a computer, thereby allowing the images to be processed immediately. The images taken by the camera are analyzed by the ImagePro software from Media Cybernetics, which contains a range of counting, sizing, measuring, and image enhancement tools for image processing. Laboratory experiments using the new technique have demonstrated the existence of both irreversible and reversible sites for colloid entrapment during uniform saturated flow in a homogeneous porous medium. These tests have also shown a dependence of colloid entrapment on velocity. Models for colloid transport currently available in the literature have proven to be inadequate predictors of the experimental observations, despite the simplicity of the system studied.
To further extend the work, the visualization technique has been developed for use on the geo-centrifuge. The advantage of the geo-centrifuge for investigating subsurface colloid behavior is its ability to simulate unsaturated transport mechanisms under well-simulated field moisture profiles and in shortened periods of time. A series of tests to investigate colloid transport during uniform saturated flow is being used to examine basic scaling laws for colloid transport under enhanced gravity. The paper will describe the new visualization technique, its use in geo-centrifuge testing, and observations on scaling relationships for colloid transport during geo-centrifuge experiments. Although the visualization technique has been developed for investigating subsurface colloid behavior, it also has application in other areas, including the study of microbial behavior in the subsurface.
NASA Astrophysics Data System (ADS)
Crivoi, A.; Zhong, X.; Duan, Fei
2015-09-01
The coffee-ring effect for particle deposition near the three-phase line after drying of a pinned sessile colloidal droplet has been suppressed or attenuated in many recent studies. However, there have been few attempts to simulate the mitigation of the effect in the presence of strong particle-particle attraction forces. We develop a three-dimensional stochastic model to investigate the drying process of a pinned colloidal sessile droplet by considering the sticking between particles, which was observed in experiments. The Monte Carlo simulation results show that by solely promoting the particle-particle attraction in the model, the final deposit shape is transformed from a coffee ring to a uniform film. This phenomenon is modeled using the colloidal aggregation technique and explained by the "Tetris principle": unevenly shaped or branched particle clusters rapidly build up a sparse structure spanning the entire domain during drying. The influence of the controlled parameters is analyzed as well. The simulation results are consistent with the drying patterns of nanofluid droplets obtained through surfactant control in the experiments.
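A toy version of stochastic cluster growth by particle-particle sticking can illustrate how attraction builds the sparse, branched structures invoked above. This is not the authors' three-dimensional droplet model; the lattice size, walker count, and sticking probability are invented for the sketch.

```python
import numpy as np

def grow_cluster(n_particles=250, size=41, p_stick=1.0, seed=2):
    """Random walkers on a periodic lattice freeze (with probability p_stick)
    when they touch the growing cluster, producing branched aggregates."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((size, size), bool)
    occ[size // 2, size // 2] = True                 # seed particle
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    stuck = 1
    for _ in range(n_particles):
        x, y = rng.integers(0, size, 2)
        for _step in range(20000):                   # give up after many steps
            if occ[x, y]:                            # spawned on cluster: re-seed
                x, y = rng.integers(0, size, 2)
                continue
            touching = any(occ[(x + dx) % size, (y + dy) % size]
                           for dx, dy in moves)
            if touching and rng.random() < p_stick:
                occ[x, y] = True
                stuck += 1
                break
            dx, dy = moves[rng.integers(4)]
            x, y = (x + dx) % size, (y + dy) % size
    return occ, stuck

occ, stuck = grow_cluster()
```

With p_stick near 1 the aggregate is sparse and branched (the "Tetris" regime); lowering p_stick lets walkers penetrate deeper before freezing, giving denser deposits.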
Particle-in-cell numerical simulations of a cylindrical Hall thruster with permanent magnets
NASA Astrophysics Data System (ADS)
Miranda, Rodrigo A.; Martins, Alexandre A.; Ferreira, José L.
2017-10-01
The cylindrical Hall thruster (CHT) is a propulsion device that offers high propellant utilization and performance at smaller dimensions and lower power levels than traditional Hall thrusters. In this paper we present first results of a numerical model of a CHT. This model solves particle and field dynamics self-consistently using a particle-in-cell approach. We describe a number of techniques applied to reduce the execution time of the numerical simulations. The specific impulse and thrust computed from our simulations are in agreement with laboratory experiments. This simplified model will allow for a detailed analysis of different thruster operational parameters and help identify an optimal configuration to be implemented at the Plasma Physics Laboratory at the University of Brasília.
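The particle-in-cell cycle mentioned above (deposit charge, solve the field, gather, push) can be illustrated with a minimal 1D electrostatic sketch. This is a generic PIC loop with invented parameters, not the CHT model of the paper.

```python
import numpy as np

ng, L, npart, dt = 64, 2 * np.pi, 10000, 0.05
dx = L / ng
rng = np.random.default_rng(3)
x = rng.uniform(0, L, npart)
v = rng.normal(0.0, 0.1, npart) + 0.2 * np.sin(x)  # seed a Langmuir perturbation
qm = 1.0                                           # charge-to-mass ratio
w = L / npart                                      # weight: mean number density 1
k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)           # spectral wavenumbers

for _ in range(100):
    # 1) deposit charge to grid (nearest-grid-point, for brevity)
    idx = np.rint(x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * w / dx - 1.0  # neutralizing background
    # 2) field solve: phi'' = -rho (periodic, via FFT), then E = -dphi/dx
    rho_h = np.fft.fft(rho)
    phi_h = np.zeros_like(rho_h)
    phi_h[1:] = rho_h[1:] / k[1:]**2
    E = np.real(np.fft.ifft(-1j * k * phi_h))
    # 3) gather field at particles and 4) push (kick, then drift)
    v += qm * E[idx] * dt
    x = (x + v * dt) % L
```

Production PIC codes differ mainly in geometry, deposition order, and boundary conditions; the deposit-solve-gather-push structure is the same.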
Del Bello, Elisabetta; Taddeucci, Jacopo; de’ Michieli Vitturi, Mattia; Scarlato, Piergiorgio; Andronico, Daniele; Scollo, Simona; Kueppers, Ulrich; Ricci, Tullio
2017-01-01
Most of the current ash transport and dispersion models neglect particle-fluid (two-way) and particle-fluid plus particle-particle (four-way) reciprocal interactions during particle fallout from volcanic plumes. These interactions, a function of particle concentration in the plume, could play an important role, explaining, for example, discrepancies between observed and modelled ash deposits. Aiming at a more accurate prediction of volcanic ash dispersal and sedimentation, the settling of ash particles at particle volume fractions (ϕp) ranging 10−7-10−3 was performed in laboratory experiments and reproduced by numerical simulations that take into account first the two-way and then the four-way coupling. Results show that the velocity of particles settling together can exceed the velocity of particles settling individually by up to 4 times for ϕp ~ 10−3. Comparisons between experimental and simulation results reveal that, during the sedimentation process, the settling velocity is largely enhanced by particle-fluid interactions but partly hindered by particle-particle interactions with increasing ϕp. Combining the experimental and numerical results, we provide an empirical model allowing correction of the settling velocity of particles of any size, density, and shape, as a function of ϕp. These corrections will impact volcanic plume modelling results as well as remote sensing retrieval techniques for plume parameters. PMID:28045056
Roberts, Scott A.; Mendoza, Hector; Brunini, Victor E.; ...
2016-10-20
Battery performance, while observed at the macroscale, is primarily governed by the bicontinuous mesoscale network of the active particles and a polymeric conductive binder in its electrodes. Manufacturing processes affect this mesostructure, and therefore battery performance, in ways that are not always clear outside of empirical relationships. Directly studying the role of the mesostructure is difficult due to the small particle sizes (a few microns) and large mesoscale structures. Mesoscale simulation, however, is an emerging technique that allows the investigation of how particle-scale phenomena affect electrode behavior. In this manuscript, we discuss our computational approach for modeling electrochemical, mechanical, and thermal phenomena of lithium-ion batteries at the mesoscale. Here, we review our recent and ongoing simulation investigations and discuss a path forward for additional simulation insights.
NASA Astrophysics Data System (ADS)
Cui, Z.; Welty, C.; Maxwell, R. M.
2011-12-01
Lagrangian particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code, SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator-split into two steps: (1) the physical movement of the particles, including attachment/detachment to solid surfaces, is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions, including biodegradation, are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1 the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain under variably saturated conditions. A potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries.
Implementation details, verification results of the enhanced code with one-dimensional analytical solutions and other existing numerical models will be presented in addition to a discussion of implementation challenges.
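The operator-split transport/reaction cycle described above can be sketched as follows. The grid, Monod parameters, and mass-scaling mapping between particles and grid are illustrative assumptions (not SLIM-FAST), and a forward-Euler reaction step stands in for the ODE solver.

```python
import numpy as np

rng = np.random.default_rng(4)
L, ng = 100.0, 50
cell = L / ng
v, D, dt = 1.0, 0.5, 0.5           # advection, dispersion, time step
mumax, Ks = 0.4, 2.0               # Monod parameters: dC/dt = -mumax*C/(Ks+C)

npart = 5000
x = rng.uniform(0, 10, npart)      # initial plume near the inlet
mass = np.full(npart, 1.0)         # substrate mass carried by each particle

def to_grid(x, mass):
    """Step 2a: particles -> grid concentrations (mass per unit length)."""
    idx = np.clip((x / cell).astype(int), 0, ng - 1)
    return idx, np.bincount(idx, weights=mass, minlength=ng) / cell

for _ in range(40):
    # Step 1: Lagrangian random walk (advection + dispersion)
    x = np.clip(x + v * dt + rng.normal(0, np.sqrt(2 * D * dt), npart), 0, L)
    # Step 2: react on the grid, then map concentrations back to particles
    idx, c = to_grid(x, mass)
    c_new = np.maximum(c - mumax * c / (Ks + c) * dt, 0.0)
    scale = np.where(c > 0, c_new / np.maximum(c, 1e-30), 1.0)
    mass *= scale[idx]             # each particle inherits its cell's change
```

The particle-to-grid-and-back mapping is what lets a Lagrangian transport step coexist with a grid-based nonlinear reaction step.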
Simulating Sand Behavior through Terrain Subdivision and Particle Refinement
NASA Astrophysics Data System (ADS)
Clothier, M.
2013-12-01
Advances in computer graphics, GPUs, and parallel processing hardware have provided researchers with new methods to visualize scientific data. In fact, these advances have spurred new research opportunities between computer graphics and other disciplines, such as Earth sciences. Through collaboration, Earth and planetary scientists have benefited by using these advances in hardware technology to process large amounts of data for visualization and analysis. At Oregon State University, we are collaborating with the Oregon Space Grant and IGERT Ecosystem Informatics programs to investigate techniques for simulating the behavior of sand. In addition, we have also been collaborating with the Jet Propulsion Laboratory's DARTS Lab to exchange ideas on our research. The DARTS Lab specializes in the simulation of planetary vehicles, such as the Mars rovers. One aspect of their work is testing these vehicles in a virtual "sand box" to test their performance in different environments. Our research builds upon this idea to create a sand simulation framework to allow for more complex and diverse environments. As a basis for our framework, we have focused on planetary environments, such as the harsh, sandy regions on Mars. To evaluate our framework, we have used simulated planetary vehicles, such as a rover, to gain insight into the performance and interaction between the surface sand and the vehicle. Unfortunately, simulating the vast number of individual sand particles and their interaction with each other has been a computationally complex problem in the past. However, through the use of high-performance computing, we have developed a technique to subdivide physically active terrain regions across a large landscape. To achieve this, we only subdivide terrain regions where sand particles are actively participating with another object or force, such as a rover wheel. 
This is similar to a Level of Detail (LOD) technique, except that the density of subdivisions is determined by proximity to the object or force interacting with the sand. To illustrate, as a rover wheel moves forward and approaches a particular sand region, that region will continue to subdivide until individual sand particles are represented. Conversely, if the rover wheel moves away, previously subdivided sand regions will recombine. Thus, individual sand particles are available when an interacting force is present but stored away when there is not. As such, this technique allows many particles to be represented without the full computational cost. We have also generalized these subdivision regions in our sand framework into any volumetric area suitable for use in the simulation. This allows for more compact subdivision regions and has fine-tuned our framework so that more emphasis can be placed on regions of actively participating sand. We feel that this increases the framework's usefulness across scientific applications and can provide other research opportunities within the Earth and planetary sciences. Through continued collaboration with our academic partners, we continue to build upon our sand simulation framework and look for other opportunities to utilize this research.
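The proximity-driven subdivision can be illustrated with a small quadtree sketch (2D for brevity; the refinement criterion, depth limit, and cell sizes are assumptions for illustration, not the authors' implementation).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cell:
    x: float          # lower-left corner
    y: float
    size: float
    depth: int
    children: List["Cell"] = field(default_factory=list)

def refine(cell: Cell, px: float, py: float, max_depth: int) -> None:
    """Subdivide terrain cells only near the interacting point (px, py),
    so resolution is finest at the contact and coarsens with distance."""
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    dist = ((cx - px)**2 + (cy - py)**2) ** 0.5
    if cell.depth < max_depth and dist < cell.size * 1.5:
        h = cell.size / 2
        for ox in (0.0, h):
            for oy in (0.0, h):
                child = Cell(cell.x + ox, cell.y + oy, h, cell.depth + 1)
                cell.children.append(child)
                refine(child, px, py, max_depth)

def leaves(cell: Cell):
    if not cell.children:
        yield cell
    else:
        for ch in cell.children:
            yield from leaves(ch)

root = Cell(0.0, 0.0, 64.0, 0)
refine(root, 10.0, 10.0, max_depth=6)   # e.g. wheel contact at (10, 10)
lv = list(leaves(root))
```

Recombination on retreat corresponds to collapsing children back into their parent when the interaction point moves out of range.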
Monte Carlo simulations of particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.
1994-01-01
The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle θ_B1 to the shock normal. Spectral results from test-particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.
NASA Astrophysics Data System (ADS)
Shaw, Leah B.; Sethna, James P.; Lee, Kelvin H.
2004-08-01
The process of protein synthesis in biological systems resembles a one-dimensional driven lattice gas in which the particles (ribosomes) have spatial extent, covering more than one lattice site. Realistic, nonuniform gene sequences lead to quenched disorder in the particle hopping rates. We study the totally asymmetric exclusion process with large particles and quenched disorder via several mean-field approaches and compare the mean-field results with Monte Carlo simulations. Mean-field equations obtained from the literature are found to be reasonably effective in describing this system. A numerical technique is developed for computing the particle current rapidly. The mean-field approach is extended to include two-point correlations between adjacent sites. The two-point results are found to match Monte Carlo simulations more closely.
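A bare-bones Monte Carlo version of the exclusion process with extended particles and quenched hopping rates might look like the following. The lattice length, particle size ell, rate distribution, and entry/exit rates are invented, and the simplified exit rule removes a whole particle in one event rather than letting it slide off site by site.

```python
import numpy as np

rng = np.random.default_rng(5)
Lsites, ell = 200, 3                 # lattice length; particle covers ell sites
rates = rng.uniform(0.5, 1.0, Lsites)  # quenched disorder from the "sequence"
alpha, beta = 0.3, 0.3               # entry and exit rates

occ = np.zeros(Lsites, bool)         # occ[i]: site i covered by some particle
heads = []                           # leading-site index of each particle
current = 0                          # completed traversals (particle current)

for _ in range(200000):
    k = rng.integers(-1, len(heads))           # -1 -> attempt injection
    if k == -1:
        if not occ[:ell].any() and rng.random() < alpha:
            occ[:ell] = True
            heads.append(ell - 1)
    else:
        h = heads[k]
        if h == Lsites - 1:                    # simplified exit at the last site
            if rng.random() < beta:
                occ[h - ell + 1:h + 1] = False
                heads.pop(k)
                current += 1
        elif not occ[h + 1] and rng.random() < rates[h + 1]:
            occ[h + 1] = True                  # advance head by one site
            occ[h - ell + 1] = False           # free the trailing site
            heads[k] = h + 1
```

Averaging `current` over long runs gives the steady-state particle current, the quantity the mean-field approaches above are built to predict.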
Lagrangian transported MDF methods for compressible high speed flows
NASA Astrophysics Data System (ADS)
Gerlinger, Peter
2017-06-01
This paper deals with the application of thermochemical Lagrangian MDF (mass density function) methods for compressible sub- and supersonic RANS (Reynolds-averaged Navier-Stokes) simulations. A new approach to treat molecular transport is presented. This technique on the one hand ensures numerical stability of the particle solver in laminar regions of the flow field (e.g. in the viscous sublayer) and on the other hand takes differential diffusion into account. A detailed analysis shows that the new method correctly predicts first- and second-order moments on the basis of conventional modeling approaches. Moreover, a number of challenges for MDF particle methods in high-speed flows are discussed, e.g. high cell-aspect-ratio grids close to solid walls, wall heat transfer, shock resolution, and problems from statistical noise which may cause artificial shock systems in supersonic flows. A Mach 2 supersonic mixing channel with multiple shock reflections and a model rocket combustor simulation demonstrate the applicability of this technique to practical problems. Both test cases are simulated successfully for the first time with a hybrid finite-volume (FV)/Lagrangian particle solver (PS).
Numerical and analytical simulation of the production process of ZrO2 hollow particles
NASA Astrophysics Data System (ADS)
Safaei, Hadi; Emami, Mohsen Davazdah
2017-12-01
In this paper, the production process of hollow particles from agglomerated particles is addressed analytically and numerically. The important parameters affecting this process, in particular the initial porosity level of the particles and the plasma gun type, are investigated. The analytical model adopts a combination of quasi-steady thermal equilibrium and mechanical balance, and examines the possibility of a solid core existing in agglomerated particles. In this model, a range of particle diameters (50 μm ≤ D_p0 ≤ 160 μm) and various initial porosities (0.2 ≤ p ≤ 0.7) are considered. The numerical model employs the VOF technique for two-phase compressible flows. The production process of hollow particles from the agglomerated particles is simulated, considering an initial diameter of D_p0 = 60 μm and initial porosities of p = 0.3, p = 0.5, and p = 0.7. Simulation results of the analytical model indicate that the solid core diameter is independent of the initial porosity, whereas the thickness of the particle shell strongly depends on the initial porosity. In both models, a hollow particle hardly develops at small initial porosity values (p < 0.3), while the particle disintegrates at high initial porosity values (p > 0.6).
Kodama, Wataru; Nakasako, Masayoshi
2011-08-01
Coherent x-ray diffraction microscopy is a novel technique for the structural analysis of particles that are difficult to crystallize, such as the biological particles composing living cells. As water is indispensable for maintaining particles in functional structures, sufficient hydration of targeted particles is required during sample preparation for diffraction microscopy experiments. However, the water enveloping particles also contributes significantly to the diffraction patterns and reduces the electron-density contrast of the sample particles. In this study, we propose a protocol for the structural analysis of particles in water by applying a three-dimensional reconstruction method in real space to the projection images phase-retrieved from diffraction patterns, together with a newly developed density modification technique. We examined the feasibility of the protocol through three simulations involving a protein molecule in a vacuum, enveloped in a water droplet, and enveloped in a cube-shaped volume of water. The simulations were carried out for the diffraction patterns in the reciprocal planes normal to the incident x-ray beam. This assumption and the simulation conditions correspond to experiments using x-ray wavelengths shorter than 0.03 Å. The analyses demonstrated that our protocol provides an interpretable electron-density map. Based on the results, we discuss the advantages and limitations of the proposed protocol and its practical application to experimental data. In particular, we examined the influence of Poisson noise in the diffraction patterns on the three-dimensional electron density reconstructed by the proposed protocol.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges in achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100,000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
NASA Astrophysics Data System (ADS)
Shaqfeh, Eric S. G.; Bernate, Jorge A.; Yang, Mengfei
2016-12-01
Within the past decade, the separation of particles via continuous flow through microfluidic devices has been developed largely through an Edisonian approach, whereby devices are designed based on observation and intuition. This is particularly true in the development of vector chromatography at vanishingly small Reynolds number for non-Brownian particles. Note that this latter phenomenon has its origins in the irreversible forces at work in the device, since Stokes flow reversibility typically prohibits their function otherwise. We present a numerical simulation of the vector separation of non-Brownian particles of different sizes and deformabilities in the Stokes flow through channels whose lower surface is composed of slanted cavities. The simulations are designed to understand the physical principles behind the separation as well as to provide design criteria for devices for separating particles in a given size and flexibility range. The numerical simulations are Stokes-flow boundary element simulations using techniques defined elsewhere in the literature, but including a close-range repulsive force between the particles and the slanted cavities. We demonstrate that over a range of repulsive force that is comparable to the roughness in the experimental devices, the separation data (particularly in particle size) are predicted quantitatively and are a very weak function of the range of the force. We then vary the geometric parameters of the simulated devices to demonstrate the sensitivity of the separation efficiency to these parameters, thus making design predictions as to which devices are appropriate for separating particles in different size, shape, and deformability ranges.
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.
2017-04-01
A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm 'Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.
Simulation and phases of macroscopic particles in vortex flow
NASA Astrophysics Data System (ADS)
Rice, Heath Eric
Granular materials are an interesting class of media in that they exhibit many disparate characteristics depending on conditions: the same set of particles may behave like a solid, a liquid, a gas, something in between, or something entirely unique. Practically speaking, granular materials are used in many aspects of manufacturing, so any new information gleaned about them may help refine these techniques. For example, learning of a possible instability may help avoid it in practical application, saving machinery, money, and even personnel. To that end, we simulate a granular medium under tornado-like vortex airflow, varying particle parameters and observing the behaviors that arise. The simulation itself was written in Python from the ground up, starting from the basic simulation equations in Poschel [1]. From there, particle spin, viscous friction, and vertical and tangential airflow were added. The simulations were then run in batches on a local cluster computer, varying the parameters of radius, flow force, density, and friction. Phase plots were created after observing the behaviors of the simulations, and the regions and borders were analyzed. Most of the results were as expected: smaller particles behaved more like a gas, larger particles behaved more like a solid, and most intermediate simulations behaved like a liquid. A small subset formed an interesting crossover region in the center, and under moderate forces began to throw a few particles at a time upward from the center in a fountain-like effect. Most borders between regions appeared to agree with analysis, following the critical rotational velocity at which the parabolic surface of the material dips to the bottom of the mass of particles. The fountain effects seemed to occur at speeds along and slightly faster than this division. [1] Please see thesis for references.
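The thesis code itself is not reproduced in the abstract, but granular simulations of the Poschel lineage typically rest on a linear spring-dashpot normal contact force between overlapping particles. The sketch below is an illustrative reconstruction in Python; the stiffness `k` and damping `gamma` values are arbitrary placeholders, not parameters from the thesis.

```python
def normal_contact_force(overlap, rel_vel_n, k=1.0e4, gamma=50.0):
    """Linear spring-dashpot normal force between two granular particles.

    overlap:   geometric overlap of the two particles (m); <= 0 means no contact
    rel_vel_n: normal approach speed (m/s)
    k, gamma:  illustrative stiffness (N/m) and damping (N s/m) constants
    """
    if overlap <= 0.0:
        return 0.0  # particles not touching
    # Elastic repulsion minus viscous dissipation, clipped so the
    # contact never pulls the particles together (no cohesion).
    return max(0.0, k * overlap - gamma * rel_vel_n)
```

In a full simulation this force (plus tangential friction and the vortex airflow drag described above) would be summed per particle and fed to a time integrator.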
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered to be fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors can be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method retains all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements can themselves be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles whose shape is described by a log-normal distribution of radius length and direction (Muinonen, EMP, 72, 1996). By changing one parameter of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependences of the intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to cometary observations.
Molecular simulation investigation of the nanorheology of an entangled polymer melt
NASA Astrophysics Data System (ADS)
Karim, Mir; Khare, Rajesh; Indei, Tsutomu; Schieber, Jay
2014-03-01
Knowledge of the "local rheology" is important for viscoelastic systems that contain significant structural and dynamic heterogeneities, such as cellular and extra-cellular crowded environments. For homogeneous viscoelastic media, a study of probe particle motion provides information on the microstructural evolution of the medium in response to the probe particle motion. Over the last two decades, probe particle rheology has emerged as a leading experimental technique for capturing the local rheology of complex fluids. In recent work [M. Karim, S. C. Kohale, T. Indei, J. D. Schieber, and R. Khare, Phys. Rev. E
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method to find the perimeter of any enclosed object. The centroid is calculated as the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
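The intensity-weighted center-of-mass step described above is simple to sketch. The NASA algorithm itself is not published in this abstract, so the function below is only an illustrative reconstruction of that one step, assuming the input is a single segmented object with zero intensity outside it.

```python
import numpy as np

def intensity_weighted_centroid(image):
    """Centroid of an object as the intensity-weighted center of mass.

    `image` holds the pixel intensities of one segmented object
    (zero everywhere outside the object). Returns (row, col).
    """
    img = np.asarray(image, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    # Weight each pixel coordinate by its intensity and normalize.
    return (rows * img).sum() / total, (cols * img).sum() / total
```

A perimeter trace (the eight-direction search) would supply the object mask before this step; the overlap decomposition stage then splits multi-particle objects into several such centroids.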
Fate and Transport of Nanoparticles in Porous Media: A Numerical Study
NASA Astrophysics Data System (ADS)
Taghavy, Amir
Understanding the transport characteristics of NPs in natural soil systems is essential to revealing their potential impact on the food chain and groundwater. In addition, many nanotechnology-based remedial measures require effective transport of NPs through soil, which necessitates accurate understanding of their transport and retention behavior. Based upon conceptual knowledge of the environmental behavior of NPs, mathematical models can be developed to represent the coupling of processes that govern the fate of NPs in the subsurface, serving as effective tools for risk assessment and/or design of remedial strategies. This work presents an innovative hybrid Eulerian-Lagrangian modeling technique for simulating the simultaneous reactive transport of nanoparticles (NPs) and dissolved constituents in porous media. Governing mechanisms considered in the conceptual model include particle-soil grain, particle-particle, particle-dissolved constituent, and particle-oil/water interface interactions. The main advantage of this technique, compared to conventional Eulerian models, lies in its ability to address non-uniformity in physicochemical particle characteristics. The developed numerical simulator was applied to investigate the fate and transport of NPs in a number of practical problems relevant to the subsurface environment. These problems included: (1) reductive dechlorination of chlorinated solvents by zero-valent iron nanoparticles (nZVI) in dense non-aqueous phase liquid (DNAPL) source zones; (2) reactive transport of dissolving silver nanoparticles (nAg) and the dissolved silver ions; (3) particle-particle interactions and their effects on the particle-soil grain interactions; and (4) influence of particle-oil/water interface interactions on NP transport in porous media.
Particle identification using the time-over-threshold measurements in straw tube detectors
NASA Astrophysics Data System (ADS)
Jowzaee, S.; Fioravanti, E.; Gianotti, P.; Idzik, M.; Korcyl, G.; Palka, M.; Przyborowski, D.; Pysz, K.; Ritman, J.; Salabura, P.; Savrie, M.; Smyrski, J.; Strzempek, P.; Wintz, P.
2013-08-01
The identification of charged particles based on energy losses in straw tube detectors has been simulated. The response of a new front-end chip developed for the PANDA straw tube tracker was implemented in the simulations and corrections for track distance to sense wire were included. Separation power for p - K, p - π and K - π pairs obtained using the time-over-threshold technique was compared with the one based on the measurement of collected charge.
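The abstract quotes separation power without defining it. A commonly used figure of merit (an assumption here, not necessarily the PANDA collaboration's exact definition) divides the distance between the mean detector responses of two particle species by their average resolution:

```python
def separation_power(mu1, sigma1, mu2, sigma2):
    """Separation power between two particle species.

    mu1, mu2:       mean responses (e.g. mean time-over-threshold or dE/dx)
    sigma1, sigma2: corresponding resolutions (standard deviations)
    Returns the mean separation in units of the average width.
    """
    return abs(mu1 - mu2) / (0.5 * (sigma1 + sigma2))
```

With this convention, a value of about 3 or more is usually read as good species separation at the given momentum.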
NASA Astrophysics Data System (ADS)
Kim, Yong W.
Various papers on shock waves are presented. The general topics addressed include: shock formation, focusing, and implosion; shock reflection and diffraction; turbulence; laser-produced plasmas and waves; ionization and shock-plasma interaction; chemical kinetics, pyrolysis, and soot formation; experimental facilities, techniques, and applications; ignition of detonation and combustion; particle entrainment and shock propagation through particle suspension; boundary layers and blast simulation; computational methods and numerical simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason M. Harp; Paul A. Demkowicz
2014-10-01
In the High Temperature Gas-Cooled Reactor (HTGR), the TRISO particle fuel serves as the primary fission product containment. However, the large number of TRISO particles present in proposed HTGRs dictates that there will be a small fraction (~10^-4 to 10^-5) of as-manufactured and in-pile particle failures that will lead to some fission product release. The matrix material surrounding the TRISO particles in fuel compacts and the structural graphite holding the TRISO particles in place can also serve as sinks for containing any released fission products. However, data on the migration of solid fission products through these materials is lacking. One of the primary goals of the AGR-3/4 experiment is to study fission product migration from failed TRISO particles in prototypic HTGR components such as structural graphite and compact matrix material. In this work, the potential for a Gamma Emission Computed Tomography (GECT) technique to non-destructively examine the fission product distribution in AGR-3/4 components and other irradiation experiments is explored. Specifically, the feasibility of using the Idaho National Laboratory (INL) Hot Fuels Examination Facility (HFEF) Precision Gamma Scanner (PGS) system for this GECT application is considered. To test the feasibility, the response of the PGS system to idealized fission product distributions has been simulated using Monte Carlo radiation transport simulations. Previous work that applied similar techniques during the AGR-1 experiment is also discussed, as well as planned uses for the GECT technique during the post-irradiation examination of the AGR-2 experiment. The GECT technique has also been applied to other irradiated nuclear fuel systems currently available in the HFEF hot cell, including oxide fuel pins, metallic fuel pins, and monolithic plate fuel.
Simulation of electrokinetic flow in microfluidic channels
NASA Astrophysics Data System (ADS)
Sabur, Romena; Matin, M.
2005-08-01
Electrokinetic phenomena have become an increasingly efficient fluid transport mechanism in micro- and nano-fluidics. These phenomena have also been applied successfully in microfluidic devices to achieve particle separation, pre-concentration, and mixing. Electrokinetic flow is produced by the action of an electric field on a fluid with a net charge, where the charged ions of the fluid are able to drag the whole solution through the channels of a microfluidic device from one analyzing point to another. We present simulation results for the electrokinetic transport of fluid in various typical micro-channel geometries such as T-channels, Y-channels, cross channels, and straight channels. In practice, the high-speed micro-PIV technique is used to measure transient fluidic phenomena in a microfluidic channel. Particle Image Velocimetry (PIV) systems provide two- or three-dimensional velocity maps in flows using whole-field techniques based on imaging the light scattered by small particles in the flow, illuminated by a laser light sheet. The system generally consists of an epifluorescent microscope, a CW laser, and a high-speed CMOS or CCD camera. The flow of a liquid (water, for example) containing fluorescent particles is then analyzed in a microchannel by the highly accurate PIV method. One can then compare the simulated and experimental microfluidic flow due to the electroosmotic effect.
Spatially Localized Particle Energization by Landau Damping in Current Sheets
NASA Astrophysics Data System (ADS)
Howes, G. G.; Klein, K. G.; McCubbin, A. J.
2017-12-01
Understanding the mechanisms of particle energization through the removal of energy from turbulent fluctuations in heliospheric plasmas is a grand challenge problem in heliophysics. Under the weakly collisional conditions typical of heliospheric plasma, kinetic mechanisms must be responsible for this energization, but the nature of those mechanisms remains elusive. In recent years, the spatial localization of plasma heating near current sheets in the solar wind and in numerical simulations has gained much attention. Here we show, using the innovative field-particle correlation technique, that the spatially localized particle energization occurring in a nonlinear gyrokinetic simulation has the velocity-space signature of Landau damping, suggesting that this well-known collisionless damping mechanism indeed leads to spatially localized heating in the vicinity of current sheets.
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
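The chord-based technique above samples chord lengths from tabulated cumulative density functions. A minimal inverse-transform sampler over such a tabulated CDF (an illustrative sketch, not the authors' code) can be written as:

```python
import numpy as np

def sample_chords(lengths, cdf, n, seed=None):
    """Draw n chord lengths by inverse-transform sampling.

    lengths: tabulated chord lengths (increasing)
    cdf:     cumulative density function at those lengths,
             nondecreasing from 0 to 1
    """
    rng = np.random.default_rng(seed)
    # Uniform deviates mapped through the inverse CDF by interpolation.
    return np.interp(rng.random(n), cdf, lengths)
```

Alternating such draws between the trabecular-bone and marrow-cavity CDFs produces the alpha-particle path segments whose energy deposition is then tallied per tissue.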
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, as well as a case study highlighting the tool's effectiveness.
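The PSO variant used for Morpheus is not specified beyond the name. A minimal global-best PSO of the textbook form can be sketched as follows; all parameter values are generic defaults, not tuned Morpheus settings.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box with a basic global-best particle swarm.

    bounds: list of (lo, hi) per dimension
    w:      inertia weight; c1/c2: cognitive/social coefficients
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

In the tuning application described above, `f` would wrap a simulation run and return the mismatch against flight test data; the swarm's insensitivity to scale and discontinuities is what makes that practical.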
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acciarri, R.; Adams, C.; An, R.
Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Allen, Peter B.; Milne, Graham; Doepker, Byron R.; Chiu, Daniel T.
2010-01-01
This paper describes a technique for rapidly exchanging the solution environment near a surface by displacing laminar flow fluid streams using sudden changes in applied pressure. The method employs off-chip solenoid valves to induce pressure changes, which is important in keeping the microfluidic design simple and the operation of the system robust. The performance of this technique is characterized using simulation and validated with experiments. This technique adds to the microfluidic tool box that is currently available for manipulating the solution environment around biological particles and molecules. PMID:20221560
NASA Astrophysics Data System (ADS)
Hardy, Robert; Pates, Jackie; Quinton, John
2016-04-01
The importance of developing new techniques to study soil movement cannot be overstated, especially those that integrate new technology. Currently there are limited empirical data available about the movement of individual soil particles, particularly high-quality time-resolved data. Here we present a new technique which allows multiple individual soil particles to be traced in real time under simulated rainfall conditions. The technique utilises fluorescent videography in combination with a fluorescent soil tracer, which is based on natural particles. The system has been successfully used on particles greater than ~130 micrometres in diameter. The technique uses HD video shot at 50 frames per second, providing extremely high temporal (0.02 s) and spatial resolution (sub-millimetre) of a particle's location without the need to perturb the system. Once the tracer has been filmed, the images are processed and analysed using a particle analysis and visualisation toolkit written in Python. The toolkit enables the creation of 2- and 3-D time-resolved graphs showing the location of one or more particles. Quantitative numerical analysis of a pathway (or collection of pathways) is also possible, allowing parameters such as particle speed and displacement to be assessed. Filming the particles removes the need to destructively sample material and has many side benefits, reducing the time, money and effort expended in the collection, transport and laboratory analysis of soils, while delivering data in a digital form which is perfect for modern computer-driven analysis techniques. There are many potential applications for the technique. High-resolution empirical data on how soil particles move could be used to create, parameterise and evaluate soil movement models, particularly those that use the movement of individual particles.
As data can be collected while rainfall is occurring, the technique may offer the ability to study systems under dynamic conditions (rather than rainfall of a constant intensity), which are more realistic; this was one of the motivations behind its development.
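As a sketch of the kind of pathway analysis the toolkit performs (the actual toolkit is not shown; this assumes particle positions sampled at the stated 50 frames per second):

```python
import numpy as np

def track_kinematics(positions, fps=50.0):
    """Per-frame speeds and net displacement of one tracked particle path.

    positions: sequence of (x, y) coordinates, one per video frame
    fps:       frame rate of the footage
    Returns (speeds per frame interval, straight-line displacement).
    """
    p = np.asarray(positions, dtype=float)
    # Distance moved between consecutive frames, converted to speed.
    steps = np.linalg.norm(np.diff(p, axis=0), axis=1)
    return steps * fps, float(np.linalg.norm(p[-1] - p[0]))
```

The same per-frame differencing extends directly to 3-D coordinates and to collections of pathways.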
Coding considerations for standalone molecular dynamics simulations of atomistic structures
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-10-01
The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach with C language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousands of particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing one's own code has the obvious advantage of teaching techniques applicable to new problems.
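The velocity-time integration referred to above is commonly implemented as the velocity Verlet scheme. The paper's own examples are in C; the sketch below shows the same integrator in Python for brevity (an illustration, not the paper's code), with the acceleration supplied by the chosen potential's gradient.

```python
import numpy as np

def velocity_verlet(x, v, accel, dt, steps):
    """Integrate Newtonian motion with the velocity Verlet scheme.

    x, v:  initial position and velocity arrays
    accel: function mapping positions to accelerations (force / mass)
    """
    x, v = np.array(x, dtype=float), np.array(v, dtype=float)
    a = accel(x)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = accel(x)                     # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity from averaged accel
        a = a_new
        traj.append(x.copy())
    return np.array(traj), v
```

For an embedded-atom metal, `accel` would evaluate the negative gradient of the embedding-plus-pair energy over all neighbours; the scheme is time-reversible and conserves energy well over long runs.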
Visualizing turbulent mixing of gases and particles
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Philip J.; Jain, Sandeep
1995-01-01
A physical model and interactive computer graphics techniques have been developed for the visualization of the basic physical process of stochastic dispersion and mixing from steady-state CFD calculations. The mixing of massless particles and inertial particles is visualized by transforming the vector field from a traditionally Eulerian reference frame into a Lagrangian reference frame. Groups of particles are traced through the vector field for the mean path as well as their statistical dispersion about the mean position by using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of particles in a turbulent environment are traced, not just mean paths. In combustion simulations of many industrial processes, good mixing is required to achieve a sufficient degree of combustion efficiency. The ability to visualize this multiphase mixing can not only help identify poor mixing but also explain the mechanism for poor mixing. The information gained from the visualization can be used to improve the overall combustion efficiency in utility boilers or propulsion devices. We have used this technique to visualize steady-state simulations of the combustion performance in several furnace designs.
NASA Astrophysics Data System (ADS)
Powis, Andrew T.; Shneider, Mikhail N.
2018-05-01
Incoherent Thomson scattering is a non-intrusive technique commonly used for measuring local plasma density. Within low-density, low-temperature plasmas and for sufficient laser intensity, the laser may perturb the local electron density via the ponderomotive force, causing the diagnostic to become intrusive and leading to erroneous results. A theoretical model for this effect is validated numerically via kinetic simulations of a quasi-neutral plasma using the particle-in-cell technique.
Balancing Particle and Mesh Computation in a Particle-In-Cell Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert
2016-01-01
The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both of these are subject to load imbalances that can degrade performance and that evolve during a simulation. Each can be addressed adequately on its own, but optimizing for just one can introduce significant load imbalances in the other, degrading overall performance. A technique has been developed based on Golden Section Search that minimizes wallclock time given prior information on wallclock time and on current particle distribution and mesh cost per cell, and that also adapts to the evolution of load imbalance in both particle and mesh work. In problems of interest this doubled the performance of full system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
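The core one-dimensional optimizer named above can be sketched as follows. This is an illustrative textbook form; the XGC1 implementation additionally incorporates prior timing data and adapts online as the imbalance evolves.

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal f on [a, b] by golden section search.

    Each iteration shrinks the bracket by the inverse golden ratio.
    (f is re-evaluated at both interior points per iteration for clarity;
    a production version would reuse one evaluation.)
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    while abs(b - a) > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) < f(d):
            b = d   # minimum lies in [a, d]
        else:
            a = c   # minimum lies in [c, b]
    return 0.5 * (a + b)
```

In the load-balancing context, the scalar being searched over would parameterize the split of work between the particle and mesh kernels, and `f` would be the predicted (or measured) wallclock time per step.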
Pant, H J; Sharma, V K; Kamudu, M Vidya; Prakash, S G; Krishanamoorthy, S; Anandam, G; Rao, P Seshubabu; Ramani, N V S; Singh, Gursharan; Sonde, R R
2009-09-01
Knowledge of the residence time distribution (RTD), mean residence time (MRT) and degree of axial mixing of the solid phase is required for efficient operation of the coal gasification process. A radiotracer technique was used to measure the RTD of coal particles in a pilot-scale fluidized bed gasifier (FBG). Two different radiotracers, lanthanum-140 and gold-198 labeled coal particles (100 g), were used independently. The radiotracer was instantaneously injected into the coal feed line and monitored at the ash extraction line at the bottom and the gas outlet at the top of the gasifier using collimated scintillation detectors. The measured RTD data were treated and the MRTs of coal/ash particles were determined. The treated data were simulated using a tanks-in-series model. The simulation of the RTD data indicated a good degree of mixing, with a small fraction of the feed material bypassing/short-circuiting from the bottom of the gasifier. The results of the investigation were found useful for optimizing the design and operation of the FBG and for scale-up of the gasification process.
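The tanks-in-series fit can be reproduced from the standard closed form for N ideal stirred tanks in series, E(t) = t^(N-1) exp(-t/τ) / ((N-1)! τ^N) with τ = MRT/N per tank. The sketch below is the generic model, not the authors' fitted parameters.

```python
import math
import numpy as np

def tanks_in_series_rtd(t, n_tanks, mrt):
    """RTD curve E(t) for n_tanks ideal stirred tanks in series.

    t:       time points (array-like)
    n_tanks: number of tanks N (model parameter fitted to the data)
    mrt:     overall mean residence time; each tank holds mrt / N
    """
    tau = mrt / n_tanks  # residence time per tank
    t = np.asarray(t, dtype=float)
    return (t ** (n_tanks - 1) * np.exp(-t / tau)
            / (math.factorial(n_tanks - 1) * tau ** n_tanks))
```

Fitting N and MRT to the measured detector response gives the degree of axial mixing (small N means well mixed); early excess area under the measured curve relative to the fit signals the bypassing/short-circuiting noted above.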
Holtschlag, David J.; Koschik, John A.
2004-01-01
Source areas to public water intakes on the St. Clair-Detroit River Waterway were identified by use of hydrodynamic simulation and particle-tracking analyses to help protect public supplies from contaminant spills and discharges. This report describes techniques used to identify these areas and illustrates typical results using selected points on St. Clair River and Lake St. Clair. Parameterization of an existing two-dimensional hydrodynamic model (RMA2) of the St. Clair-Detroit River Waterway was enhanced to improve estimation of local flow velocities. Improvements in simulation accuracy were achieved by computing channel roughness coefficients as a function of flow depth, and determining eddy viscosity coefficients on the basis of velocity data. The enhanced parameterization was combined with refinements in the model mesh near 13 public water intakes on the St. Clair-Detroit River Waterway to improve the resolution of flow velocities while maintaining consistency with flow and water-level data. Scenarios representing a range of likely flow and wind conditions were developed for hydrodynamic simulation. Particle-tracking analyses combined advective movements described by hydrodynamic scenarios with random components associated with sub-grid-scale movement and turbulent mixing to identify source areas to public water intakes.
Filtering in Hybrid Dynamic Bayesian Networks
NASA Technical Reports Server (NTRS)
Andersen, Morten Nonboe; Andersen, Rasmus Orum; Wheeler, Kevin
2000-01-01
We implement a 2-time slice dynamic Bayesian network (2T-DBN) framework and make a 1-D state estimation simulation, an extension of the experiment in (v.d. Merwe et al., 2000), and compare different filtering techniques. Furthermore, we demonstrate experimentally that inference in a complex hybrid DBN is possible by simulating fault detection in a watertank system, an extension of the experiment in (Koller & Lerner, 2000) using a hybrid 2T-DBN. In both experiments, we perform approximate inference using standard filtering techniques, Monte Carlo methods, and combinations of these. In the watertank simulation, we also demonstrate the use of 'non-strict' Rao-Blackwellisation. We show that the unscented Kalman filter (UKF) and the UKF in a particle filtering framework outperform the generic particle filter, the extended Kalman filter (EKF) and the EKF in a particle filtering framework with respect to accuracy in terms of estimation RMSE and sensitivity with respect to choice of network structure. In particular, we demonstrate the superiority of the UKF in a PF framework when our beliefs about how the data were generated are wrong. Furthermore, we investigate the influence of data noise in the watertank simulation using the UKF and the UKF in a PF framework, and show that the algorithms are more sensitive to changes in the measurement noise level than in the process noise level. Theory and implementation are based on (v.d. Merwe et al., 2000).
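The generic particle filter used as a baseline above is the bootstrap filter. A compact 1-D sketch (a random-walk state with assumed noise levels, not the paper's watertank model) illustrates the predict-weight-resample cycle:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state.

    Assumed model: x_t = x_{t-1} + N(0, q^2), z_t = x_t + N(0, r^2).
    Returns the posterior-mean state estimate at each time step.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # prior samples
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)            # weight
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))            # posterior mean
        # Multinomial resampling to fight weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

The UKF-proposal variants favoured in the paper replace the blind predict step with a proposal informed by the current observation, which is what improves robustness when the assumed model is wrong.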
NASA Astrophysics Data System (ADS)
Richard, R. L.; El-Alaoui, M.; Ashour-Abdalla, M.; Walker, R. J.
2009-04-01
We have modeled the entry of solar energetic particles (SEPs) into the magnetosphere during the November 24-25, 2001 magnetic storm and the trapping of particles in the inner magnetosphere. The study used the technique of following many test particles, protons with energies greater than about 100 keV, in the electric and magnetic fields from a global magnetohydrodynamic (MHD) simulation of the magnetosphere during this storm. SEP protons formed a quasi-trapped and trapped population near and within geosynchronous orbit. Preliminary data comparisons show that the simulation does a reasonably good job of predicting the differential flux measured by geosynchronous spacecraft. Particle trapping took place mainly as a result of particles becoming non-adiabatic and crossing onto closed field lines. Particle flux in the inner magnetosphere increased dramatically as an interplanetary shock impacted and compressed the magnetosphere near 0600 UT, but long-term trapping (hours) did not become widespread until about an hour later, during a further compression of the magnetosphere. Trapped and quasi-trapped particles were lost during the simulation by motion through the magnetopause and by precipitation, primarily the former, causing the particle population near and within geosynchronous orbit to gradually decrease during the latter part of the interval.
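The abstract does not state which integrator was used to follow the test protons through the MHD fields; the Boris algorithm is the standard choice for pushing charged particles through gridded E and B fields, sketched here under that assumption:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """Advance a charged test particle one step with the Boris algorithm.

    x, v: position and velocity (3-vectors); E, B: local fields at x
    q_m:  charge-to-mass ratio; dt: time step
    The magnetic rotation conserves |v| exactly when E = 0.
    """
    v_minus = v + 0.5 * q_m * E * dt          # first half electric kick
    t = 0.5 * q_m * B * dt                    # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)   # rotation, step 2
    v_new = v_plus + 0.5 * q_m * E * dt       # second half electric kick
    return x + v_new * dt, v_new
```

In a study like the one above, E and B would be interpolated from the time-dependent MHD grids at each particle position, and adiabaticity would be monitored to identify where trapping onto closed field lines occurs.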
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for particle simulation methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit (MPS) method, and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled on large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in each column to differ from row to row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual. Load balancing is achieved by minimizing this residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architectures, which were difficult to handle with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
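The core idea, treating per-process load imbalance as a residual driven to zero by an iterative relaxation of sub-domain boundaries, can be sketched in one dimension. This is a drastically simplified, single-level analogue under assumed parameters; the paper's method is multi-dimensional and multigrid-accelerated.

```python
import numpy as np

def rebalance_1d(cost, n_domains=4, n_iters=300, relax=0.5):
    """1-D load-balancing sketch: relax interior sub-domain boundaries until
    each domain's integrated workload approaches the mean. `cost` is a
    per-cell cost array; boundaries are fractional cell positions."""
    n = cost.size
    bounds = np.linspace(0.0, n, n_domains + 1)
    csum = np.concatenate(([0.0], np.cumsum(cost)))
    grid = np.arange(n + 1, dtype=float)

    def load(a, b):  # integrated cost between fractional positions a and b
        return np.interp(b, grid, csum) - np.interp(a, grid, csum)

    for _ in range(n_iters):
        loads = np.array([load(bounds[i], bounds[i + 1]) for i in range(n_domains)])
        for i in range(1, n_domains):  # interior boundaries only
            density = max(cost[min(int(bounds[i]), n - 1)], 1e-12)
            # move the boundary toward the heavier neighbouring domain;
            # the step is the load residual divided by the local cost density
            bounds[i] += relax * (loads[i] - loads[i - 1]) / (2.0 * density)
        bounds[1:-1] = np.clip(bounds[1:-1], 1e-6, n - 1e-6)
        bounds.sort()
    loads = np.array([load(bounds[i], bounds[i + 1]) for i in range(n_domains)])
    return bounds, loads

cost = 1.0 + np.sin(np.linspace(0.0, 3.0, 400)) ** 2   # non-uniform workload
bounds, loads = rebalance_1d(cost)
```

Because each relaxation sweep only needs the measured loads of neighbouring processes, such a scheme can be re-run cheaply whenever runtime monitoring detects drift, which is the property the abstract emphasizes.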
Small Particle Impact Damage on Different Glass Substrates
NASA Technical Reports Server (NTRS)
Waxman, R.; Guven, I.; Gray, P.
2017-01-01
Impact experiments using sand particles were performed on four distinct glass substrates. The sand particles were characterized using the X-Ray micro-CT technique; 3-D reconstruction of the particles was followed by further size and shape analyses. High-speed video footage from impact tests was used to calculate the incoming and rebound velocities of the individual sand impact events, as well as particle volume. Further, video analysis was used in conjunction with optical and scanning electron microscopy to relate the incoming velocity and shape of the particles to subsequent fractures, including both radial and lateral cracks. Analysis was performed using peridynamic simulations.
Optical trapping performance of dielectric-metallic patchy particles
Lawson, Joseph L.; Jenness, Nathan J.; Clark, Robert L.
2015-01-01
We demonstrate a series of simulation experiments examining the optical trapping behavior of composite micro-particles consisting of a small metallic patch on a spherical dielectric bead. A full parameter space of patch shapes, based on current state of the art manufacturing techniques, and optical properties of the metallic film stack is examined. Stable trapping locations and optical trap stiffness of these particles are determined based on the particle design and potential particle design optimizations are discussed. A final test is performed examining the ability to incorporate these composite particles with standard optical trap metrology technologies. PMID:26832054
Intensity-enhanced MART for tomographic PIV
NASA Astrophysics Data System (ADS)
Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun
2016-05-01
A novel technique to shrink elongated particles and suppress ghost particles in the particle reconstruction of tomographic particle image velocimetry is presented. This method, named the intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity-suppressing factor to improve the quality of particle reconstruction and, consequently, the precision of velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction, particle elongation, and velocimetry. The simulation is performed at seven different seeding densities. A comparison of spatial-filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate the ghost and actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the application of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
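IntE-MART builds on the standard MART update, in which every voxel along a line of sight is multiplied by the ratio of the measured pixel intensity to the current re-projection, raised to the voxel's weight. A toy reconstruction showing that baseline update follows; the inverse-diffusion and intensity-suppression steps that distinguish IntE-MART are not reproduced here, and the geometry is an assumed miniature example.

```python
import numpy as np

def mart(W, p, n_iters=100, mu=0.9):
    """Baseline multiplicative ART: for each measurement j, multiply the
    voxel intensities by (p_j / reprojection_j) ** (mu * W_j).
    W: (n_pixels, n_voxels) weight matrix; p: measured pixel intensities."""
    f = np.ones(W.shape[1])
    for _ in range(n_iters):
        for j in range(p.size):
            proj = W[j] @ f
            if proj > 0.0:
                f *= (p[j] / proj) ** (mu * W[j])
    return f

# Toy 2x2 voxel volume viewed by row-sum and column-sum "cameras"
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
f_true = np.array([1.0, 2.0, 3.0, 4.0])
p = W @ f_true          # simulated noise-free measurements
f_rec = mart(W, p)      # multiplicative updates keep f_rec non-negative
```

The multiplicative form guarantees non-negative intensities, but with few views it admits consistent solutions other than the true one, which is exactly how ghost particles arise.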
Computational techniques for flows with finite-rate condensation
NASA Technical Reports Server (NTRS)
Candler, Graham V.
1993-01-01
A computational method to simulate the inviscid two-dimensional flow of a two-phase fluid was developed. This computational technique treats the gas phase and each of a prescribed number of particle sizes as separate fluids which are allowed to interact with one another. Thus, each particle-size class is allowed to move through the fluid at its own velocity at each point in the flow field. Mass, momentum, and energy are exchanged between each particle class and the gas phase. It is assumed that the particles do not collide with one another, so that there is no inter-particle exchange of momentum and energy. However, the particles are allowed to grow, and therefore, they may change from one size class to another. Appropriate rates of mass, momentum, and energy exchange between the gas and particle phases and between the different particle classes were developed. A numerical method was developed for use with this equation set. Several test cases were computed and show qualitative agreement with previous calculations.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Shastry, K.; Satyal, Suman; Weiss, Alexander
2012-02-01
Time-of-flight positron annihilation induced Auger electron spectroscopy (TOF-PAES) is a highly surface-selective analytical technique that uses the time of flight of Auger electrons resulting from the annihilation of core electrons by incident positrons trapped in the image-potential well. We simulated and modeled the trajectories of the charged particles in TOF-PAES using SIMION, both for the current system and for the development of a new high-resolution system at UT Arlington. This poster presents the SIMION simulation results, time-of-flight calculations, and Larmor radius calculations for the current system as well as the new system.
Charged-particle emission tomography
Ding, Yijun; Caucci, Luca; Barrett, Harrison H.
2018-01-01
Purpose: Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Methods: Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed, and reconstruction algorithms for CPET are developed. Results: Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. Conclusions: We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. PMID:28370094
Vectorization of a particle simulation method for hypersonic rarefied flow
NASA Technical Reports Server (NTRS)
Mcdonald, Jeffrey D.; Baganoff, Donald
1988-01-01
An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine-grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulations on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit, where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45-degree angle of attack was simulated, utilizing over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.
Computer Simulation of Fracture in Aerogels
NASA Technical Reports Server (NTRS)
Good, Brian S.
2006-01-01
Aerogels are of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While the gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. In this work, we investigate the strength and fracture behavior of silica aerogels using a molecular statics-based computer simulation technique. The gels' structure is simulated via a Diffusion Limited Cluster Aggregation (DLCA) algorithm, which produces fractal structures representing experimentally observed aggregates of so-called secondary particles, themselves composed of amorphous silica primary particles an order of magnitude smaller. We have performed multi-length-scale simulations of fracture in silica aerogels, in which the interaction between two secondary particles is assumed to be described by a Morse pair potential parameterized such that the potential range is much smaller than the secondary particle size. These Morse parameters are obtained by atomistic simulation of models of the experimentally observed amorphous silica "bridges," with the fracture behavior of these bridges modeled via molecular statics using a Morse/Coulomb potential for silica. We consider the energetics of the fracture, and compare qualitative features of low- and high-density gel fracture.
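The Morse pair potential used for the secondary-particle interaction has a simple closed form; a large inverse-range parameter relative to the particle size gives the short-ranged behavior described above. The parameter values below are illustrative assumptions, not the paper's atomistically derived fit.

```python
import numpy as np

def morse_energy(r, D=1.0, a=6.0, r0=1.0):
    """Morse pair potential U(r) = D * ((1 - exp(-a*(r - r0)))**2 - 1),
    with well depth D, inverse range a, and equilibrium separation r0."""
    return D * ((1.0 - np.exp(-a * (r - r0))) ** 2 - 1.0)

def morse_force(r, D=1.0, a=6.0, r0=1.0):
    """Radial force F(r) = -dU/dr; negative values pull the pair back
    toward the equilibrium separation r0."""
    e = np.exp(-a * (r - r0))
    return -2.0 * D * a * e * (1.0 - e)
```

At r = r0 the energy is -D and the force vanishes, which is the bound "bridge" state; fracture in a molecular-statics picture corresponds to stretching the pair past the inflection point where the restoring force peaks.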
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2017-03-01
We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
A numerical method for shock driven multiphase flow with evaporating particles
NASA Astrophysics Data System (ADS)
Dahal, Jeevan; McFarland, Jacob A.
2017-09-01
A numerical method for predicting the interaction of active, phase-changing particles in a shock driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single particle simulation with the analytical solution. Furthermore, simple single particle parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particle sizes the vorticity deposition is decreased compared to the single-particle-size case at the mean diameter.
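The single-particle validation mentioned above can be mimicked with a linear (Stokes) drag relaxation test, for which an analytical solution exists: v(t) = v_gas + (v0 - v_gas) e^(-t/τ). The sketch below integrates this with explicit Euler and compares against the exact curve; the response time and velocities are made-up values, not those of the paper.

```python
import numpy as np

# dv/dt = (v_gas - v) / tau  ->  v(t) = v_gas + (v0 - v_gas) * exp(-t/tau)
tau = 2e-3                    # particle response time [s] (illustrative)
v_gas, v0 = 10.0, 0.0         # carrier-gas and initial particle velocity
dt, n_steps = 1e-4, 200

v, hist = v0, [v0]
for _ in range(n_steps):
    v += dt * (v_gas - v) / tau     # explicit Euler drag update
    hist.append(v)

t = np.arange(n_steps + 1) * dt
v_exact = v_gas + (v0 - v_gas) * np.exp(-t / tau)
err = np.max(np.abs(np.array(hist) - v_exact))
```

The same relaxation structure applies to the particle temperature history, with a thermal response time in place of τ.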
NASA Technical Reports Server (NTRS)
Jordan, F. L., Jr.
1980-01-01
As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially-applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically-scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results that indicate improved control of chemical drift by use of winglets are presented to demonstrate test methods.
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
A new approach to simulating collisionless dark matter fluids
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom; Kaehler, Ralf
2013-09-01
Recently, we have shown how current cosmological N-body codes already follow the fine-grained phase-space information of the dark matter fluid. Using a tetrahedral tessellation of the three-dimensional manifold that describes perfectly cold fluids in six-dimensional phase space, the phase-space distribution function can be followed throughout the simulation. This allows one to project the distribution function into configuration space to obtain highly accurate densities, velocities, and velocity dispersions. Here, we exploit this technique to take first steps towards an improved particle-mesh technique. At its heart, the new method relies on a piecewise linear approximation of the phase-space distribution function rather than the usual particle discretization. We use pseudo-particles that approximate the masses of the tetrahedral cells up to quadrupolar order as the locations for cloud-in-cell (CIC) deposit, instead of the particle locations themselves as in standard CIC deposit. We demonstrate that this modification already gives much improved stability and more accurate dynamics of the collisionless dark matter fluid at high force and low mass resolution. We demonstrate the validity and advantages of this method with various test problems, as well as hot/warm dark matter simulations, which have been known to exhibit artificial fragmentation. This completely unphysical behaviour is much reduced in the new approach. The current limitations of our approach are discussed in detail and future improvements are outlined.
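The deposit kernel itself is the standard cloud-in-cell scheme; what changes in the method above is only *what* gets deposited (pseudo-particles sampling the tetrahedral cells rather than the raw particles). For reference, a minimal 1-D periodic CIC deposit looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def cic_deposit(positions, masses, n_cells, box=1.0):
    """1-D periodic cloud-in-cell (CIC) deposit: each particle's mass is
    shared linearly between its two nearest cell centres."""
    dx = box / n_cells
    x = positions / dx - 0.5                 # position in cell-centre units
    i = np.floor(x).astype(int)
    frac = x - i                             # linear weight of the right cell
    rho = np.zeros(n_cells)
    # np.add.at performs an unbuffered scatter-add (repeated indices accumulate)
    np.add.at(rho, i % n_cells, masses * (1.0 - frac))
    np.add.at(rho, (i + 1) % n_cells, masses * frac)
    return rho / dx                          # convert mass per cell to density

positions = rng.random(1000)
masses = np.full(1000, 1.0 / 1000)
rho = cic_deposit(positions, masses, 64)
```

Because the weights for the two neighbouring cells sum to one, the deposit conserves total mass exactly, a property the pseudo-particle variant inherits.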
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
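The equation-free idea, estimating the time derivative of the coarse moments from a short fine-scale burst and then taking a large projective step, can be sketched with a toy moment law standing in for the Monte-Carlo scheme. The dynamics below (dm/dt = -m² plus simulated sampling noise) are an illustrative assumption, not the coagulation model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def fine_burst(m0, dt, n_steps, n_samples=4000):
    """Stand-in for a short constant-number Monte-Carlo burst: evolve a toy
    moment law dm/dt = -m^2 with a small amount of sampling noise."""
    m, traj = m0, [m0]
    for _ in range(n_steps):
        m = m + dt * (-m * m) + rng.normal(0.0, 1e-3 / np.sqrt(n_samples))
        traj.append(m)
    return np.array(traj)

def projective_step(m0, dt_fine=1e-3, n_fine=20, dt_coarse=0.05):
    """One equation-free step: run a short fine burst, fit the coarse time
    derivative of the moment, then take a large extrapolation step."""
    traj = fine_burst(m0, dt_fine, n_fine)
    t = np.arange(n_fine + 1) * dt_fine
    slope = np.polyfit(t, traj, 1)[0]        # estimated dm/dt from the burst
    return traj[-1] + dt_coarse * slope

m, t = 1.0, 0.0
while t < 1.0:                               # integrate to t ~ 1 in big jumps
    m = projective_step(m)
    t += 0.020 + 0.05                        # burst duration + coarse jump
# exact solution of dm/dt = -m^2 from m(0) = 1 is m(t) = 1/(1 + t)
```

The speedup comes from the coarse jump being much larger than the burst it is extrapolated from; the fit over the whole burst also averages out the Monte-Carlo noise in the slope estimate.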
Tracking Simulation of Third-Integer Resonant Extraction for Fermilab's Mu2e Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Chong Shik; Amundson, James; Michelotti, Leo
2015-02-13
The Mu2e experiment at Fermilab requires acceleration and transport of intense proton beams in order to deliver stable, uniform particle spills to the production target. To meet the experimental requirement, particles will be extracted slowly from the Delivery Ring to the external beamline. Using Synergia2, we have performed multi-particle tracking simulations of third-integer resonant extraction in the Delivery Ring, including space charge effects, physical beamline elements, and apertures. A piecewise linear ramp profile of the tune quadrupoles was used to maintain a constant averaged spill rate throughout extraction. To study and minimize beam losses, we implemented and introduced a number of features, beamline element apertures, and septum plane alignments. Additionally, the RF knockout (RFKO) technique, which excites particles transversely, is employed for spill regulation. Combined with a feedback system, it assists in fine-tuning spill uniformity. Simulation studies were carried out to optimize the RFKO feedback scheme, which will be helpful in designing the final spill regulation system.
An image filtering technique for SPIDER visible tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonnesu, N., E-mail: nicola.fonnesu@igi.cnr.it; Agostini, M.; Brombin, M.
2014-02-15
The tomographic diagnostic developed for the beam generated in the SPIDER facility (100 keV, 50 A prototype negative ion source of ITER neutral beam injector) will characterize the two-dimensional particle density distribution of the beam. The simulations described in the paper show that instrumental noise has a large influence on the maximum achievable resolution of the diagnostic. To reduce its impact on beam pattern reconstruction, a filtering technique has been adapted and implemented in the tomography code. This technique is applied to the simulated tomographic reconstruction of the SPIDER beam, and the main results are reported.
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation-induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise-linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation-induced plasma involved are given.
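The antithetic variates technique mentioned above pairs each random sample with its mirror image so that the induced negative correlation cancels part of the sampling noise. A minimal demonstration on a toy integral (not the plasma transport problem itself; the integrand is an assumption chosen for clarity):

```python
import numpy as np

rng = np.random.default_rng(42)
f = np.exp                        # toy integrand on [0, 1]; exact value e - 1

def plain(n):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    return f(rng.random(n)).mean()

def antithetic(n):
    """Antithetic-variates estimate: average each sample u with its mirror
    1 - u. For a monotone f, f(u) and f(1 - u) are negatively correlated,
    so part of the sampling variance cancels at the same total sample count."""
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u)).mean()

# compare the empirical variance of the two estimators over repeated runs
n_rep, n = 400, 1000
var_plain = np.var([plain(n) for _ in range(n_rep)])
var_anti = np.var([antithetic(n) for _ in range(n_rep)])
```

For this integrand the variance reduction is dramatic, which is why such techniques pay off in expensive transport simulations where every history is costly.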
Zhao, Tong; Liu, Kai; Takei, Masahiro
2016-01-01
The inertial migration of neutrally buoyant spherical particles in high-particle-concentration (αpi > 3%) suspension flow in a square microchannel was investigated by means of a multi-electrode sensing method, which overcomes the limitation of conventional optical measurement techniques in high-particle-concentration suspensions caused by interference from the large number of particles. Based on the measured particle concentrations near the wall and at the corner of the square microchannel, particle cross-sectional migration ratios are calculated to quantitatively estimate the degree of migration. As a result, particle migration to four stable equilibrium positions near the centre of each face of the square microchannel is found only in the cases of low initial particle concentration, up to 5.0 v/v%, while the migration becomes partial as the initial particle concentration reaches 10.0 v/v% and disappears for initial particle concentrations αpi ≥ 15%. In order to clarify the influence of particle-particle interactions on particle migration, an Eulerian-Lagrangian numerical model was proposed, employing the Lennard-Jones potential as the inter-particle potential, while the inertial lift coefficient is calculated by a pre-processed semi-analytical simulation. Moreover, based on the experimental and simulation results, a dimensionless number named the migration index was proposed to evaluate the influence of the initial particle concentration on the particle migration phenomenon. A migration index of less than 0.1 is found to denote obvious particle inertial migration, while a larger migration index denotes its absence. This index is helpful for estimating the maximum initial particle concentration in the design of inertial microfluidic devices. PMID:27158288
Kassiopeia: a modern, extensible C++ particle tracking package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furse, Daniel; Groh, Stefan; Trost, Nikolaus
2017-05-16
The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease of use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle's state evolves. Thanks to the very general approach taken by Kassiopeia, it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.
NASA Astrophysics Data System (ADS)
Akashi-Ronquest, M.; Amaudruz, P.-A.; Batygov, M.; Beltran, B.; Bodmer, M.; Boulay, M. G.; Broerman, B.; Buck, B.; Butcher, A.; Cai, B.; Caldwell, T.; Chen, M.; Chen, Y.; Cleveland, B.; Coakley, K.; Dering, K.; Duncan, F. A.; Formaggio, J. A.; Gagnon, R.; Gastler, D.; Giuliani, F.; Gold, M.; Golovko, V. V.; Gorel, P.; Graham, K.; Grace, E.; Guerrero, N.; Guiseppe, V.; Hallin, A. L.; Harvey, P.; Hearns, C.; Henning, R.; Hime, A.; Hofgartner, J.; Jaditz, S.; Jillings, C. J.; Kachulis, C.; Kearns, E.; Kelsey, J.; Klein, J. R.; Kuźniak, M.; LaTorre, A.; Lawson, I.; Li, O.; Lidgard, J. J.; Liimatainen, P.; Linden, S.; McFarlane, K.; McKinsey, D. N.; MacMullin, S.; Mastbaum, A.; Mathew, R.; McDonald, A. B.; Mei, D.-M.; Monroe, J.; Muir, A.; Nantais, C.; Nicolics, K.; Nikkel, J. A.; Noble, T.; O'Dwyer, E.; Olsen, K.; Orebi Gann, G. D.; Ouellet, C.; Palladino, K.; Pasuthip, P.; Perumpilly, G.; Pollmann, T.; Rau, P.; Retière, F.; Rielage, K.; Schnee, R.; Seibert, S.; Skensved, P.; Sonley, T.; Vázquez-Jáuregui, E.; Veloce, L.; Walding, J.; Wang, B.; Wang, J.; Ward, M.; Zhang, C.
2015-05-01
Many current and future dark matter and neutrino detectors are designed to measure scintillation light with a large array of photomultiplier tubes (PMTs). The energy resolution and particle identification capabilities of these detectors depend in part on the ability to accurately identify individual photoelectrons in PMT waveforms despite large variability in pulse amplitudes and pulse pileup. We describe a Bayesian technique that can identify the times of individual photoelectrons in a sampled PMT waveform without deconvolution, even when pileup is present. To demonstrate the technique, we apply it to the general problem of particle identification in single-phase liquid argon dark matter detectors. Using the output of the Bayesian photoelectron counting algorithm described in this paper, we construct several test statistics for rejection of backgrounds for dark matter searches in argon. Compared to simpler methods based on either observed charge or peak finding, the photoelectron counting technique improves both energy resolution and particle identification of low energy events in calibration data from the DEAP-1 detector and simulation of the larger MiniCLEAN dark matter detector.
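The counting idea described in this abstract can be illustrated with a deliberately simplified sketch: model the waveform as a sum of single-photoelectron template pulses and peel them off one at a time. This is not the Bayesian algorithm used for DEAP-1/MiniCLEAN; the pulse shape, threshold, and iterative template subtraction below are all illustrative assumptions.

```python
import numpy as np

def pe_template(t, t0, tau=2.0, amp=1.0):
    """Idealized single-photoelectron pulse: one-sided exponential (hypothetical shape)."""
    dt = t - t0
    return np.where(dt >= 0, amp * np.exp(-dt / tau), 0.0)

def count_photoelectrons(waveform, t, threshold=0.3, max_pe=50):
    """Iteratively find PE times by peak picking and template subtraction.

    A crude stand-in for the Bayesian counting described in the abstract:
    at each step, take the largest residual sample, subtract a template
    anchored there, and repeat until the residual falls below threshold.
    """
    residual = waveform.copy()
    times = []
    for _ in range(max_pe):
        i = int(np.argmax(residual))
        if residual[i] < threshold:
            break
        times.append(float(t[i]))
        residual -= pe_template(t, t[i], amp=residual[i])
    return sorted(times)

# Two overlapping PEs (pileup) at t = 5 and t = 6.5
t = np.linspace(0, 20, 400)
wf = pe_template(t, 5.0) + pe_template(t, 6.5)
print(count_photoelectrons(wf, t))
```

Even with the second pulse riding on the tail of the first, the subtraction recovers two distinct photoelectron times rather than a single merged peak.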
Streaming current for particle-covered surfaces: simulations and experiments
NASA Astrophysics Data System (ADS)
Blawzdziewicz, Jerzy; Adamczyk, Zbigniew; Ekiel-Jezewska, Maria L.
2017-11-01
Developing in situ methods for assessment of surface coverage by adsorbed nanoparticles is crucial for numerous technological processes, including controlling protein deposition and fabricating diverse microstructured materials (e.g., antibacterial coatings, catalytic surfaces, and particle-based optical systems). For charged surfaces and particles, promising techniques for evaluating surface coverage are based on measurements of the electrokinetic streaming current associated with ion convection in the double-layer region. We have investigated the dependence of the streaming current on the area fraction of adsorbed particles for equilibrium and random-sequential-adsorption (RSA) distributions of spherical particles, and for periodic square and hexagonal sphere arrays. The RSA results have been verified experimentally. Our numerical results indicate that the streaming current weakly depends on the microstructure of the particle monolayer. Combining simulations with the virial expansion, we provide convenient fitting formulas for the particle and surface contributions to the streaming current as functions of area fractions. For particles that have the same ζ-potential as the surface, we find that surface roughness reduces the streaming current. Supported by NSF Award No. 1603627.
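The kind of fitting formula the abstract mentions can be sketched as a low-order, virial-style polynomial fit of normalized streaming current against particle area fraction. The data points and the quadratic functional form below are hypothetical, not the paper's results; the constant term is pinned to 1 so that a bare surface gives the full streaming current.

```python
import numpy as np

# Hypothetical normalized streaming-current data vs particle area fraction theta
theta = np.array([0.0, 0.05, 0.1, 0.2, 0.3, 0.4])
i_norm = np.array([1.00, 0.82, 0.68, 0.48, 0.36, 0.28])

# Low-order virial-style fit: I/I0 ~ 1 + c1*theta + c2*theta^2
A = np.column_stack([theta, theta**2])
c1, c2 = np.linalg.lstsq(A, i_norm - 1.0, rcond=None)[0]

def current_ratio(th):
    """Fitted normalized streaming current at area fraction th."""
    return 1.0 + c1 * th + c2 * th**2

print(f"c1 = {c1:.3f}, c2 = {c2:.3f}")
```

The negative linear coefficient captures the reduction of streaming current as adsorbed particles cover the surface, with the quadratic term describing the saturation at higher coverage.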
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womersley, J.; DiGiacomo, N.; Killian, K.
1990-04-01
Detailed detector design has traditionally been divided between engineering optimization for structural integrity and subsequent physicist evaluation. The availability of CAD systems for engineering design enables the tasks to be integrated by providing tools for particle simulation within the CAD system. We believe this will speed up detector design and avoid problems due to the late discovery of shortcomings in the detector. This could occur because of the slowness of traditional verification techniques (such as detailed simulation with GEANT). One such new particle simulation tool is described. It is being used with the I-DEAS CAD package for SSC detector designmore » at Martin-Marietta Astronautics and is to be released through the SSC Laboratory.« less
Developing a new controllable lunar dust simulant: BHLD20
NASA Astrophysics Data System (ADS)
Sun, Hao; Yi, Min; Shen, Zhigang; Zhang, Xiaojing; Ma, Shulin
2017-07-01
Identifying and eliminating the negative effects of lunar dust are of great importance for future lunar exploration. Since the available lunar samples are limited, developing terrestrial lunar dust simulants becomes critical for the study of the lunar dust problem. In this work, beyond the three existing lunar dust simulants (JSC-1Avf, NU-LHT-1D, and CLDS-i), we developed a new high-fidelity lunar dust simulant named BHLD20. We also established a methodology in which soil and dust simulants are produced by varying portions of an overall procedure, so that the properties of the products can be controlled by adjusting the feedstock preparation and heating process. The key ingredients of our preparation route are: (1) plagioclase, used as a major material in preparing all kinds of lunar dust simulants; (2) a muffle furnace, applied to conveniently enrich the glass phase in the feedstock, with the production of some composite particles; (3) a one-step sand-milling technique, employed for mass pulverization without wasting feedstock; and (4) a particle dispersant, utilized to prevent agglomeration in the lunar dust simulant and retain the real particle size. Research activities in the development of BHLD20 can help solve the lunar dust problem.
Mixing, segregation, and flow of granular materials
NASA Astrophysics Data System (ADS)
McCarthy, Joseph J.
1998-11-01
This dissertation addresses mixing, segregation, and flow of granular materials with the ultimate goal of providing fundamental understanding and tools for the rational design and optimization of mixing devices. In particular, the paradigm cases of a slowly rotated tumbler mixer and flow down an inclined plane are examined. Computational work, as well as supporting experiments, is used to probe both two- and three-dimensional systems. In the avalanching regime, the mixing and flow can be viewed either on a global scale or a local scale. On the global scale, material is transported via avalanches whose gross motion can be well described by geometrical considerations. On the local scale, the dynamics of the particle motion becomes important; particles follow complicated trajectories that are highly sensitive to differences in size, density, and morphology. By decomposing the problem in this way, it is possible to study the implications of the geometry and dynamics separately and to add complexities in a controlled fashion. This methodology allows even seemingly difficult problems (i.e., mixing in non-convex geometries and mixing of dissimilar particles) to be probed in a simple yet methodical way. In addition, this technique provides predictions of optimal mixing conditions in an avalanching tumbler, a criterion for evaluating the effect of mixer shape, and mixing enhancement strategies for both two- and three-dimensional mixers. In the continuous regime, the flow can be divided into two regions: a rapid flow region in the cascading layer at the free surface, and a fixed bed region undergoing solid body rotation. A continuum-based description, in which averages are taken across the layer, generates quantitative predictions about the flow in the cascading layer and agrees well with experiment. Incorporating mixing through a diffusive flux (as well as a constitutive expression for segregation) within the cascading layer allows for the determination of optimal mixing conditions. 
Segregation requires a detailed understanding of the interplay between the flow and the properties of the particles. A relatively mature simulation technique, particle dynamics (PD), aptly captures these effects and is eminently suited to mixing studies; particle properties can be varied on a particle-by-particle basis, and detailed mixed structures are easily captured and visualized. However, PD is computationally intensive and is therefore of questionable general utility. By combining PD and geometrical insight (in essence, by focusing the particle dynamics simulation only where it is needed), a new hybrid method of simulation, which is much faster than a conventional particle dynamics method, can be achieved. This technique can yield more than an order of magnitude increase in computational speed while maintaining the versatility of a particle dynamics simulation. Alternatively, by utilizing PD to explore segregation mechanisms in simple flows (e.g., flow down an inclined plane), heuristic models and constitutive relations for segregation can be tested. Incorporating these segregation flux terms into a continuum description of the flow in a tumbler allows rapid Lagrangian simulation of the competition between mixing and segregation. For the case of density segregation, this produces good agreement between theory and experiment with essentially no adjustable parameters. In addition, an accurate quantitative prediction of the optimal mixing time is obtained.
Particle Identification in Nuclear Emulsion by Measuring Multiple Coulomb Scattering
NASA Astrophysics Data System (ADS)
Than Tint, Khin; Nakazawa, Kazuma; Yoshida, Junya; Kyaw Soe, Myint; Mishina, Akihiro; Kinbara, Shinji; Itoh, Hiroki; Endo, Yoko; Kobayashi, Hidetaka; E07 Collaboration
2014-09-01
We are developing particle identification techniques for singly charged particles such as Xi, proton, K and π by measuring multiple Coulomb scattering in nuclear emulsion. Nuclear emulsion is the best three-dimensional detector for double-strangeness (S = -2) nuclear systems. We expect to accumulate about 10000 Xi-minus stop events, which produce double-lambda hypernuclei, in the J-PARC E07 emulsion-counter hybrid experiment. The purpose of this particle identification (PID) in nuclear emulsion is to purify Xi-minus stop events, which give information about the production probability of double hypernuclei and the branching ratios of decay modes. The amount of scattering, parameterized as the angular distribution and the second difference, is inversely proportional to the particle momentum. We produced several thousand tracks of various charged particles in a nuclear emulsion stack via Geant4 simulation. In this talk, PID with several methods of measuring multiple scattering will be discussed by comparing the simulation data with real Xi-minus stop events from the KEK-E373 experiment.
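The inverse relation between scattering and momentum that this abstract exploits is commonly written via the Highland formula (as given in the PDG review) for the RMS projected multiple-scattering angle. The sketch below shows the forward formula and its algebraic inversion for pβ; emulsion-specific calibration of the second-difference method is omitted.

```python
import math

def highland_theta0(p_MeV, beta, x_over_X0, z=1):
    """RMS projected multiple-scattering angle (radians), Highland/PDG formula."""
    return (13.6 / (beta * p_MeV)) * z * math.sqrt(x_over_X0) * \
           (1.0 + 0.038 * math.log(x_over_X0))

def pbeta_from_scattering(theta0_meas, x_over_X0, z=1):
    """Invert the Highland formula: estimate p*beta from a measured angular spread."""
    return (13.6 / theta0_meas) * z * math.sqrt(x_over_X0) * \
           (1.0 + 0.038 * math.log(x_over_X0))

# Round trip: a 500 MeV/c particle with beta = 0.9 traversing 1% of a radiation length
theta0 = highland_theta0(500.0, 0.9, 0.01)
print(f"theta0 = {theta0*1e3:.3f} mrad, recovered p*beta = "
      f"{pbeta_from_scattering(theta0, 0.01):.1f} MeV")
```

Because the measurement only constrains the product pβ, separating particle species still requires combining the scattering estimate with another observable, such as ionization density along the track.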
Cuetos, Alejandro; Patti, Alessandro
2015-08-01
We propose a simple but powerful theoretical framework to quantitatively compare Brownian dynamics (BD) and dynamic Monte Carlo (DMC) simulations of multicomponent colloidal suspensions. By extending our previous study focusing on monodisperse systems of rodlike colloids, here we generalize the formalism described there to multicomponent colloidal mixtures and validate it by investigating the dynamics in isotropic and liquid crystalline phases containing spherical and rodlike particles. In order to investigate the dynamics of multicomponent colloidal systems by DMC simulations, it is key to determine the elementary time step of each species and establish a unique timescale. This is crucial to consistently study the dynamics of colloidal particles with different geometry. By analyzing the mean-square displacement, the orientation autocorrelation functions, and the self part of the van Hove correlation functions, we show that DMC simulation is a very convenient and reliable technique to describe the stochastic dynamics of any multicomponent colloidal system. Our theoretical formalism can be easily extended to any colloidal system containing size and/or shape polydisperse particles.
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2011-06-01
We propose a new approach that consists in using data mining techniques for scientific computing. Indeed, data mining has proved to be efficient in other contexts that deal with huge amounts of data, such as biology, medicine, marketing, advertising, and communications. Our aim here is to address the important problem of exploiting the results produced by any numerical method: more and more data are created today by numerical simulations, so efficient tools to analyze them are needed. In this work, we focus our presentation on a test case dedicated to an asymptotic paraxial approximation for modeling ultrarelativistic particles. Our method deals directly with the numerical results of simulations and tries to understand what each order of the asymptotic expansion brings to the simulation results over what could be obtained by other lower-order or less accurate means. This new heuristic approach offers new potential applications to treat numerical solutions of mathematical models.
Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit
2009-01-01
Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.
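For reference, the lineal energy y used here is the energy imparted divided by the mean chord length of the site (2d/3 for a sphere, by Cauchy's formula). The short sketch below computes frequency- and dose-mean lineal energies from hypothetical sampled energy deposits; it is an illustration of the definitions only, not part of PHITS or the microdosimetric kinetic model.

```python
import numpy as np

def lineal_energy(eps_keV, diameter_um):
    """Lineal energy y = eps / l_bar for a spherical site.

    l_bar is the mean chord length, 2/3 * d for a sphere (Cauchy's formula).
    With eps in keV and d in micrometers, y comes out in keV/um.
    """
    l_bar = 2.0 * diameter_um / 3.0
    return eps_keV / l_bar

# Frequency-mean and dose-mean lineal energies from sampled deposits
eps = np.array([0.5, 1.2, 3.0, 0.8, 2.1])   # hypothetical energy deposits, keV
y = lineal_energy(eps, diameter_um=1.0)
y_f = y.mean()                  # frequency-mean lineal energy
y_d = (y**2).sum() / y.sum()    # dose-mean lineal energy
print(f"y_f = {y_f:.3f} keV/um, y_d = {y_d:.3f} keV/um")
```

The dose-mean value y_d weights large deposits more heavily, which is why it is the quantity typically fed into RBE models for HZE particles.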
Hydraulic fracturing - an attempt of DEM simulation
NASA Astrophysics Data System (ADS)
Kosmala, Alicja; Foltyn, Natalia; Klejment, Piotr; Dębski, Wojciech
2017-04-01
Hydraulic fracturing is a technique widely used in the exploitation of oil, gas, and unconventional reservoirs in order to enable the oil/gas to flow more easily and enhance production. It relies on pumping a special fluid into a rock under high pressure, which creates a set of microcracks that enhance the porosity of the reservoir rock. In this research, an attempt to simulate such a hydrofracturing process using the Discrete Element Method (DEM) approach is presented. The basic assumption of this approach is that the rock can be represented as an assembly of discrete particles cemented into a rigid sample (Potyondy 2004). The existence of voids among the particles then simulates a pore system, which can be filled by fracturing fluid, numerically represented by much smaller particles. Following this microscopic point of view and its numerical representation by the DEM method, we present preliminary results of a numerical analysis of hydrofracturing phenomena using the ESyS-Particle software. In particular, we consider what happens in the immediate vicinity of the border between the rock sample and the fracking particles, how cracks are created and evolve through the breaking of bonds between particles, how acoustic/seismic energy is released, and so on. D.O. Potyondy, P.A. Cundall. A bonded-particle model for rock. International Journal of Rock Mechanics and Mining Sciences, 41 (2004), pp. 1329-1364.
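The bonded-particle idea can be caricatured in one dimension: a linear bond carries load until its strength is exceeded, at which point it breaks and a microcrack forms. This quasi-static sketch (spring constant, strength, and loading rate are all hypothetical) omits the full particle dynamics and fluid coupling that ESyS-Particle resolves.

```python
# Quasi-static 1D caricature of a breakable bond (Potyondy & Cundall style):
# two cemented particles are pulled apart until the bond force exceeds its strength.
k = 1.0e4          # bond stiffness (hypothetical units)
strength = 50.0    # bond tensile strength
rest = 1.0         # unstretched bond length
bonded = True
gap = rest
history = []

while bonded:
    gap += 1e-4                    # quasi-static loading: pull particles apart
    force = k * (gap - rest)       # linear elastic bond force
    history.append(force)
    if force > strength:
        bonded = False             # bond strength exceeded: a microcrack forms

print(f"bond broke at stretch {gap - rest:.4f} (critical = {strength / k:.4f})")
```

In a full DEM run, many such bonds break in sequence, and the energy released at each breakage is what shows up as the acoustic/seismic emission mentioned in the abstract.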
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio
2017-07-01
This paper presents a dense ray tracing reconstruction technique for particle image velocimetry based on a single light-field camera. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using the multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio, and velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both the DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow, which revealed that for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.
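The MART update at the core of both methods is compact enough to sketch: each voxel is corrected multiplicatively by the ratio of the measured to the reprojected intensity, raised to a weighted relaxation exponent. The toy geometry below is a hypothetical one-dimensional system of weighted line integrals, not a camera model.

```python
import numpy as np

def mart(W, p, n_iter=200, mu=0.5):
    """Multiplicative ART: iterate voxel intensities E so that W @ E matches p.

    Classic MART update, row by row: E_j <- E_j * (p_i / (W_i . E))^(mu * W_ij).
    Assumes nonnegative weights W and projections p; E stays positive throughout.
    """
    E = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            est = W[i] @ E
            if est > 0:
                E *= (p[i] / est) ** (mu * W[i])
    return E

# Tiny synthetic test: 3 voxels observed by 4 weighted line integrals
truth = np.array([0.2, 1.0, 0.5])
W = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
p = W @ truth
E = mart(W, p)
print(np.round(E, 3))
```

For a consistent system like this one, the iteration converges to the true voxel intensities; the DRT step in the paper serves to shrink the set of voxels that need such updates in the first place.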
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
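For reference, the Schmidt number is simply Sc = ν/D (kinematic viscosity over diffusivity); plugging in approximate handbook values for liquid water at room temperature shows why the O(1) value of standard DPD parameterizations is considered unacceptably low.

```python
# Schmidt number Sc = nu / D for liquid water (approximate handbook values)
nu_water = 1.0e-6    # kinematic viscosity of water, ~20 C, m^2/s
D_water = 2.3e-9     # self-diffusion coefficient of water, ~25 C, m^2/s

Sc_water = nu_water / D_water
Sc_typical_dpd = 1.0   # order of magnitude for standard DPD parameter choices

print(f"Sc(water) ~ {Sc_water:.0f}, vs Sc ~ {Sc_typical_dpd:.0f} in standard DPD")
```

Closing this gap of two to three orders of magnitude by tuning γ and the cutoff rc is exactly the parameter-space search the abstract describes.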
Reilly, Anthony M; Briesen, Heiko
2012-01-21
The feasibility of using the molecular dynamics (MD) simulation technique to study crystal growth from solution quantitatively, as well as to obtain transition rate constants, has been studied. The dynamics of an interface between a solution of Lennard-Jones particles and the (100) face of an fcc lattice comprised of solute particles have been studied using MD simulations, showing that MD is, in principle, capable of following growth behavior over large supersaturation and temperature ranges. Using transition state theory and a nearest-neighbor approximation, growth and dissolution rate constants have been extracted from equilibrium MD simulations at a variety of temperatures. The temperature dependence of the rates agrees well with the expected transition state theory behavior. © 2012 American Institute of Physics.
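The transition-state-theory temperature dependence against which such extracted rates are checked is Arrhenius-like, k(T) = A exp(-Ea/kB T), so a linear fit of ln k against 1/T recovers the activation energy. A sketch in reduced Lennard-Jones units with synthetic, noiseless rate constants (the values are hypothetical, chosen only to demonstrate the analysis):

```python
import numpy as np

kB = 1.0                               # reduced (Lennard-Jones) units
T = np.array([0.6, 0.7, 0.8, 0.9, 1.0])

# Synthetic rate constants generated from an assumed Arrhenius law
Ea_true, A_true = 2.0, 5.0
k = A_true * np.exp(-Ea_true / (kB * T))

# Transition-state/Arrhenius analysis: ln k = ln A - Ea / (kB T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit, A_fit = -slope * kB, np.exp(intercept)
print(f"Ea = {Ea_fit:.3f}, A = {A_fit:.3f}")
```

With real simulation data, deviations of the ln k vs 1/T plot from a straight line are what would signal a breakdown of the transition-state picture.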
Partnership for Edge Physics (EPSI), University of Texas Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, Robert; Carey, Varis; Michoski, Craig
Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for the design and operation of devices. However, high-fidelity simulations of tokamak plasmas, particularly those aimed at characterizing edge plasma physics, are computationally expensive, so lower-cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the cost of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles in a simulation, and therefore its cost, can be reduced while maintaining the same accuracy.
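Marker-particle resampling to maintain importance sampling can be illustrated with the standard systematic-resampling scheme from sequential Monte Carlo. This is a generic sketch, not the XGC algorithm itself, and the marker distribution and counts are illustrative: the point is that the resampled set preserves the weighted distribution while the total particle count, and hence the cost, is reduced.

```python
import numpy as np

def systematic_resample(positions, weights, n_out, seed=None):
    """Resample weighted marker particles into n_out equal-weight markers.

    Systematic resampling: place n_out evenly spaced points (with one random
    offset) on the cumulative weight axis and pick the markers they land on.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    edges = np.cumsum(w)
    u = (rng.random() + np.arange(n_out)) / n_out
    idx = np.searchsorted(edges, u)
    return positions[idx], np.full(n_out, 1.0 / n_out)

# 1000 markers concentrated near x = 0.3, resampled down to 200 equal-weight markers
pos = np.linspace(0.0, 1.0, 1000)
wts = np.exp(-((pos - 0.3) / 0.1) ** 2)
new_pos, new_w = systematic_resample(pos, wts, 200, seed=0)
print(f"resampled mean position: {new_pos.mean():.3f}")
```

The resampled mean stays at the weighted mean of the original distribution even though the particle count dropped by a factor of five, which is the accuracy-preserving cost reduction the abstract refers to.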
A Comparison of 3D3C Velocity Measurement Techniques
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2013-11-01
The velocity measurement fidelity of several 3D3C PIV measurement techniques including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV are compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for the defocusing PIV and 3D PTV to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one-another and to the true velocity field using several metrics.
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
Acciarri, R.; Adams, C.; An, R.; ...
2017-03-14
Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats
NASA Astrophysics Data System (ADS)
Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.
2018-03-01
Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.
NASA Technical Reports Server (NTRS)
Pavish, D. L.; Spaulding, M. L.
1977-01-01
A computer-coded Lagrangian marker particle in Eulerian finite difference cell solution to the three-dimensional incompressible mass transport equation, the Water Advective Particle in Cell Technique (WAPIC), was developed, verified against analytic solutions, and subsequently applied to the prediction of the long-term transport of a suspended sediment cloud resulting from an instantaneous dredge spoil release. Numerical results from WAPIC were verified against analytic solutions to the three-dimensional incompressible mass transport equation for turbulent diffusion and advection of Gaussian dye releases in unbounded uniform and uniformly sheared uni-directional flow, and for steady uniform plug channel flow. WAPIC was utilized to simulate an analytic solution for non-equilibrium sediment dropout from an initially vertically uniform particle distribution in one-dimensional turbulent channel flow.
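The marker-particle idea is easy to verify in one dimension against the kind of analytic Gaussian solution WAPIC was checked against: advect each marker with the flow and add a diffusive random-walk step with standard deviation sqrt(2 D dt). All parameter values below are illustrative, and the sketch is 1D rather than WAPIC's full 3D scheme.

```python
import numpy as np

def advect_diffuse(n_particles=20000, u=1.0, D=0.5, t_end=1.0, dt=0.01, seed=0):
    """Lagrangian marker-particle solution of 1D advection-diffusion.

    Each marker moves with the flow (u*dt) plus a random-walk diffusion step;
    an instantaneous point release at x = 0 should evolve into a Gaussian
    with mean u*t and variance 2*D*t.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)                 # instantaneous release at x = 0
    for _ in range(int(round(t_end / dt))):
        x += u * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n_particles)
    return x

x = advect_diffuse()
# Analytic Gaussian solution: mean = u*t = 1.0, variance = 2*D*t = 1.0
print(f"mean = {x.mean():.3f}, variance = {x.var():.3f}")
```

Agreement of the sample mean and variance with u·t and 2·D·t is the 1D analogue of WAPIC's verification against Gaussian dye-release solutions.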
Determining Trajectory of Triboelectrically Charged Particles, Using Discrete Element Modeling
NASA Technical Reports Server (NTRS)
2008-01-01
The Kennedy Space Center (KSC) Electrostatics and Surface Physics Laboratory is participating in an Innovative Partnership Program (IPP) project with an industry partner to modify a commercial off-the-shelf simulation software product to treat the electrodynamics of particulate systems. Discrete element modeling (DEM) is a numerical technique that can track the dynamics of particle systems. This technique, which was introduced in 1979 for analysis of rock mechanics, was recently refined to include the contact force interaction of particles with arbitrary surfaces and moving machinery. In our work, we endeavor to incorporate electrostatic forces into the DEM calculations to enhance the fidelity of the software and its applicability to (1) particle processes, such as electrophotography, that are greatly affected by electrostatic forces, (2) grain and dust transport, and (3) the study of lunar and Martian regoliths.
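Incorporating electrostatic forces into a DEM calculation amounts to adding a Coulomb term alongside the contact force in the pairwise force sum. The sketch below uses a linear contact spring and point charges; this is a simplification for illustration, not the force model of the commercial software referred to in the abstract.

```python
import numpy as np

K_E = 8.9875e9  # Coulomb constant, N*m^2/C^2

def total_force(xi, xj, qi, qj, radius, k_contact=1e4):
    """Pairwise force on particle i from particle j: contact spring + Coulomb.

    Contact: linear spring acting on the overlap, as in basic DEM.
    Electrostatics: point-charge Coulomb force, the kind of term the
    abstract describes adding to the contact-force calculation.
    """
    r_vec = xi - xj
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    f = K_E * qi * qj / r**2 * n      # Coulomb term (repulsive if qi*qj > 0)
    overlap = 2 * radius - r
    if overlap > 0:                   # particles in mechanical contact
        f += k_contact * overlap * n
    return f

# Two like-charged 1 mm-diameter grains, 5 mm apart (no contact): pure Coulomb repulsion
f = total_force(np.array([0.005, 0.0]), np.array([0.0, 0.0]),
                qi=1e-9, qj=1e-9, radius=0.0005)
print(f)
```

For triboelectrically charged regolith grains, this long-range term acts even when particles are widely separated, which is why adding it changes trajectories that a contact-only DEM model would miss.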
Large scale particle image velocimetry with helium filled soap bubbles
NASA Astrophysics Data System (ADS)
Bosbach, Johannes; Kühn, Matthias; Wagner, Claus
2009-03-01
The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.
NASA Astrophysics Data System (ADS)
Shin, Han-Back; Jung, Joo-Young; Kim, Moo-Sub; Kim, Sunmi; Choi, Yong; Yoon, Do-Kun; Suh, Tae Suk
2018-06-01
In this study, we proposed an absorbed-dose monitoring technique using prompt gamma rays emitted from the reaction between an antiproton and a boron particle, and used Monte Carlo simulation to demonstrate the greater physical effect of antiproton boron fusion therapy in comparison with a proton beam. The physical effect of the treatment was confirmed to be 3.5 times greater for antiproton beam irradiation than for proton beam irradiation. Moreover, a prompt gamma ray image was acquired successfully during antiproton irradiation of the boron regions. The results show the feasibility of applying the absorbed-dose monitoring technique proposed in our study.
Simulating supersymmetry at the SSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.M.; Haber, H.E.
1984-08-01
Careful study of supersymmetric signatures at the SSC is required in order to distinguish them from Standard Model physics backgrounds. To this end, we have created an efficient, accurate computer program which simulates the production and decay of supersymmetric particles (or other new particles). We have incorporated the full matrix elements, keeping track of the polarizations of all intermediate states. (At this time hadronization of final-state partons is ignored.) Using Monte Carlo techniques, this program can generate any desired final-state distribution or individual events for Lego plots. Examples of the results of our study of supersymmetry at the SSC are provided.
The Determination of Remaining Satellite Propellant Using Measured Moments of Inertia
2006-06-01
Dynamic response models of the Air Force Institute of Technology's Simulated Satellite (SimSat) were developed. These models were created using dynamic response analysis techniques on the reaction wheel and SimSat systems, with the aim of determining the change in satellite fuel from measured moments of inertia.
NASA Astrophysics Data System (ADS)
D'Andrea, S. D.; Ng, J. Y.; Kodros, J. K.; Atwood, S. A.; Wheeler, M. J.; Macdonald, A. M.; Leaitch, W. R.; Pierce, J. R.
2015-09-01
Remote and free tropospheric aerosols represent a large fraction of the climatic influence of aerosols; however, aerosol in these regions is less well characterized than that in polluted boundary layers. We evaluate aerosol size distributions predicted by the GEOS-Chem-TOMAS global chemical transport model with online aerosol microphysics using measurements from the peak of Whistler Mountain, BC, Canada (2182 m a.s.l.). We evaluate the model for predictions of aerosol number, size, and composition during periods of free tropospheric (FT) and boundary-layer (BL) influence at "coarse" 4° × 5° and "nested" 0.5° × 0.667° resolutions by developing simple FT/BL filtering techniques. We find that using temperature as a proxy for upslope flow (BL influence) improves the model-measurement comparisons. The best threshold temperature was around 2 °C for the coarse simulations and around 6 °C for the nested simulations, with temperatures warmer than the threshold indicating boundary-layer air. Additionally, the site was increasingly likely to be in-cloud when the measured RH was above 90 %, so we do not compare the modeled and measured size distributions during these periods. With the inclusion of these temperature and RH filtering techniques, the model-measurement comparisons improved significantly. The slope of the regression for N80 (the total number of particles with particle diameter Dp > 80 nm) in the nested simulations increased from 0.09 to 0.65, R2 increased from 0.04 to 0.46, and the log-mean bias improved from 0.95 to 0.07. We also perform simulations at the nested resolution without Asian anthropogenic (AA) emissions and without biomass-burning (BB) emissions to quantify the contribution of these sources to aerosols at Whistler Peak (through comparison with simulations with these emissions on). 
The long-range transport of AA aerosol was found to be significant across all particle number concentrations, and increased the number of particles larger than 80 nm (N80) by more than 50 %, while decreasing the number of smaller particles because of the suppression of new-particle formation and an enhanced coagulation sink. Similarly, BB influenced Whistler Peak during summer months, with increases in N80 exceeding 5000 cm-3. Occasionally, Whistler Peak experienced N80 > 1000 cm-3 without significant influence from AA or BB aerosol. In these cases, air masses were advected at low elevations through forested valleys during times when temperature and downwelling insolation were high, ideal conditions for the formation of large amounts of low-volatility biogenic secondary organic aerosol (SOA). This condensable material increased particle growth and hence N80. The low-cost filtering techniques and source apportionment used in this study can be used in other global models to give insight into the sources and processes that shape the aerosol at mountain sites, leading to a better understanding of mountain meteorology and chemistry.
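The temperature and RH filtering described above reduces to two boolean masks over the measurement time series; a minimal sketch using the nested-resolution threshold of 6 °C and the 90 % in-cloud RH cut (the sample values are made up for illustration):

```python
import numpy as np

def classify_samples(temp_C, rh_pct, t_threshold=6.0, rh_max=90.0):
    """FT/BL filtering as described for the nested-resolution comparisons.

    Temperatures above the threshold indicate upslope (boundary-layer) air;
    periods with RH above rh_max are treated as in-cloud and excluded entirely.
    Returns boolean masks for free-tropospheric and boundary-layer samples.
    """
    temp_C, rh_pct = np.asarray(temp_C), np.asarray(rh_pct)
    valid = rh_pct <= rh_max                  # drop in-cloud periods
    bl = valid & (temp_C > t_threshold)       # boundary-layer influenced
    ft = valid & (temp_C <= t_threshold)      # free-tropospheric
    return ft, bl

temp = np.array([2.0, 8.0, 5.5, 10.0, -1.0])
rh = np.array([50.0, 95.0, 80.0, 60.0, 99.0])
ft, bl = classify_samples(temp, rh)
print(ft.sum(), bl.sum())   # 2 free-tropospheric samples kept, 1 boundary-layer
```

The appeal of this kind of filter is its cost: it needs only temperature and RH, both routinely measured at mountain sites, which is what makes the approach portable to other global models.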
Coalescence growth mechanism of ultrafine metal particles
NASA Astrophysics Data System (ADS)
Kasukabe, S.
1990-01-01
Ultrafine particles produced by a gas-evaporation technique show clear-cut crystal habits. The convection of an inert gas creates distinct growth zones in a metal smoke. The coalescence stages of hexagonal plates and multiply twinned particles are observed in the outer zone of a smoke. A model of the coalescence growth of particles with different crystal habits is proposed. Size distributions can be calculated by counting the number of collisions, using the effective collision cross section and the existence probability of the particle volume. This simulation model clarifies the effect of crystal habit on the rate of coalescence growth.
Morphology correlation of craters formed by hypervelocity impacts
NASA Technical Reports Server (NTRS)
Crawford, Gary D.; Rose, M. Frank; Zee, Ralph H.
1993-01-01
Dust-sized olivine particles were fired at a copper plate using the Space Power Institute hypervelocity facility, simulating micrometeoroid damage from natural debris to spacecraft in low-Earth orbit (LEO). Techniques were developed for measuring crater volume, particle volume, and particle velocity, with the particle velocities ranging from 5.6 to 8.7 km/s. A roughly linear correlation was found between crater volume and particle energy, which suggests that micrometeoroids follow standard hypervelocity relationships. The residual debris analysis showed that for olivine impacts at up to 8.7 km/s, particle residue is found in the crater. Using the Space Power Institute hypervelocity facility, micrometeoroid damage to satellites can thus be accurately modeled.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Lim, Lawrence; Kalaskar, Sushant; Shastry, Karthik; Satyal, Suman; Weiss, Alexander
2010-10-01
Time of Flight Positron Annihilation Induced Auger Electron Spectroscopy (TOF-PAES) is a surface analytical technique with high surface selectivity. Almost 95% of the PAES signal originates from the sample's topmost layer due to the trapping of positrons just above the surface in an image-potential well before annihilation. This talk presents a description of the TOF technique as well as the results of modeling of the charged-particle transport used in the design of the 2 meter TOF-PAES system currently under construction at UTA.
Kassiopeia: a modern, extensible C++ particle tracking package
NASA Astrophysics Data System (ADS)
Furse, Daniel; Groh, Stefan; Trost, Nikolaus; Babutzka, Martin; Barrett, John P.; Behrens, Jan; Buzinsky, Nicholas; Corona, Thomas; Enomoto, Sanshiro; Erhard, Moritz; Formaggio, Joseph A.; Glück, Ferenc; Harms, Fabian; Heizmann, Florian; Hilk, Daniel; Käfer, Wolfgang; Kleesiek, Marco; Leiber, Benjamin; Mertens, Susanne; Oblath, Noah S.; Renschler, Pascal; Schwarz, Johannes; Slocum, Penny L.; Wandkowsky, Nancy; Wierman, Kevin; Zacher, Michael
2017-05-01
The Kassiopeia particle tracking framework is an object-oriented software package using modern C++ techniques, written originally to meet the needs of the KATRIN collaboration. Kassiopeia features a new algorithmic paradigm for particle tracking simulations which targets experiments containing complex geometries and electromagnetic fields, with high priority put on calculation efficiency, customizability, extensibility, and ease-of-use for novice programmers. To solve Kassiopeia's target physics problem the software is capable of simulating particle trajectories governed by arbitrarily complex differential equations of motion, continuous physics processes that may in part be modeled as terms perturbing that equation of motion, stochastic processes that occur in flight such as bulk scattering and decay, and stochastic surface processes occurring at interfaces, including transmission and reflection effects. This entire set of computations takes place against the backdrop of a rich geometry package which serves a variety of roles, including initialization of electromagnetic field simulations and the support of state-dependent algorithm-swapping and behavioral changes as a particle’s state evolves. Thanks to the very general approach taken by Kassiopeia it can be used by other experiments facing similar challenges when calculating particle trajectories in electromagnetic fields. It is publicly available at https://github.com/KATRIN-Experiment/Kassiopeia.
NASA Astrophysics Data System (ADS)
Buyong, Muhamad Ramdzan; Larki, Farhad; Takamura, Yuzuru; Majlis, Burhanuddin Yeop
2017-10-01
This paper presents the fabrication, characterization, and simulation of a microelectrode array system with a tapered profile and an aluminum surface for dielectrophoresis (DEP)-based manipulation of particles. The proposed structure produces a more effective electric field gradient than its untapered counterpart; owing to the asymmetric distribution of the electric field in the active region of the microelectrode, it achieves more effective particle manipulation. The tapered aluminum microelectrode array (TAMA) fabrication process uses a state-of-the-art technique for forming the resist's tapered profile. The performance of TAMA with various sidewall profile angles (5 deg to 90 deg) was analyzed through finite-element-method numerical simulations to offer a better understanding of the origin of the sidewall profile effect. The capturing and manipulation capability of the device was examined through modification of the Clausius-Mossotti factor and cross-over frequency (f). The fabricated system has been implemented in particular for filtration of particles of a desired diameter from a mixture of particles with three different diameters in an aqueous medium. The microelectrode system with a tapered sidewall profile offers a more efficient platform for particle manipulation and sensing applications than conventional microelectrode systems.
Numerical simulation of sloshing with large deforming free surface by MPS-LES method
NASA Astrophysics Data System (ADS)
Pan, Xu-jie; Zhang, Huai-xin; Sun, Xue-yao
2012-12-01
The moving particle semi-implicit (MPS) method is a fully Lagrangian particle method that can readily solve problems with violent free surfaces. Although it has demonstrated its advantages in ocean engineering applications, it still has some defects to be improved. In this paper, the MPS method is extended to large eddy simulation (LES) by coupling it with a sub-particle-scale (SPS) turbulence model. The SPS turbulence model enters the filtered momentum equation as Reynolds stress terms, which are described with the Smagorinsky model. Although the MPS method is well suited to simulating free surface flow, many non-free-surface particles are treated as free surface particles in the original MPS model. In this paper, we use a new free surface tracing method whose key concept is the "neighbor particle". In this method, the zone around each particle is divided into eight sectors, and a particle is treated as a free surface particle as long as there are no neighbor particles in any two sectors of the zone. Since the number-density-parameter criterion is an efficient test for tracing free surface particles, we combine it with the neighbor-detection method: first, we use the number density parameter to select the particles most likely to be misclassified, and we then examine those particles with the neighbor-detection method. This mixed free surface tracing method reduces the misclassification problem efficiently. Severe pressure fluctuation is a well-known defect of the MPS method, and an area-time averaging technique is therefore used in this paper to remove the pressure fluctuation, with quite good results. With these improvements, the modified MPS-LES method is applied to simulate liquid sloshing problems with a large deforming free surface. Results show that the modified MPS-LES method can simulate the large deforming free surface easily. It not only captures the large impact pressure on the rolling tank wall accurately but also reproduces all physical phenomena successfully. The good agreement between numerical and experimental results proves that the modified MPS-LES method is a good CFD methodology for free surface flow simulations.
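The sector-based surface test described above can be sketched as follows. Particle coordinates, the search radius, and the two-empty-sector criterion follow the description; the function and parameter names, and the 2-D restriction, are assumptions for illustration.

```python
import math

# Eight-sector free surface test: divide a particle's neighbourhood into
# eight angular sectors and flag the particle as a free surface particle
# if at least two sectors contain no neighbour within the search radius.
def is_free_surface(p, particles, radius, empty_required=2):
    occupied = [False] * 8
    for q in particles:
        dx, dy = q[0] - p[0], q[1] - p[1]
        r = math.hypot(dx, dy)
        if r == 0.0 or r > radius:
            continue  # skip the particle itself and out-of-range particles
        sector = int((math.atan2(dy, dx) + math.pi) / (math.pi / 4.0)) % 8
        occupied[sector] = True
    return occupied.count(False) >= empty_required

# Neighbours placed at the eight sector centres: fully surrounded vs. half open.
ring = [(0.3 * math.cos(math.pi / 8 + k * math.pi / 4),
         0.3 * math.sin(math.pi / 8 + k * math.pi / 4)) for k in range(8)]
interior = [(0.0, 0.0)] + ring    # all eight sectors occupied: not a surface
edge = [(0.0, 0.0)] + ring[:4]    # four sectors empty: a surface particle
```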
NASA Astrophysics Data System (ADS)
Ullah, Kaleem; Liu, Xuefeng; Krasnok, Alex; Habib, Muhammad; Song, Li; Garcia-Camara, Braulio
2018-07-01
In this work, we show the spatial distribution of the scattered electromagnetic field of dielectric particles by using a new super-resolution method based on polarization modulation. Applying this technique, we were able to resolve the multipolar distribution of a Cu2O particle with a radius of 450 nm. In addition, FDTD and Mie simulations have been carried out to validate and confirm the experimental results. The results are helpful to understand the resonant modes of dielectric submicron particles which have a broad range of potential applications, such as all-optical devices or nanoantennas.
NASA Astrophysics Data System (ADS)
Shih, D.; Yeh, G.
2009-12-01
This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most commonly used approaches in numerical modeling. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thereby significantly save computational time. Comparisons of the alternative approximations are examined in this poster. We adapt the model WASH123D to examine this work. WASH123D, an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of this preliminary cooperation is also sketched in this poster.
Volumetric particle image velocimetry with a single plenoptic camera
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.
2015-11-01
A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel accuracy in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera with a 289 × 193 microlens array, together with a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system.
Overall, single camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
Fully Resolved Simulations of Particle-Bed-Turbulence Interactions in Oscillatory Flows
NASA Astrophysics Data System (ADS)
Apte, S.; Ghodke, C.
2017-12-01
Particle-resolved direct numerical simulations (DNS) are performed to investigate the behavior of an oscillatory flow field over a bed of closely packed fixed spherical particles for a range of Reynolds numbers in the transitional and rough turbulent flow regimes. The presence of roughness leads to a substantial modification of the underlying boundary layer mechanism, resulting in increased bed shear stress, reduced near-bed anisotropy, modification of the near-bed sweep and ejection motions, and marked changes in turbulent energy transport mechanisms. The resulting flow field is characterized through statistical descriptions of the near-bed turbulence for different roughness parameters. A double-averaging technique is employed to reveal spatial inhomogeneities at the roughness scale that provide alternate paths of energy transport in the turbulent kinetic energy (TKE) budget. Spatio-temporal characteristics of the unsteady particle forces are investigated in detail through their spatial distribution, temporal auto-correlations, frequency spectra, cross-correlations with near-bed turbulent flow variables, and the intermittency in the forces using the concept of impulse. These first-principles simulations provide substantial insights for the modeling of incipient motion of sediments.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or graphics processing units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam-break flow impacting on an obstacle, where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. From the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are a skewed velocity distribution, statistical dependence between subsequent increments of particle positions (memory), and cross-dependence between the x-, y- and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross-dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values among the three velocity components in x, y and z. We show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation, and we analyze the approach to asymptotic behavior.
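The memory mechanism, a random walk on the CDF axis, can be sketched in a few lines. This is an illustrative reconstruction under assumed names and an assumed lognormal velocity sample, not the authors' implementation; cross-dependence between components is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def inverse_cdf(u, v_sorted):
    """Empirical inverse CDF: map u in [0, 1] onto sampled velocities."""
    return np.quantile(v_sorted, u)

def ptrw_track(v_samples, n_steps, dt, d_cdf=0.02):
    """1-D particle track with memory: u performs a reflected random walk."""
    v_sorted = np.sort(v_samples)
    u = rng.uniform(0.0, 1.0)
    x = 0.0
    for _ in range(n_steps):
        u += rng.normal(0.0, np.sqrt(d_cdf))  # small step on the CDF axis
        u = abs(u)                            # reflect at 0 ...
        u = 2.0 - u if u > 1.0 else u         # ... and at 1
        x += inverse_cdf(u, v_sorted) * dt    # advect with correlated velocity
    return x

# Skewed (lognormal) pore velocities, as is typical at the pore scale:
v = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
x_final = ptrw_track(v, n_steps=200, dt=0.01)
```

Because the step on the CDF axis is small, successive velocities stay close in rank, which is what produces the correlated, pre-asymptotic displacement statistics.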
NASA Astrophysics Data System (ADS)
Sirorattanakul, Krittanon; Shen, Chong; Ou-Yang, Daniel
Diffusivity governs the dynamics of interacting particles suspended in a solvent. At high particle concentration, the interactions between particles become non-negligible, causing the values of self- and collective diffusivity to diverge from each other and become concentration-dependent. Conventional methods for measuring this dependency, such as forced Rayleigh scattering, fluorescence correlation spectroscopy (FCS), and dynamic light scattering (DLS), require the preparation of multiple samples. We present a new technique to measure this dependency using only a single sample. Dielectrophoresis (DEP) is used to create a concentration gradient in the solution. Across this concentration distribution, we use FCS to measure the concentration-dependent self-diffusivity. Then, we switch off DEP to allow the particles to diffuse back to equilibrium. We obtain the time series of the concentration distribution from fluorescence microscopy and use it to determine the concentration-dependent collective diffusivity. We compare the experimental results with computer simulations to verify the validity of this technique. The time and spatial resolution limits of FCS and imaging are also analyzed to estimate the limitations of the proposed technique. NSF DMR-0923299, Lehigh College of Arts and Sciences Undergraduate Research Grant, Lehigh Department of Physics, Emulsion Polymers Institute.
Exploring the statistics of magnetic reconnection X-points in kinetic particle-in-cell turbulence
NASA Astrophysics Data System (ADS)
Haggerty, C. C.; Parashar, T. N.; Matthaeus, W. H.; Shay, M. A.; Yang, Y.; Wan, M.; Wu, P.; Servidio, S.
2017-10-01
Magnetic reconnection is a ubiquitous phenomenon in turbulent plasmas. It is an important part of the turbulent dynamics and heating of space and astrophysical plasmas. We examine the statistics of magnetic reconnection using a quantitative local analysis of the magnetic vector potential, previously used in magnetohydrodynamics simulations and now applied to fully kinetic particle-in-cell (PIC) simulations. Different ways of reducing the particle noise for analysis purposes, including multiple smoothing techniques, are explored. We find that a Fourier filter applied at the Debye scale is an optimal choice for analyzing PIC data. Finally, we find a broader distribution of normalized reconnection rates compared to the MHD limit, with rates as large as 0.5 but with an average of approximately 0.1.
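A minimal version of the saddle-point analysis can be sketched on a 2-D grid. The snippet below is an assumption-laden illustration (synthetic field, simple discrete saddle test), not the authors' pipeline; the low-pass cutoff merely stands in for the Debye-scale Fourier filter mentioned above.

```python
import numpy as np

def lowpass(field, k_cut):
    """Zero all 2-D Fourier modes above k_cut (cycles per sample)."""
    f = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0])[:, None]
    ky = np.fft.fftfreq(field.shape[1])[None, :]
    f[np.hypot(kx, ky) > k_cut] = 0.0
    return np.real(np.fft.ifft2(f))

def count_x_points(a, tol=1e-9):
    """Count interior grid cells that look like saddle points of A."""
    ax, ay = np.gradient(a)
    axx, axy = np.gradient(ax)
    _, ayy = np.gradient(ay)
    hess_det = axx * ayy - axy ** 2      # < 0 at a saddle of A
    g = ax ** 2 + ay ** 2                # |grad A|^2, ~0 at critical points
    is_min = np.zeros_like(a, dtype=bool)
    is_min[1:-1, 1:-1] = ((g[1:-1, 1:-1] <= g[:-2, 1:-1]) &
                          (g[1:-1, 1:-1] <= g[2:, 1:-1]) &
                          (g[1:-1, 1:-1] <= g[1:-1, :-2]) &
                          (g[1:-1, 1:-1] <= g[1:-1, 2:]))
    return int(np.sum(is_min & (hess_det < -tol)))

# A = cos(x) cos(y) on a periodic box has exactly four saddle points:
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
a = np.cos(x)[:, None] * np.cos(x)[None, :]
n_x = count_x_points(lowpass(a, 0.1))
```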
Optimization of Energy Resolution in the Digital Hadron Calorimeter using Longitudinal Weights
NASA Astrophysics Data System (ADS)
Smith, J. R.; Bilki, B.; Francis, K.; Repond, J.; Schlereth, J.; Xia, L.
2013-04-01
Physics at a future lepton collider requires unprecedented jet energy and dijet mass resolutions. Particle Flow Algorithms (PFAs) have been proposed to achieve these. PFAs measure particles in a jet individually with the detector subsystem providing the best resolution. For this to work a calorimeter system with very high granularity is required. A prototype Digital Hadron Calorimeter (the DHCAL) based on the Resistive Plate Chamber (RPC) technology with a record count of readout channels has been developed, constructed, and exposed to particle beams. In this context, we report on a technique to improve the single hadron energy resolution by applying a set of calibration weights to the individual layers of the calorimeter. This weighting procedure was applied to approximately 1 million events in the energy range up to 60 GeV and shows an improvement in the pion energy resolution. Simulated data is used to verify particle identification techniques and to compare with the data.
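The layer-weighting idea can be illustrated generically: reconstructed energy is a weighted sum of per-layer hit counts, with the weights fitted to known beam energies. Everything below, including the synthetic hit data and layer count, is an assumption for illustration, not the DHCAL calibration itself.

```python
import numpy as np

# E_reco = sum_l w_l * N_l, with weights w_l fitted by least squares
# to synthetic events of known "true" energy.
rng = np.random.default_rng(0)
n_events, n_layers = 1000, 38
true_w = np.linspace(0.05, 0.15, n_layers)   # assumed "true" GeV per hit
hits = rng.poisson(lam=10.0, size=(n_events, n_layers)).astype(float)
e_true = hits @ true_w + rng.normal(0.0, 0.2, n_events)  # smeared energies

# Least-squares fit of one calibration weight per layer:
w_fit, *_ = np.linalg.lstsq(hits, e_true, rcond=None)
e_reco = hits @ w_fit
resolution = np.std(e_reco - e_true) / np.mean(e_true)
```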
Enhanced centrifuge-based approach to powder characterization
NASA Astrophysics Data System (ADS)
Thomas, Myles Calvin
Many types of manufacturing processes involve powders and are affected by powder behavior. It is highly desirable to implement tools that allow the behavior of bulk powder to be predicted based on the behavior of only small quantities of powder. Such descriptions can enable engineers to significantly improve the performance of powder processing and formulation steps. In this work, an enhancement of the centrifuge technique is proposed as a means of powder characterization. The enhanced method uses specially designed substrates with hemispherical indentations within the centrifuge. The method was tested using simulations of the momentum balance at the substrate surface. Initial simulations were performed with an ideal powder containing smooth, spherical particles distributed on substrates designed with indentations. The van der Waals adhesion between the powder, whose size distribution was based on an experimentally determined distribution from a commercial silica powder, and the indentations was calculated and compared to the removal force created in the centrifuge. This provided a way to relate the powder size distribution to the rotational speed required for particle removal for various indentation sizes. Due to the distinct form of the data from these simulations, the cumulative size distribution of the powder and the Hamaker constant for the system could be extracted. After establishing adhesion force characterization for an ideal powder, the same proof-of-concept procedure was followed for a more realistic system with a simulated rough powder modeled as spheres with sinusoidal protrusions and intrusions around the surface. From these simulations, it was discovered that an equivalent powder of smooth spherical particles could be used to describe the adhesion behavior of the rough spherical powder by establishing a size-dependent 'effective' Hamaker constant distribution.
This development made it possible to describe the surface roughness effects of the entire powder through one adjustable parameter that was linked to the size distribution. It is important to note that when the engineered substrates (hemispherical indentations) were applied, it was possible to extract both powder size distribution and effective Hamaker constant information from the simulated centrifuge adhesion experiments. Experimental validation of the simulated technique was performed with a silica powder dispersed onto a stainless steel substrate with no engineered surface features. Though the proof-of-concept work was accomplished for indented substrates, non-ideal, relatively flat (non-indented) substrates were used experimentally to demonstrate that the technique can be extended to this case. The experimental data were then used within the newly developed simulation procedure to show its application to real systems. In the absence of engineered features on the substrates, it was necessary to specify the size distribution of the powder as an input to the simulator. With this information, it was possible to extract an effective Hamaker constant distribution, and when this distribution was applied in conjunction with the size distribution, the observed adhesion force distribution was described precisely. An equation relating the normalized effective Hamaker constants (normalized by the particle diameter) to the particle diameter was formulated from the effective Hamaker constant distribution. It was shown, by application of this equation, that the adhesion behavior of an ideal (smooth, spherical) powder with an experimentally validated effective Hamaker constant distribution could effectively represent that of a realistic powder. Thus, the roughness effects and size variations of a real powder are captured in this one distributed parameter (the effective Hamaker constant distribution), which provides a substantial improvement to the existing technique. This can lead to better optimization of powder processing by enhancing powder behavior models.
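The force balance at the heart of the centrifuge technique can be written down directly: a particle detaches when the centrifugal force exceeds its van der Waals adhesion. The sketch below uses a textbook sphere-plate adhesion formula and illustrative parameter values (Hamaker constant, contact separation, density, rotor arm), not the indented-substrate model developed in this work.

```python
import math

# Balance m * omega^2 * L against F_vdW = A * d / (12 * z0^2) for a smooth
# sphere on a flat plate; all parameter values are illustrative assumptions.
def critical_speed_rpm(d, hamaker=6.5e-20, z0=1.65e-10, rho=2200.0, arm=0.1):
    """Rotation speed (rpm) at which a sphere of diameter d (m) detaches."""
    f_vdw = hamaker * d / (12.0 * z0 ** 2)   # sphere-plate adhesion force, N
    mass = rho * math.pi * d ** 3 / 6.0      # particle mass, kg
    omega = math.sqrt(f_vdw / (mass * arm))  # rad/s at the force balance
    return omega * 60.0 / (2.0 * math.pi)

# Adhesion scales with d but the centrifugal force scales with d^3,
# so the critical speed scales as 1/d: small particles are hard to remove.
rpm_1um = critical_speed_rpm(1e-6)
rpm_10um = critical_speed_rpm(10e-6)
```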
The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Chen, Jundong
2018-03-01
Molecular dynamics is an integrated technology that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation technique and a powerful tool for studying condensed-matter systems. This technique not only yields the trajectories of atoms but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of particles, and more conveniently study how interparticle interactions relate to macroscopic material properties. Monte Carlo simulation, like molecular dynamics, is a tool for studying the microscopic nature of molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet, leap-frog and velocity Verlet methods. The method and principle of Monte Carlo simulation are also introduced. Finally, the similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
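Of the integrators listed above, velocity Verlet is the easiest to sketch. The example below applies it to a 1-D harmonic oscillator (unit mass and spring constant, an assumption chosen so the exact solution is x(t) = cos t), showing the characteristic update order: half-step velocity kick, full-step position drift, force re-evaluation, second half-step kick.

```python
# Velocity Verlet integration for dx/dt = v, dv/dt = force(x), unit mass.
def velocity_verlet(force, x, v, dt, n_steps):
    f = force(x)
    for _ in range(n_steps):
        v += 0.5 * dt * f   # first velocity half-kick
        x += dt * v         # full position drift
        f = force(x)        # recompute force at the new position
        v += 0.5 * dt * f   # second velocity half-kick
    return x, v

# Harmonic oscillator F = -x, started at x = 1, v = 0; after ~2*pi of
# simulated time the trajectory returns close to its initial state.
x, v = velocity_verlet(lambda x: -x, x=1.0, v=0.0, dt=0.01, n_steps=628)
```

The symmetric kick-drift-kick structure is what makes the scheme time-reversible and gives its good long-time energy behavior.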
NASA Astrophysics Data System (ADS)
Wang, H.; Yang, Z. Y.; Lu, Y. F.
2007-02-01
Laser-assisted chemical vapor deposition was applied to fabricate three-dimensional (3D) spherical-shell photonic band gap (PBG) structures by depositing silicon shells covering silica particles that had been self-assembled into 3D colloidal crystals. The colloidal crystals of self-assembled silica particles were formed on silicon substrates using the isothermal heating evaporation approach. A continuous-wave Nd:YAG laser (1064 nm wavelength) was used to deposit silicon shells by thermally decomposing disilane gas. Periodic silicon-shell/silica-particle PBG structures were obtained. By removing the silica particles enclosed in the silicon shells using hydrofluoric acid, hollow spherical silicon-shell arrays were produced. This technique is capable of fabricating structures with complete photonic band gaps, as predicted by plane-wave-method simulations. The techniques developed in this study have the potential to flexibly engineer the positions of the PBGs by varying both the silica particle size and the silicon-shell thickness. Ellipsometry was used to investigate the specific photonic band gaps of both structures.
Physical-geometric optics method for large size faceted particles.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong
2017-10-02
A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and consequently yields analytical formulas that are both effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a single facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident on only an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods, including a numerically rigorous method.
Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors
NASA Astrophysics Data System (ADS)
Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.
2016-05-01
This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for the Xeon Phi architecture and present our experience in porting and optimizing the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization, leading to an overall 4.2× speedup on CPU and 7.5× on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6×.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James K. Neathery; Gary Jacobs; Burtron H. Davis
In this reporting period, a fundamental filtration study was started to investigate the separation of Fischer-Tropsch Synthesis (FTS) liquids from iron-based catalyst particles. Slurry-phase FTS in slurry bubble column reactor systems is the preferred mode of production since the reaction is highly exothermic. Consequently, heavy wax products must be separated from catalyst particles before being removed from the reactor system. Achieving an efficient wax product separation from iron-based catalysts is one of the most challenging technical problems associated with slurry-phase FTS. The separation problem is further compounded by catalyst particle attrition and the formation of ultra-fine iron carbide and/or carbon particles. Existing pilot-scale equipment was modified to include a filtration test apparatus. After undergoing an extensive plant shakedown period, filtration tests with cross-flow filter modules using simulant FTS wax slurry were conducted. The focus of these early tests was to find adequate mixtures of polyethylene wax to simulate FTS wax. Catalyst particle size analysis techniques were also developed. Initial analyses of the slurry and filter permeate particles will be used by the research team to design improved filter media and cleaning strategies.
NASA Astrophysics Data System (ADS)
Douillet-Grellier, Thomas; Pramanik, Ranjan; Pan, Kai; Albaiz, Abdulaziz; Jones, Bruce D.; Williams, John R.
2017-10-01
This paper develops a method for imposing stress boundary conditions in smoothed particle hydrodynamics (SPH) with and without the need for dummy particles. SPH has been used for simulating phenomena in a number of fields, such as astrophysics and fluid mechanics. More recently, the method has gained traction as a technique for simulation of deformation and fracture in solids, where the meshless property of SPH can be leveraged to represent arbitrary crack paths. Despite this interest, application of boundary conditions within the SPH framework is typically limited to imposed velocity or displacement, using fictitious dummy particles to compensate for the lack of particles beyond the boundary interface. While this is enough for a large variety of problems, especially in the case of fluid flow, for problems in solid mechanics there is a clear need to impose stresses upon boundaries. In addition, the use of dummy particles to impose a boundary condition is not always suitable or even feasible, especially for problems which include internal boundaries. In order to overcome these difficulties, this paper first presents an improved method for applying stress boundary conditions in SPH with dummy particles. This is followed by a proposal of a formulation which does not require dummy particles. These techniques are then validated against analytical solutions to two common problems in rock mechanics, the Brazilian test and the penny-shaped crack problem, both in 2D and 3D. This study highlights the fact that SPH offers a good level of accuracy for these problems and that the results are reliable. This validation work serves as a foundation for addressing more complex problems involving plasticity and fracture propagation.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Alizadeh Behjani, Mohammadreza; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew
2017-06-01
Segregation of granules is an undesired phenomenon in which particles in a mixture separate from each other based on differences in their physical and chemical properties. It is, therefore, crucial to control the homogeneity of the system by applying appropriate techniques, which requires a fundamental understanding of the underlying mechanisms. In this study, the effect of particle shape and cohesion is analysed. As a model system prone to segregation, heap formation of a ternary mixture of particles representing the common ingredients of home washing powders, namely spray-dried detergent powder, tetraacetylethylenediamine, and enzyme placebo (the minor ingredient), is modelled numerically by the Discrete Element Method (DEM), with the aim of investigating the effect of cohesion/adhesion of the minor component on segregation. Non-spherical particle shapes are created in DEM using the clumped-sphere method based on their X-ray tomograms. Experimentally, inter-particle adhesion is generated by coating the minor ingredient (enzyme placebo) with Polyethylene Glycol 400 (PEG 400). The JKR theory is used to model the cohesion/adhesion of the coated enzyme placebo particles in the simulation. Tests are carried out experimentally and simulated numerically by mixing the placebo particles (uncoated and coated) with the other ingredients and pouring them into a test box. The simulation and experimental results are compared qualitatively and quantitatively. It is found that coating the minor ingredient in the mixture reduces segregation significantly while the change in flowability of the system is negligible.
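For the JKR contact model mentioned above, the adhesive strength of a particle pair is commonly summarized by the pull-off (separation) force, F_c = (3/2)·π·γ·R_eff. A minimal sketch (Python; the surface energies and radius are illustrative assumptions, not values from the study):

```python
import math

def jkr_pulloff_force(gamma, r1, r2):
    """JKR pull-off force F_c = (3/2) * pi * gamma * R_eff, where
    gamma is the work of adhesion (J/m^2) and R_eff the reduced radius."""
    r_eff = r1 * r2 / (r1 + r2)
    return 1.5 * math.pi * gamma * r_eff

# Illustrative (assumed) surface energies: coating raises adhesion tenfold.
gamma_uncoated = 0.01   # J/m^2
gamma_coated = 0.10     # J/m^2, e.g. after a PEG coating
r = 250e-6              # 250 um granule radius (assumed)

f_uncoated = jkr_pulloff_force(gamma_uncoated, r, r)
f_coated = jkr_pulloff_force(gamma_coated, r, r)
```

Because F_c is linear in γ, raising the work of adhesion by coating directly scales the cohesive force that resists the minor component's segregation.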
A numerical framework for the direct simulation of dense particulate flow under explosive dispersal
NASA Astrophysics Data System (ADS)
Mo, H.; Lien, F.-S.; Zhang, F.; Cronin, D. S.
2018-05-01
In this paper, we present a Cartesian grid-based numerical framework for the direct simulation of dense particulate flow under explosive dispersal. This numerical framework is established through the integration of the following numerical techniques: (1) operator splitting for partitioned fluid-solid interaction in the time domain, (2) the second-order SSP Runge-Kutta method and third-order WENO scheme for temporal and spatial discretization of governing equations, (3) the front-tracking method for evolving phase interfaces, (4) a field function proposed for low-memory-cost multimaterial mesh generation and fast collision detection, (5) an immersed boundary method developed for treating arbitrarily irregular and changing boundaries, and (6) a deterministic multibody contact and collision model. Employing the developed framework, this paper further studies particle jet formation under explosive dispersal by considering the effects of particle properties, particulate payload morphologies, and burster pressures. By simulating the dispersal of dense particle systems driven by pressurized gas, in which the driver pressure reaches 1.01325 × 10^10 Pa (10^5 times the ambient pressure) and particles are impulsively accelerated from rest to speeds of more than 12,000 m/s within 15 μs, it is demonstrated that the presented framework is able to effectively resolve coupled shock-shock, shock-particle, and particle-particle interactions in complex fluid-solid systems with shocked flow conditions, arbitrarily irregular particle shapes, and realistic multibody collisions.
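Of the building blocks listed above, the SSP Runge-Kutta stage is the simplest to illustrate. A sketch of the second-order Shu-Osher form used for temporal discretization (generic Python, applied here to a scalar ODE rather than the framework's governing equations):

```python
import math

def ssp_rk2_step(u, rhs, dt):
    """Second-order strong-stability-preserving Runge-Kutta (Shu-Osher form):
       u1     = u + dt * L(u)
       u_next = (1/2) u + (1/2) (u1 + dt * L(u1))
    A convex combination of forward-Euler steps, which is what makes the
    scheme strong-stability-preserving."""
    u1 = u + dt * rhs(u)
    return 0.5 * u + 0.5 * (u1 + dt * rhs(u1))

# Demonstration on u' = -u (exact solution exp(-t)): 100 steps to t = 1.
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk2_step(u, lambda v: -v, dt)
```

In the actual framework the right-hand side would be the WENO-discretized flux divergence rather than a scalar function.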
Multi-Objective Bidding Strategy for Genco Using Non-Dominated Sorting Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Saksinchai, Apinat; Boonchuay, Chanwit; Ongsakul, Weerakorn
2010-06-01
This paper proposes a multi-objective bidding strategy for a generation company (GenCo) in a uniform-price spot market using non-dominated sorting particle swarm optimization (NSPSO). Instead of using a tradeoff technique, NSPSO is introduced to solve the multi-objective strategic bidding problem considering expected profit maximization and risk (profit variation) minimization. Monte Carlo simulation is employed to simulate rivals' bidding behavior. Test results indicate that the proposed approach can generate the non-dominated solution front effectively. In addition, it can be used as a decision-making tool for a GenCo compromising between expected profit and price risk in the spot market.
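The core of any non-dominated sorting scheme is the Pareto-dominance test between candidate strategies. A minimal sketch (Python; the (profit, risk) pairs are hypothetical, not market data):

```python
def dominates(a, b):
    """a = (expected_profit, risk). a dominates b if it is no worse in both
    objectives (profit higher-or-equal, risk lower-or-equal) and strictly
    better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def non_dominated_front(candidates):
    """Strategies not dominated by any other candidate (the Pareto front)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical (expected profit, risk) pairs for candidate bid curves:
bids = [(100.0, 5.0), (120.0, 9.0), (90.0, 8.0), (120.0, 4.0)]
front = non_dominated_front(bids)
```

In NSPSO this test drives both the ranking of the swarm and the selection of personal/global guides; the decision maker then picks one point from the resulting front according to their risk appetite.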
CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres
NASA Astrophysics Data System (ADS)
Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli
2017-09-01
CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach to evaluate costly functions, and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques allows large electrodynamic problems (>10^4 scatterers) to be addressed efficiently on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 10^5 particles.
IUTAM Symposium on Hydrodynamic Diffusion of Suspended Particles
NASA Technical Reports Server (NTRS)
Davis, R. H.
1995-01-01
The focus of the symposium was on multiparticle hydrodynamic interactions which lead to fluctuating motion of the particles and resulting particle migration and dispersion or diffusion. Implications of these phenomena were described for sedimentation, fluidization, suspension flows, granular flows, and fiber suspensions. Computer simulation techniques as well as experimental techniques were described. Each session had an invited leadoff talk which overviewed the session topic as well as described the speaker's own related research. Ample time for discussion was included after each talk as well as at the end of each session. The symposium started with a keynote talk on the first evening, "What is so puzzling about hydrodynamic diffusion?", which set the tone for the rest of the meeting by emphasizing both recent advances and unanswered issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Andres
Transport and reaction in zeolites and other porous materials, such as mesoporous silica particles, has been a focus of interest in recent years. This is in part due to the possibility of anomalous transport effects (e.g. single-file diffusion) and their impact on the reaction yield in catalytic processes. Computational simulations are often used to study these complex nonequilibrium systems. Computer simulations using Molecular Dynamics (MD) techniques are prohibitive, so instead coarse-grained one-dimensional models are used with the aid of Kinetic Monte Carlo (KMC) simulations. Both techniques can be computationally expensive in both time and resources. These coarse-grained systems can be exactly described by a set of coupled stochastic master equations that describe the reaction-diffusion kinetics of the system. The equations can be written exactly; however, coupling between the equations and terms within the equations makes it impossible to solve them exactly, and approximations must be made. One of the most common methods to obtain approximate solutions is Mean Field (MF) theory. MF treatments yield reasonable results at high ratios of reaction rate k to hop rate h of the particles, but fail completely at low k/h due to the over-estimation of fluxes of particles within the pore. We develop a method to estimate fluxes and intrapore diffusivity in simple one-dimensional reaction-diffusion models at high and low k/h, where the pores are coupled to an equilibrated three-dimensional fluid. We thus successfully describe these simple one-dimensional reaction-diffusion systems analytically. Extensions to models considering behavior with long-range steric interactions and wider pores require determination of multiple boundary conditions. We give a prescription to estimate the required parameters for these simulations. For one-dimensional systems, if single-file diffusion is relaxed, additional parameters to describe particle exchange have to be introduced. We use Langevin Molecular Dynamics (MD) simulations to assess these parameters.
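As a concrete illustration of the MF closure described above, the sketch below (an assumed toy model, not the authors' equations) iterates the mean-field steady state of a reactant concentration profile in a 1D pore open to a reservoir at both ends. Correlations between site occupancies are neglected, which is exactly the approximation that breaks down at low k/h:

```python
def mf_steady_state(n_sites, k, h, c_res, n_iter=20000):
    """Jacobi iteration for the mean-field steady state of the reactant
    concentration <A_i> in a 1D pore coupled to a reservoir at both ends:
        0 = h * (c[i-1] + c[i+1] - 2*c[i]) - k * c[i],
    with c = c_res just outside the pore. All spatial correlations are
    neglected (the MF closure)."""
    c = [c_res] * n_sites
    for _ in range(n_iter):
        old = c[:]
        for i in range(n_sites):
            left = old[i - 1] if i > 0 else c_res
            right = old[i + 1] if i < n_sites - 1 else c_res
            c[i] = h * (left + right) / (2.0 * h + k)
    return c

# Reaction slow relative to hopping (high h/k): shallow depletion profile.
profile = mf_steady_state(n_sites=11, k=0.1, h=1.0, c_res=1.0)
```

The profile is symmetric and depleted toward the pore center; the MF flux into the pore is h·(c_res − c[0]), the quantity the abstract says MF over-estimates once k/h is small and single-file correlations matter.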
Multi-fluid CFD analysis in Process Engineering
NASA Astrophysics Data System (ADS)
Hjertager, B. H.
2017-12-01
An overview of modelling and simulation of flow processes in gas/particle and gas/liquid systems is presented. Particular emphasis is given to computational fluid dynamics (CFD) models that use the multi-dimensional multi-fluid techniques. Turbulence modelling strategies for gas/particle flows based on the kinetic theory for granular flows are given. Sub models for the interfacial transfer processes and chemical kinetics modelling are presented. Examples are shown for some gas/particle systems including flow and chemical reaction in risers as well as gas/liquid systems including bubble columns and stirred tanks.
NASA Astrophysics Data System (ADS)
Bowen, James M.
The goal of this research was to investigate the physicochemical properties of weapons-grade plutonium particles originating from the 1960 BOMARC incident, in order to predict their fate in the environment and to address radiation protection and nuclear security concerns. Methods were developed to locate and isolate the particles in order to characterize them. Physical, chemical, and radiological characterization was performed using a variety of techniques. Finally, the particles were subjected to a sequential extraction procedure, a series of increasingly aggressive reagents, to simulate an accelerated environmental exposure. A link between the morphology of the particles and their partitioning amongst environmental mechanisms was established.
Movement and collision of Lagrangian particles in hydro-turbine intakes: a case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Gomez, Pedro; Richmond, Marshall C.
Studies of the stress/survival of migratory fish during downstream passage through operating hydro-turbines are normally conducted to determine the fish-friendliness of units. One field approach consisting of recording extreme hydraulics with autonomous sensors is largely sensitive to the conditions of sensor release and the initial trajectories at the turbine intake. This study applies a modelling strategy based on flow simulations using computational fluid dynamics and Lagrangian particle tracking to represent the travel of live fish and autonomous sensor devices through hydro-turbine intakes. For the flow field calculation, the simulations were conducted with both a time-averaging turbulence model and an eddy-resolving technique. For the particle tracking calculation, different modelling assumptions for turbulence forcing, mass formulation, buoyancy, and release condition were tested. The modelling assumptions are evaluated with respect to data sets collected using a laboratory physical model and an autonomous sensor device deployed at Ice Harbor Dam (Snake River, State of Washington, U.S.A.) at the same discharge and release point as in the present computer simulations. We found an acceptable agreement between the simulated results and observed data and discuss relevant features of Lagrangian particle movement that are critical in turbine design and in the experimental design of field studies.
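The Lagrangian particle-tracking step can be sketched in its simplest one-way-coupled form (illustrative Python; the study's actual model adds turbulence forcing, mass formulation, and buoyancy terms that are omitted here):

```python
def track_particle(x0, v0, dt, n_steps, tau, fluid_velocity):
    """Explicit-Euler integration of a point particle with Stokes-drag
    response time tau in a prescribed flow field (one-way coupling):
        dv/dt = (u_fluid(x) - v) / tau,    dx/dt = v."""
    x, v = x0, v0
    path = [x]
    for _ in range(n_steps):
        u = fluid_velocity(x)
        v = v + dt * (u - v) / tau
        x = x + dt * v
        path.append(x)
    return path, v

# A sensor released at rest into a uniform 2 m/s intake flow (assumed
# numbers) relaxes toward the fluid velocity over a few response times.
path, v_final = track_particle(x0=0.0, v0=0.0, dt=1e-3, n_steps=5000,
                               tau=0.05, fluid_velocity=lambda x: 2.0)
```

The sensitivity to release condition noted in the abstract enters here through x0 and v0: different initial states sample different trajectories through the (in reality non-uniform) CFD velocity field.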
Development of Modeling and Simulation for Magnetic Particle Inspection Using Finite Elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jun-Youl
2003-01-01
Magnetic particle inspection (MPI) is a widely used nondestructive inspection method for aerospace applications that has been essentially limited to experiment-based approaches. The analysis of MPI characteristics that affect sensitivity and reliability contributes not only to reductions in inspection design cost and time but also to improved analysis of experimental data. Magnetic particles are easily attracted toward a high magnetic field gradient. Selection of a magnetic field source, which produces a magnetic field gradient large enough to detect a defect in a test sample or component, is an important factor in magnetic particle inspection. In this work a finite element method (FEM) has been employed for numerical calculation of the MPI simulation technique. The FEM method is known to be suitable for complicated geometries such as defects in samples. This thesis describes the research that is aimed at providing a quantitative scientific basis for magnetic particle inspection. A new FEM solver for MPI simulation has been developed in this research for not only nonlinear reversible permeability materials but also irreversible hysteresis materials that are described by the Jiles-Atherton model. The material is assumed to have isotropic ferromagnetic properties in this research (i.e., the magnetic properties of the material are identical in all directions in a single crystal). In the research, with a direct current field mode, an MPI situation has been simulated to measure the estimated volume of magnetic particles around defect sites before and after removing any external current fields. Currently, this new MPI simulation package is limited to solving problems with a single current source from either a solenoid or an axial directional current rod.
Global Magnetosphere Modeling With Kinetic Treatment of Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Toth, G.; Chen, Y.; Gombosi, T. I.; Cassak, P.; Markidis, S.; Peng, B.; Henderson, M. G.
2017-12-01
Global magnetosphere simulations with a kinetic treatment of magnetic reconnection are very challenging because of the large separation of global and kinetic scales. We have developed two algorithms that can overcome these difficulties: 1) the two-way coupling of the global magnetohydrodynamic code with an embedded particle-in-cell model (MHD-EPIC) and 2) the artificial increase of the ion and electron kinetic scales. Both of these techniques improve the efficiency of the simulations by many orders of magnitude. We will describe the techniques and show that they provide correct and meaningful results. Using the coupled model and the increased kinetic scales, we will present global magnetosphere simulations with the PIC domains covering the dayside and/or tail reconnection sites. The simulation results will be compared to and validated with MMS observations.
Medical Applications at CERN and the ENLIGHT Network
Dosanjh, Manjit; Cirilli, Manuela; Myers, Steve; Navin, Sparsh
2016-01-01
State-of-the-art techniques derived from particle accelerators, detectors, and physics computing are routinely used in clinical practice and medical research centers: from imaging technologies to dedicated accelerators for cancer therapy and nuclear medicine, simulations, and data analytics. Principles of particle physics themselves are the foundation of a cutting edge radiotherapy technique for cancer treatment: hadron therapy. This article is an overview of the involvement of CERN, the European Organization for Nuclear Research, in medical applications, with specific focus on hadron therapy. It also presents the history, achievements, and future scientific goals of the European Network for Light Ion Hadron Therapy, whose co-ordination office is at CERN. PMID:26835422
NASA Astrophysics Data System (ADS)
Ozdemir, Ozan C.; Widener, Christian A.; Carter, Michael J.; Johnson, Kyle W.
2017-10-01
As the industrial application of the cold spray technology grows, the need to optimize both the cost and the quality of the process grows with it. Parameter selection techniques available today require solving a coupled system of equations to account for the losses due to particle loading in the gas stream. Such analyses cause a significant increase in the computational time in comparison with calculations using isentropic flow assumptions. In cold spray operations, engineers and operators may, therefore, neglect the effects of particle loading to simplify the multiparameter optimization process. In this study, two-way coupled (particle-fluid) quasi-one-dimensional fluid dynamics simulations are used to test the particle loading effects under many potential cold spray scenarios. Output of the simulations is statistically analyzed to build regression models that estimate the changes in particle impact velocity and temperature due to particle loading. This approach eases particle loading optimization for more complete analysis of deposition cost and time. The model was validated both numerically and experimentally. Further numerical analyses were completed to test the particle loading capacity and limitations of a nozzle with a commonly used throat size. Additional experimentation helped document the physical limitations to high-rate deposition.
Explosive particle soil surface dispersion model for detonated military munitions.
Hathaway, John E; Rishel, Jeremy P; Walsh, Marianne E; Walsh, Michael R; Taylor, Susan
2015-07-01
The accumulation of high explosive mass residue from the detonation of military munitions on training ranges is of environmental concern because of its potential to contaminate the soil, surface water, and groundwater. The US Department of Defense wants to quantify, understand, and remediate high explosive mass residue loadings that might be observed on active firing ranges. Previously, efforts using various sampling methods and techniques have resulted in limited success, due in part to the complicated dispersion pattern of the explosive particle residues upon detonation. In our efforts to simulate particle dispersal for high- and low-order explosions on hypothetical firing ranges, we use experimental particle data from detonations of munitions fired from a 155-mm howitzer, among the most common military munitions. The mass loadings resulting from these simulations provide a previously unattained level of detail to quantify the explosive residue source-term for use in soil and water transport models. In addition, the resulting particle placements can be used to test, validate, and optimize particle sampling methods and statistical models as applied to firing ranges. Although the presented results are for a hypothetical 155-mm howitzer firing range, the method can be used for other munition types once the explosive particle characteristics are known.
NASA Astrophysics Data System (ADS)
D'Andrea, S. D.; Ng, J. Y.; Kodros, J. K.; Atwood, S. A.; Wheeler, M. J.; Macdonald, A. M.; Leaitch, W. R.; Pierce, J. R.
2016-01-01
Remote and free-tropospheric aerosols represent a large fraction of the climatic influence of aerosols; however, aerosol in these regions is less well characterized than that in polluted boundary layers. We evaluate aerosol size distributions predicted by the GEOS-Chem-TOMAS global chemical transport model with online aerosol microphysics using measurements from the peak of Whistler Mountain, British Columbia, Canada (2182 m a.s.l., hereafter referred to as Whistler Peak). We evaluate the model for predictions of aerosol number, size, and composition during periods of free-tropospheric (FT) and boundary-layer (BL) influence at "coarse" 4° × 5° and "nested" 0.5° × 0.667° resolutions by developing simple FT/BL filtering techniques. We find that using temperature as a proxy for upslope flow (BL influence) improved the model-measurement comparisons. The best threshold temperature was around 2 °C for the coarse simulations and around 6 °C for the nested simulations, with temperatures warmer than the threshold indicating boundary-layer air. Additionally, the site was increasingly likely to be in cloud when the measured relative humidity (RH) was above 90 %, so we do not compare the modeled and measured size distributions during these periods. With the inclusion of these temperature and RH filtering techniques, the model-measurement comparisons improved significantly. The slope of the regression for N80 (the total number of particles with particle diameter, Dp, > 80 nm) in the nested simulations increased from 0.09 to 0.65, R2 increased from 0.04 to 0.46, and log-mean bias improved from 0.95 to 0.07. We also perform simulations at the nested resolution without Asian anthropogenic emissions and without biomass-burning emissions to quantify the contribution of these sources to aerosols at Whistler Peak (through comparison with simulations that include these emissions).
The long-range transport of Asian anthropogenic aerosol was found to be significant across all particle number concentrations, and increased N80 by more than 50 %, while decreasing the number of smaller particles because of suppression of new-particle formation and an enhanced coagulation sink. Similarly, biomass burning influenced Whistler Peak during summer months, with an increase in N80 exceeding 5000 cm^-3. Occasionally, Whistler Peak experienced N80 > 1000 cm^-3 without significant influence from Asian anthropogenic or biomass-burning aerosol. Air masses were advected at low elevations through forested valleys during times when temperature and downwelling insolation were high, ideal conditions for formation of large sources of low-volatility biogenic secondary organic aerosol (SOA). This condensable material increased particle growth and hence N80. The low-cost filtering techniques and source apportionment used in this study can be used in other global models to give insight into the sources and processes that shape the aerosol at mountain sites, leading to a better understanding of mountain meteorology and chemistry.
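The temperature-threshold filtering and the log-mean-bias metric are both simple to implement. A sketch with hypothetical data (the record fields and values are invented for illustration; only the 2 °C threshold is taken from the abstract's coarse-resolution result):

```python
import math

def bl_filter(records, t_threshold):
    """Keep only free-tropospheric samples: site temperatures below the
    upslope-flow threshold (warmer air indicates boundary-layer influence)."""
    return [r for r in records if r["temp_c"] < t_threshold]

def log_mean_bias(modeled, measured):
    """Mean of log10(model / measurement); 0 for an unbiased model."""
    return sum(math.log10(m / o) for m, o in zip(modeled, measured)) / len(modeled)

# Hypothetical N80 comparisons (cm^-3) tagged with site temperature:
records = [
    {"temp_c": -1.0, "n80_model": 210.0, "n80_obs": 200.0},
    {"temp_c": 0.5,  "n80_model": 95.0,  "n80_obs": 100.0},
    {"temp_c": 8.0,  "n80_model": 40.0,  "n80_obs": 400.0},  # upslope, BL air
]
ft = bl_filter(records, t_threshold=2.0)
bias = log_mean_bias([r["n80_model"] for r in ft], [r["n80_obs"] for r in ft])
```

Dropping the warm (boundary-layer-influenced) sample removes the large model under-prediction that a coarse grid cannot capture, which is the mechanism behind the reported improvement in slope, R2, and log-mean bias.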
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053, ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
NASA Astrophysics Data System (ADS)
Oono, Naoko; Ukai, Shigeharu; Kondo, Sosuke; Hashitomi, Okinobu; Kimura, Akihiko
2015-10-01
Oxide particle dispersion strengthened (ODS) Ni-base alloys are irradiated using a simulation technique (Fe/He dual-ion irradiation) to investigate their reliability for Gen. IV high-temperature reactors. The fine oxide particles, with less than 10 nm average size and approximately 8.0 × 10^22 m^-3 number density, remained after 101 dpa irradiation. The tiny helium bubbles were inside grains, not at grain boundaries; this is an advantageous effect of the oxide particles, which trap helium atoms at the particle-matrix interface. Ni-base ODS alloys demonstrated their great ability to overcome He embrittlement.
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Nixon, Robert H. (Inventor); Soli, George A. (Inventor); Blaes, Brent R. (Inventor)
1995-01-01
A method for predicting the SEU susceptibility of a standard-cell D-latch using an alpha-particle-sensitive SRAM, SPICE critical-charge simulation results, and alpha-particle interaction physics. A technique utilizing test structures to quickly and inexpensively characterize the SEU sensitivity of standard-cell latches intended for use in a space environment. This bench-level approach utilizes alpha particles to induce upsets in a low-LET-sensitive 4-kbit test SRAM. This SRAM consists of cells that employ an offset voltage to adjust their upset sensitivity and an enlarged sensitive drain junction to enhance the cell's upset rate.
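The link between alpha-particle interaction physics and a SPICE-derived critical charge rests on the standard figure of one electron-hole pair per 3.6 eV deposited in silicon. A sketch of that arithmetic (Python; the LET, path length, and critical-charge values are illustrative, not those of the patented test SRAM):

```python
E_PAIR_SI_EV = 3.6   # energy to create one electron-hole pair in Si (eV)
Q_E = 1.602e-19      # elementary charge (C)

def collected_charge_fc(let_mev_per_um, path_um, efficiency=1.0):
    """Charge (fC) liberated by a particle of linear energy transfer
    `let_mev_per_um` crossing `path_um` of sensitive silicon."""
    e_dep_ev = let_mev_per_um * path_um * 1.0e6   # MeV -> eV
    pairs = e_dep_ev / E_PAIR_SI_EV
    return pairs * Q_E * efficiency * 1.0e15      # C -> fC

def predicts_upset(q_collected_fc, q_crit_fc):
    """Upset predicted when collected charge exceeds the circuit's
    SPICE-simulated critical charge."""
    return q_collected_fc > q_crit_fc

# Illustrative numbers: ~0.15 MeV/um alpha LET over a 2 um sensitive depth.
q_fc = collected_charge_fc(0.15, 2.0)
```

Comparing q_fc against the critical charge of each cell (tunable here via the offset voltage) is what turns a bench-top alpha source into a predictor of space SEU rates.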
Shearing Low-frictional 3D Granular Materials
NASA Astrophysics Data System (ADS)
Chen, David; Zheng, Hu; Behringer, Robert
Shear jamming occurs in frictional particles over a range of packing fractions, from random loose to random dense. Simulations show shear jamming for frictionless spheres, but over a vanishing range as the system size grows. We use packings of submerged, refractive-index-matched hydrogel particles to determine the shear-induced microscopic response of 3D, low-friction granular systems near jamming, bridging the gap between frictionless and low-friction packings. We visualize the particles by a laser scanning technique, and we track particle motion along with interparticle contact forces from the 3D reconstructions. NSF-DMF-1206351, NASA NNX15AD38G, William M. Keck Foundation, and DARPA.
Fast emulation of track reconstruction in the CMS simulation
NASA Astrophysics Data System (ADS)
Komm, Matthias; CMS Collaboration
2017-10-01
Simulated samples of various physics processes are a key ingredient of analyses that unlock the physics behind LHC collision data. Samples with ever larger statistics are required to keep up with the increasing amounts of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged-particle tracks from energy deposits, which additionally scales with the pileup conditions. In CMS, the FastSimulation package is developed to provide a fast alternative to the standard simulation and reconstruction workflow. It employs various techniques to emulate track reconstruction effects in particle collision events. Several analysis groups in CMS are utilizing the package, in particular those requiring many samples to scan the parameter space of physics models (e.g. SUSY) or for the purpose of estimating systematic uncertainties. The strategies for and recent developments in this emulation are presented, including a novel, flexible implementation of tracking emulation while retaining sufficient, tuneable accuracy.
A mesoscopic simulation on distributions of red blood cells in a bifurcating channel
NASA Astrophysics Data System (ADS)
Inoue, Yasuhiro; Takagi, Shu; Matsumoto, Yoichiro
2004-11-01
Transport of red blood cells (RBCs) and particles in bifurcating channels has been attracting renewed interest since the advent of MEMS concepts for sorting, analyzing, and removing cells or particles from a sample medium. In this talk, we present results on the transport of RBCs in a bifurcating channel studied using a mesoscale simulation technique for immiscible droplets, in which RBCs are modeled as immiscible droplets. The distribution of RBCs is represented by the fractional RBC flux into the two daughters as a function of the volumetric flow ratio between the daughters. The data obtained in our simulations are examined against a theoretical prediction in which we assume an exponential distribution for the positions of RBCs in the mother channel. The theoretical predictions show good agreement with the simulation results. A non-uniform distribution of RBCs in the mother channel leads to a disproportional separation of RBC flux at a bifurcation.
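The flavor of such a theoretical prediction can be sketched under strong simplifying assumptions (Python; a plug-flow dividing streamline and a truncated-exponential lateral distribution are assumed here, which is cruder than the authors' actual model):

```python
import math

def rbc_flux_fraction(flow_fraction, lam):
    """Fraction of the RBC flux entering daughter 1 when it draws a
    fraction q of the total volumetric flow. Cell positions across the
    mother channel (normalized width 1) are assumed exponentially
    distributed with decay length lam; under plug flow the dividing
    streamline sits at y* = q, so the RBC fraction is the truncated-
    exponential CDF evaluated there."""
    q = flow_fraction
    return (1.0 - math.exp(-q / lam)) / (1.0 - math.exp(-1.0 / lam))

# Cells concentrated near one wall (lam = 0.3): an equal flow split
# carries a disproportionate share of the cells into daughter 1.
f_half = rbc_flux_fraction(0.5, lam=0.3)
```

A uniform distribution (lam → ∞) would give f = q; the exponential skew is what produces the disproportional separation reported above.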
New methods in WARP, a particle-in-cell code for space-charge dominated beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grote, D., LLNL
1998-01-12
The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space-charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments will be discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has been developed using Python.
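The first step of every electrostatic PIC cycle of the kind WARP performs is depositing particle charge onto the grid. A minimal 1D cloud-in-cell sketch (generic Python, not WARP code):

```python
def deposit_charge(positions, q, n_cells, dx):
    """Cloud-in-cell (linear-weighting) charge deposition onto a periodic
    1D grid: each particle splits its charge between the two nearest grid
    points in proportion to proximity. Returns charge density per cell."""
    rho = [0.0] * n_cells
    for x in positions:
        s = x / dx
        i = int(s) % n_cells
        frac = s - int(s)                     # fractional cell position
        rho[i] += q * (1.0 - frac) / dx       # share to left node
        rho[(i + 1) % n_cells] += q * frac / dx   # share to right node
    return rho

# Three unit charges on a 4-cell periodic grid of spacing dx = 1.
rho = deposit_charge([0.25, 1.5, 3.9], q=1.0, n_cells=4, dx=1.0)
total = sum(r * 1.0 for r in rho)   # integral of rho over the grid
```

Linear weighting conserves total charge exactly, which is essential when the self-field dominates the dynamics as it does for space-charge dominated beams; the field solve and particle push then complete the cycle.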
NASA Astrophysics Data System (ADS)
Ahuja, V. R.; van der Gucht, J.; Briels, W. J.
2018-01-01
We present a novel coarse-grain particle-based simulation technique for modeling self-developing flow of dilute and semi-dilute polymer solutions. The central idea in this paper is the two-way coupling between a mesoscopic polymer model and a phenomenological fluid model. As our polymer model, we choose Responsive Particle Dynamics (RaPiD), a Brownian dynamics method, which formulates the so-called "conservative" and "transient" pair-potentials through which the polymers interact besides experiencing random forces in accordance with the fluctuation dissipation theorem. In addition to these interactions, our polymer blobs are also influenced by the background solvent velocity field, which we calculate by solving the Navier-Stokes equation discretized on a moving grid of fluid blobs using the Smoothed Particle Hydrodynamics (SPH) technique. While the polymers experience this frictional force opposing their motion relative to the background flow field, our fluid blobs also in turn are influenced by the motion of the polymers through an interaction term. This makes our technique a two-way coupling algorithm. We have constructed this interaction term in such a way that momentum is conserved locally, thereby preserving long range hydrodynamics. Furthermore, we have derived pairwise fluctuation terms for the velocities of the fluid blobs using the Fokker-Planck equation, which have been alternatively derived using the General Equation for the Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) approach in Smoothed Dissipative Particle Dynamics (SDPD) literature. These velocity fluctuations for the fluid may be incorporated into the velocity updates for our fluid blobs to obtain a thermodynamically consistent distribution of velocities. In cases where these fluctuations are insignificant, however, these additional terms may well be dropped out as they are in a standard SPH simulation. 
We have applied our technique to study the rheology of two different concentrations of our model linear polymer solutions. The results show that the polymers and the fluid are coupled very well with each other, showing no lag between their velocities. Furthermore, our results show non-Newtonian shear thinning and the characteristic flattening of the Poiseuille flow profile typically observed for polymer solutions.
1988-06-30
The Fokker-Planck (FP) equation is solved using finite-difference methods. The distribution function is represented by a large number of particles, whose velocities change as a result of small-angle Coulomb collisions. The FP equation describing small-angle Coulomb collisions can be solved numerically using finite-difference techniques: a finite Fourier transform is taken in z, and the resulting equation is solved for each wavenumber k using a finite-difference scheme [5].
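The finite-difference approach mentioned above can be illustrated with a generic explicit update for a 1-D diffusion-type Fokker-Planck operator. This is a sketch only: the report's actual per-mode scheme is not recoverable from the excerpt, and the coefficient D and the fixed-endpoint boundary treatment are placeholders:

```python
import numpy as np

def fp_diffusion_step(f, dv, dt, D=1.0):
    """One explicit finite-difference step of df/dt = D d2f/dv2 on a
    uniform velocity grid with fixed end values (illustrative only;
    stable for dt <= dv**2 / (2*D))."""
    fn = f.copy()
    fn[1:-1] += dt * D * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dv**2
    return fn
```

A single step leaves a flat profile unchanged and smooths a peaked one, as expected of a diffusion operator.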
Smoothed particle hydrodynamics method for evaporating multiphase flows.
Yang, Xiufeng; Kong, Song-Charng
2017-09-01
The smoothed particle hydrodynamics (SPH) method has been increasingly used for simulating fluid flows; however, its ability to simulate evaporating flow requires significant improvements. This paper proposes an SPH method for evaporating multiphase flows. The present SPH method can simulate the heat and mass transfers across the liquid-gas interfaces. The conservation equations of mass, momentum, and energy were reformulated based on SPH, then were used to govern the fluid flow and heat transfer in both the liquid and gas phases. The continuity equation of the vapor species was employed to simulate the vapor mass fraction in the gas phase. The vapor mass fraction at the interface was predicted by the Clausius-Clapeyron correlation. An evaporation rate was derived to predict the mass transfer from the liquid phase to the gas phase at the interface. Because of the mass transfer across the liquid-gas interface, the mass of an SPH particle was allowed to change. Alternative particle splitting and merging techniques were developed to avoid large mass difference between SPH particles of the same phase. The proposed method was tested by simulating three problems, including the Stefan problem, evaporation of a static drop, and evaporation of a drop impacting a hot surface. For the Stefan problem, the SPH results of the evaporation rate at the interface agreed well with the analytical solution. For drop evaporation, the SPH result was compared with the result predicted by a level-set method from the literature. In the case of drop impact on a hot surface, the evolution of the shape of the drop, temperature, and vapor mass fraction were predicted.
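The interface condition described above can be sketched as follows. The water-like property values (constant latent heat, molar masses, and the reference boiling point) are assumptions chosen for illustration, not the paper's settings:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def p_sat(T, L=2.26e6, M_v=0.018, p_ref=101325.0, T_ref=373.15):
    """Saturation vapor pressure (Pa) from the Clausius-Clapeyron
    relation with constant latent heat L (J/kg); water-like defaults
    are illustrative assumptions."""
    return p_ref * math.exp(-L * M_v / R_GAS * (1.0 / T - 1.0 / T_ref))

def interface_mass_fraction(T, p=101325.0, M_v=0.018, M_g=0.029):
    """Vapor mass fraction at the liquid-gas interface, assuming the
    vapor partial pressure equals p_sat(T) and ideal-gas mixing of
    vapor (molar mass M_v) and background gas (molar mass M_g)."""
    pv = min(p_sat(T), p)
    return pv * M_v / (pv * M_v + (p - pv) * M_g)
```

At the reference boiling point the vapor mass fraction reaches unity; below it, the interface value decreases with temperature, which is what drives the diffusive mass transfer into the gas phase.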
Multiscale simulation of molecular processes in cellular environments.
Chiricotto, Mara; Sterpone, Fabio; Derreumaux, Philippe; Melchionna, Simone
2016-11-13
We describe the recent advances in studying biological systems via multiscale simulations. Our scheme is based on a coarse-grained representation of the macromolecules and a mesoscopic description of the solvent. The dual technique handles particles, the aqueous solvent and their mutual exchange of forces, resulting in a stable and accurate methodology allowing biosystems of unprecedented size to be simulated. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
Jungnickel, H; Pund, R; Tentschert, J; Reichardt, P; Laux, P; Harbach, H; Luch, A
2016-09-01
Plastic particles smaller than 5 mm, so-called microplastics, have the capability to accumulate in rivers, lakes and the marine environment and have therefore begun to be considered in eco-toxicology and human health risk assessment. Environmental microplastic contaminants may originate from consumer products like body wash, toothpastes and cosmetic products, but also from degradation of plastic waste; they represent a potential but unpredictable threat to aquatic organisms and possibly also to humans. We investigated, exemplarily for polyethylene (PE), the most abundant constituent of microplastic particles in the environment, whether such fragments could be produced from larger pellets (2 mm × 6 mm). So far only few analytical methods exist to identify microplastic particles smaller than 10 μm, and in particular no imaging mass spectrometry technique. We used, for the first time, time-of-flight secondary ion mass spectrometry (ToF-SIMS) for analysis and imaging of small PE microplastic particles directly in the model system Ottawa sand during exposure to sea surf simulation. As a prerequisite, a method for identification of PE was established by identifying characteristic ions for PE from an analysis of ground polymer samples. The method was applied to Ottawa sand in order to investigate the influence of simulated environmental conditions on particle transformation. Severe degradation of the primary PE pellet surface, associated with the transformation of larger particles into smaller ones, was observed already after 14 days of sea surf simulation. Within the subsequent period of 14 days to 1 month of exposure, the number of detected smallest-sized particles increased significantly (50%) while the second-smallest fraction increased even further, to 350%. Results were verified using artificially degraded PE pellets and Ottawa sand. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ovaysi, S.; Piri, M.
2009-12-01
We present a three-dimensional fully dynamic parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input, where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique. We modify this technique in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account viscous pressure drop in addition to gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in the neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions, starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors.
Excellent linear scalability is obtained for sample B, which has a more uniform distribution of solid particles leading to superior load balancing. The model is then used to simulate fluid flow directly in REV-size three-dimensional x-ray images of a naturally occurring sandstone. We analyze the quality and consistency of the predicted flow behavior and calculate absolute permeability, which compares well with the network modeling and lattice-Boltzmann permeabilities available in the literature for the same sandstone. We show that the model conserves mass very well and is computationally stable even in very narrow fluid conduits. The transient and steady-state fluid flow patterns are presented, as well as the steady-state flow rates used to compute absolute permeability. Furthermore, we discuss the vital role of our adaptive particle resolution scheme in preserving the original pore connectivity of the samples and their narrow channels through splitting and merging of fluid particles.
Hypervelocity Impact Test Facility: A gun for hire
NASA Technical Reports Server (NTRS)
Johnson, Calvin R.; Rose, M. F.; Hill, D. C.; Best, S.; Chaloupka, T.; Crawford, G.; Crumpler, M.; Stephens, B.
1994-01-01
An affordable technique has been developed to duplicate the types of impacts observed on spacecraft, including the Shuttle, by use of a certified Hypervelocity Impact Facility (HIF) which propels particulates using capacitor driven electric gun techniques. The fully operational facility provides a flux of particles in the 10-100 micron diameter range with a velocity distribution covering the space debris and interplanetary dust particle environment. HIF measurements of particle size, composition, impact angle and velocity distribution indicate that such parameters can be controlled in a specified, tailored test designed for or by the user. Unique diagnostics enable researchers to fully describe the impact for evaluating the 'targets' under full power or load. Users regularly evaluate space hardware, including solar cells, coatings, and materials, exposing selected portions of space-qualified items to a wide range of impact events and environmental conditions. Benefits include corroboration of data obtained from impact events, flight simulation of designs, accelerated aging of systems, and development of manufacturing techniques.
Vortex with fourfold defect lines in a simple model of self-propelled particles
NASA Astrophysics Data System (ADS)
Seyed-Allaei, Hamid; Ejtehadi, Mohammad Reza
2016-03-01
We study the formation of a vortex with fourfold symmetry in a minimal model of self-propelled particles, confined inside a squared box, using computer simulations and also theoretical analysis. In addition to the vortex pattern, we observe five other regimes in the system: a homogeneous gaseous phase, band structures, moving clumps, moving clusters, and vibrating rings. All six regimes emerge from controlling the strength of noise and from the contribution of repulsion and alignment interactions. We study the shape of the vortex and its symmetry in detail. The pattern shows exponential defect lines where incoming and outgoing flows of particles collide. We show that alignment and repulsion interactions between particles are necessary to form such patterns. We derive hydrodynamical equations with an introduction of the "small deviation" technique to describe the vortex phase. The method is applicable to other systems as well. Finally, we compare the theory with the results of both computer simulations and an experiment using Quincke rotors. A good agreement between the three is observed.
Ma, Baoshun; Ruwet, Vincent; Corieri, Patricia; Theunissen, Raf; Riethmuller, Michel; Darquenne, Chantal
2009-01-01
Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy. PMID:20161301
Radiation-induced rotation of small celestial bodies
NASA Technical Reports Server (NTRS)
Misconi, N. Y.; Oliver, John; Mzariegos, Roberto
1992-01-01
The rotation of particles in a simulated space environment was studied via a technique known as Laser Particle Levitation, in which the combination of a high vacuum and optical laser levitation (to negate the effect of Earth's gravity) simulates the space environment. The rotation mechanism under study is known as the 'Windmill Effect', a spin mechanism in which the interaction of the photon field of a star with the surface irregularities of cosmic dust grains causes them to spin: the imbalance in the directionality of the scattered photons imparts a non-zero angular momentum. This conclusion is based on the random orientation of the sites of surface irregularities. The general objective is to study the behavior of particles in orbit around the Earth, both natural and man-made, as well as interplanetary and circumstellar particles. To meet this objective, an apparatus designed to allow optical levitation in a vacuum was constructed.
NASA Astrophysics Data System (ADS)
Ireland, Peter J.; Collins, Lance R.
2012-11-01
Turbulence-induced collision of inertial particles may contribute to the rapid onset of precipitation in warm cumulus clouds. The particle collision frequency is determined from two parameters: the radial distribution function g (r) and the mean inward radial relative velocity
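These two quantities combine into the spherical-formulation collision kernel of Sundaram & Collins, K(R) = 2πR²g(R)⟨|w_r|⟩, where R is the collision radius, so that the collision rate per unit volume is n₁n₂K. The pair-counting g(r) estimator below is a minimal sketch under assumed conditions (periodic cubic box, O(N²) direct counting with no cell lists):

```python
import numpy as np

def radial_distribution(pos, r_lo, r_hi, box):
    """Estimate g(r) in the shell [r_lo, r_hi) for particles in a
    periodic cubic box of side `box` by direct pair counting
    (O(N^2), no cell lists; a minimal sketch only)."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))
    i, j = np.triu_indices(n, k=1)
    pairs = np.count_nonzero((r[i, j] >= r_lo) & (r[i, j] < r_hi))
    shell_vol = 4.0 / 3.0 * np.pi * (r_hi ** 3 - r_lo ** 3)
    expected = 0.5 * n * (n - 1) * shell_vol / box ** 3
    return pairs / expected
```

For uniformly random (uncorrelated) particles the estimate is close to 1; inertial particles that preferentially concentrate in turbulence give g(R) well above 1 at small separations, which is one of the two routes by which turbulence enhances the collision rate.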
Modeling target normal sheath acceleration using handoffs between multiple simulations
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard
2013-10-01
We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, a full-size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported into a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show the use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.
Transition from fractional to classical Stokes-Einstein behaviour in simple fluids.
Coglitore, Diego; Edwardson, Stuart P; Macko, Peter; Patterson, Eann A; Whelan, Maurice
2017-12-01
An optical technique for tracking single particles has been used to evaluate the particle diameter at which diffusion transitions from molecular behaviour described by the fractional Stokes-Einstein relationship to particle behaviour described by the classical Stokes-Einstein relationship. The results confirm a prior prediction from molecular dynamics simulations that there is a particle size at which the transition occurs, and show that it is inversely dependent on concentration and viscosity but independent of particle density. For concentrations in the range 5 × 10-3 to 5 × 10-6 mg ml-1 and viscosities from 0.8 to 150 mPa s, the transition was found to occur in the diameter range 150-300 nm.
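The classical relationship referred to above is D = k_B T / (3πμd) for a sphere of diameter d in a fluid of dynamic viscosity μ; the fractional form replaces the linear dependence on T/μ with a power law (T/μ)^t, t < 1. A sketch of the classical case (the example fluid properties are illustrative, not the paper's):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(T, mu, d):
    """Classical Stokes-Einstein diffusion coefficient (m^2/s) for a
    sphere of diameter d (m) in a fluid of dynamic viscosity mu (Pa s)
    at temperature T (K)."""
    return K_B * T / (3.0 * math.pi * mu * d)
```

For example, a 200 nm particle in water at 298 K (μ ≈ 0.89 mPa s) gives D ≈ 2.5 × 10-12 m²/s, i.e. about 2.5 μm²/s, which sets the scale of the Brownian displacements the single-particle tracking technique must resolve.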
Recovering 3D particle size distributions from 2D sections
NASA Astrophysics Data System (ADS)
Cuzzi, Jeffrey N.; Olson, Daniel M.
2017-03-01
We discuss different ways to convert observed, apparent particle size distributions from 2D sections (thin sections, SEM maps on planar surfaces, etc.) into true 3D particle size distributions. We give a simple, flexible, and practical method to do this; show which of these techniques gives the most faithful conversions; and provide (online) short computer codes to calculate both 2D-3D recoveries and simulations of 2D observations by random sectioning. The most important systematic bias of 2D sectioning, from the standpoint of most chondrite studies, is an overestimate of the abundance of the larger particles. We show that fairly good recoveries can be achieved from observed size distributions containing 100-300 individual measurements of apparent particle diameter.
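The overestimate of large-particle abundance arises because a random section plane is more likely to intersect a large sphere, while every intersection yields an apparent diameter no larger than the true one. The forward (3D to 2D) simulation for monodisperse spheres, the building block of such random-sectioning codes, can be sketched as follows (a minimal illustration, not the authors' published code):

```python
import numpy as np

def section_radii(R, n, rng):
    """Apparent circle radii from n random planar sections of spheres of
    true radius R (Wicksell's corpuscle problem). A plane at offset z
    from a sphere's centre cuts a circle of radius sqrt(R^2 - z^2);
    for spheres intersected by the plane, z is uniform over [-R, R]."""
    z = rng.uniform(-R, R, n)
    return np.sqrt(R ** 2 - z ** 2)
```

Even for identical spheres the apparent sizes are spread over (0, R], with mean apparent radius (π/4)R ≈ 0.785R, so raw 2D measurements systematically understate individual particle sizes at the same time as they over-sample the large ones.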
Advanced Accelerators: Particle, Photon and Plasma Wave Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Ronald L.
2017-06-29
The overall objective of this project was to study the acceleration of electrons to very high energies over very short distances based on trapping slowly moving electrons in the fast moving potential wells of large amplitude plasma waves, which have relativistic phase velocities. These relativistic plasma waves, or wakefields, are the basis of table-top accelerators that have been shown to accelerate electrons to the same high energies as kilometer-length linear particle colliders operating using traditional decades-old acceleration techniques. The accelerating electrostatic fields of the relativistic plasma wave accelerators can be as large as GigaVolts/meter, and our goal was to study techniques for remotely measuring these large fields by injecting low energy probe electron beams across the plasma wave and measuring the beam’s deflection. Our method of study was via computer simulations, and these results suggested that the deflection of the probe electron beam was directly proportional to the amplitude of the plasma wave. This is the basis of a proposed diagnostic technique, and numerous studies were performed to determine the effects of changing the electron beam, plasma wave and laser beam parameters. Further simulation studies included copropagating laser beams with the relativistic plasma waves. New interesting results came out of these studies including the prediction that very small scale electron beam bunching occurs, and an anomalous line focusing of the electron beam occurs under certain conditions. These studies were summarized in the dissertation of a graduate student who obtained the Ph.D. in physics. This past research program has motivated ideas for further research to corroborate these results using particle-in-cell simulation tools which will help design a test-of-concept experiment in our laboratory and a scaled up version for testing at a major wakefield accelerator facility.
Particle Acceleration in Pulsar Wind Nebulae: PIC Modelling
NASA Astrophysics Data System (ADS)
Sironi, Lorenzo; Cerutti, Benoît
We discuss the role of PIC simulations in unveiling the origin of the emitting particles in PWNe. After describing the basics of the PIC technique, we summarize its implications for the quiescent and the flaring emission of the Crab Nebula, as a prototype of PWNe. A consensus seems to be emerging that, in addition to the standard scenario of particle acceleration via the Fermi process at the termination shock of the pulsar wind, magnetic reconnection in the wind, at the termination shock and in the Nebula plays a major role in powering the multi-wavelength signatures of PWNe.
Study of Submicron Particle Size Distributions by Laser Doppler Measurement of Brownian Motion.
1984-10-29
Appendix: Computer Simulation of the Brownian Motion Sensor Signals. … scattering regime by analysis of the scattered light intensity and particle mass (size) obtained using the Brownian motion sensor. Task V: By application of the Brownian motion sensor in a flat-flame burner, the contractor shall assess the application of this technique for in-situ sizing of submicron particles.
Retrieval of subvisual cirrus cloud optical thickness from limb-scatter measurements
NASA Astrophysics Data System (ADS)
Wiensz, J. T.; Degenstein, D. A.; Lloyd, N. D.; Bourassa, A. E.
2013-01-01
We present a technique for estimating the optical thickness of subvisual cirrus clouds detected by OSIRIS (Optical Spectrograph and Infrared Imaging System), a limb-viewing satellite instrument that measures scattered radiances from the UV to the near-IR. The measurement set is composed of a ratio of limb radiance profiles at two wavelengths that indicates the presence of cloud-scattering regions. Cross-sections and phase functions from an in situ database are used to simulate scattering by cloud particles. With the appropriate configurations discussed in this paper, the SASKTRAN successive-orders-of-scatter radiative transfer model is able to accurately simulate the in-cloud radiances from OSIRIS. Configured in this way, the model is used with a multiplicative algebraic reconstruction technique (MART) to retrieve the cloud extinction profile for an assumed effective cloud particle size. The sensitivity of these retrievals to key auxiliary model parameters is shown, and the retrieved extinction profile, for an assumed effective cloud particle size, is shown to model the measured in-cloud radiances from OSIRIS well. The greatest sensitivity of the retrieved optical thickness is to the effective cloud particle size. Since OSIRIS has an 11-yr record of subvisual cirrus cloud detections, the work described in this manuscript offers a very useful method for building a long-term global record of the properties of these clouds.
Plasma electron hole kinematics. II. Hole tracking Particle-In-Cell simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C.; Hutchinson, I. H.
The kinematics of a 1-D electron hole is studied using a novel Particle-In-Cell simulation code. A hole tracking technique enables us to follow the trajectory of a fast-moving solitary hole and study quantitatively hole acceleration and coupling to ions. We observe a transient at the initial stage of hole formation when the hole accelerates to several times the cold-ion sound speed. Artificially imposing slow ion speed changes on a fully formed hole causes its velocity to change even when the ion stream speed in the hole frame greatly exceeds the ion thermal speed, so there are no reflected ions. The behavior that we observe in numerical simulations agrees very well with our analytic theory of hole momentum conservation and the effects of "jetting".
Controlled release of anticancer drug methotrexate from biodegradable gelatin microspheres.
Narayani, R; Rao, K P
1994-01-01
Biodegradable hydrophilic gelatin microspheres containing the anticancer drug methotrexate (MTX) of different mean particle sizes (1-5, 5-10, and 15-20 microns) were prepared by polymer dispersion technique and crosslinked with glutaraldehyde. The microspheres were uniform, smooth, solid and in the form of free-flowing powder. About 80 per cent of MTX was incorporated in gelatin microspheres of different sizes. The in vitro release of MTX was investigated in two different media, namely simulated gastric and intestinal fluids. The release profiles indicated that gelatin microspheres released MTX in a zero-order fashion for 4-6 days in simulated gastric fluid and for 5-8 days in simulated intestinal fluid. The rate of release of MTX decreased with increase in the particle size of the microspheres. MTX release was faster in gastric fluid when compared to intestinal fluid.
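Zero-order release means the cumulative amount released grows linearly in time until the payload is exhausted; for example, complete release over 5 days corresponds to a rate constant of 20% per day. A minimal sketch (the rate constant passed in is hypothetical, not fitted to the paper's data):

```python
def zero_order_release(t, k0, q_max=100.0):
    """Cumulative percentage of drug released after time t (days) under
    zero-order kinetics Q = k0 * t, capped at the loaded amount q_max
    (illustrative; k0 in %/day is a hypothetical input)."""
    return min(k0 * t, q_max)
```

The observation that larger microspheres release MTX more slowly corresponds, in this picture, to a smaller k0 and hence a longer time k0⁻¹·q_max to complete release.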
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
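One common way to realise such a Gaussian-optics launch is to sample each photon's start position on the lens plane and aim it at a point drawn from a diffraction-limited Gaussian spot in the focal plane, so the converging bundle reproduces the desired waist. The sketch below illustrates that general idea only, not the authors' exact scheme; the beam widths and geometry are illustrative:

```python
import numpy as np

def launch_gaussian_focus(n, w_lens, w0, focus_depth, rng):
    """Sample n photon start positions on the lens plane (z = 0) and
    unit direction vectors aimed at points drawn from a Gaussian spot
    of 1/e^2 radius w0 at z = focus_depth. Gaussian irradiance
    ~ exp(-2 r^2 / w^2) corresponds to x, y ~ N(0, w/2)."""
    start = np.column_stack([rng.normal(0.0, w_lens / 2.0, n),
                             rng.normal(0.0, w_lens / 2.0, n),
                             np.zeros(n)])
    target = np.column_stack([rng.normal(0.0, w0 / 2.0, n),
                              rng.normal(0.0, w0 / 2.0, n),
                              np.full(n, focus_depth)])
    d = target - start
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # normalise directions
    return start, d
```

Propagating these rays ballistically to z = focus_depth reproduces the prescribed Gaussian spot rather than a geometric point focus, which is the behaviour the technique above restores near the focal plane.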
Efficient volumetric estimation from plenoptic data
NASA Astrophysics Data System (ADS)
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and the greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
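The efficiency argument above, that deconvolution reduces reconstruction to a handful of FFTs, can be illustrated with a 2-D regularised inverse (Wiener-style) filter. This is a generic sketch of FFT-based deconvolution, not the paper's 3-D algorithm:

```python
import numpy as np

def wiener_deconvolve(image, psf, eps=1e-3):
    """FFT-based regularised inverse filtering (Wiener-style).
    Divides the image spectrum by the PSF spectrum, with eps damping
    frequencies where the PSF response is weak. The core cost is a
    few FFTs, versus many iterations for schemes such as MART."""
    H = np.fft.fft2(psf, s=image.shape)   # zero-padded PSF spectrum
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

On noise-free synthetic data the filter recovers a sparse particle-like field almost exactly; in practice eps trades noise amplification against residual blur.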
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Audrey Noreen
2006-01-01
Single Particle Aerosol Mass Spectrometry (SPAMS) was evaluated as a real-time detection technique for single particles of high explosives. Dual-polarity time-of-flight mass spectra were obtained for samples of 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitro-1,3,5-triazinane (RDX), and pentaerythritol tetranitrate (PETN); peaks indicative of each compound were identified. The composite explosives Comp B, Semtex 1A, and Semtex 1H were also analyzed, and peaks due to the explosive components of each sample were present in each spectrum. Mass spectral variability with laser fluence is discussed. The ability of the SPAMS system to identify explosive components in a single complex explosive particle (~1 pg) without the need for consumables is demonstrated. SPAMS was also applied to the detection of Chemical Warfare Agent (CWA) simulants in the liquid and vapor phases. Liquid simulants for sarin, cyclosarin, tabun, and VX were analyzed; peaks indicative of each simulant were identified. Vapor-phase CWA simulants were adsorbed onto alumina, silica, zeolite, activated carbon, and metal powders, which were directly analyzed using SPAMS. The use of metal powders as adsorbent materials was especially useful in the analysis of triethyl phosphate (TEP), a VX simulant, which was undetectable using SPAMS in the liquid phase. The capability of SPAMS to detect high explosives and CWA simulants using one set of operational conditions is established.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents a formidable mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Numerical simulation of magnetic nanoparticles targeting in a bifurcation vessel
NASA Astrophysics Data System (ADS)
Larimi, M. M.; Ramiar, A.; Ranjbar, A. A.
2014-08-01
Guiding magnetic iron oxide nanoparticles to their target with the help of an external magnetic field is the principle behind the development of superparamagnetic iron oxide nanoparticles (SPIONs) as novel drug delivery vehicles. The present paper studies the Magnetic Drug Targeting (MDT) technique by particle tracking in the presence of a magnetic field in a bifurcation vessel. The blood flow in the bifurcation is considered incompressible, unsteady, and Newtonian. The flow analysis applies the time-dependent, two-dimensional, incompressible Navier-Stokes equations for Newtonian fluids. Lagrangian particle tracking is performed to estimate particle behavior under the influence of magnetic field gradients imposed along the bifurcation. According to the results, the magnetic field increased the volume fraction of particles in the target region, but in vessels with high Reynolds numbers the efficiency of the MDT technique is very low. The results also showed that in bifurcation vessels with lower angles, wall shear stress is higher and, consequently, the risk of vessel wall rupture increases.
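The Lagrangian tracking step can be sketched as a forward-Euler integration of one-way-coupled particle motion under Stokes drag plus an imposed magnetic body force. All names, parameter values, and the toy flow field below are illustrative assumptions, not the paper's solver:

```python
import numpy as np

def track_particle(x0, v0, fluid_vel, mag_force, tau_p, dt, steps):
    """Forward-Euler Lagrangian tracking of one particle:

        dv/dt = (u_f(x) - v) / tau_p + F_mag(x) / m

    with tau_p the particle response time and F_mag/m folded into mag_force.
    """
    x, v = np.array(x0, float), np.array(v0, float)
    path = [x.copy()]
    for _ in range(steps):
        a = (fluid_vel(x) - v) / tau_p + mag_force(x)
        v = v + dt * a
        x = x + dt * v
        path.append(x.copy())
    return np.array(path)

# Toy case: uniform axial flow, constant transverse magnetic pull toward y < 0.
path = track_particle(
    x0=[0.0, 0.0], v0=[1.0, 0.0],
    fluid_vel=lambda x: np.array([1.0, 0.0]),
    mag_force=lambda x: np.array([0.0, -5.0]),   # illustrative magnitude
    tau_p=0.01, dt=1e-3, steps=200)
```

The particle drifts toward the "wall" at a terminal transverse velocity of roughly `tau_p` times the magnetic acceleration, which is why targeting efficiency drops when the axial (Reynolds-number-setting) flow is fast relative to this drift.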
An integrated system for the online monitoring of particle therapy treatment accuracy
NASA Astrophysics Data System (ADS)
Fiorina, E.; INSIDE Collaboration
2016-07-01
Quality assurance in hadrontherapy remains an open issue that can be addressed with reliable monitoring of treatment accuracy. The INSIDE (INnovative SolutIons for DosimEtry in hadrontherapy) project aims to develop an integrated online monitoring system based on two dedicated PET panels and a tracking system, called the Dose Profiler. The proposed solution is designed to operate in-beam and to provide immediate feedback on the particle range by acquiring both photons produced by β+ decays and prompt secondary particle signals. Monte Carlo simulations play an important role both in the system development, by confirming the design feasibility, and in system operation, by aiding the interpretation of data. A FLUKA-based integrated simulation was developed taking into account the hadron beam structure, the phantom/patient features, and the PET detector and Dose Profiler specifications. In addition, to reduce simulation time for signal generation in the PET detectors, a two-step technique has been implemented and validated. The first PET modules were tested in May 2015 at the Centro Nazionale Adroterapia Oncologica (CNAO) in Pavia (Italy) with very satisfactory results: in-spill, inter-spill, and post-treatment PET images were reconstructed, and quantitative agreement between data and simulation was found.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducible ordering of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
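The integer-tally idea can be sketched in a few lines: round each term to a fixed-point integer, accumulate exactly in integer arithmetic (which is associative), and convert back once. The scale factor and names below are illustrative, not from the paper:

```python
import math
import random

def integer_tally_sum(values, scale=2 ** 32):
    """Order-independent summation via integer tallies.

    Rounding each double to a fixed-point integer (here, 32 fractional bits)
    trades a bounded amount of accuracy for bit-identical results under any
    summation order, since integer addition is exact and associative.
    """
    total = sum(int(round(v * scale)) for v in values)
    return total / scale

random.seed(0)
vals = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
shuffled = list(vals)
random.shuffle(shuffled)

# Integer tallies agree exactly across orders, while a naive float running
# sum generally does not; the result stays close to the exact (fsum) value.
tally_a = integer_tally_sum(vals)
tally_b = integer_tally_sum(shuffled)
exact = math.fsum(vals)
```

The per-term rounding error is at most 0.5/scale, so the total drift is bounded by n/(2·scale), which is the "lost accuracy" the abstract refers to.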
Surge Flow in a Centrifugal Compressor Measured by Digital Particle Image Velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2000-01-01
A planar optical velocity measurement technique known as Particle Image Velocimetry (PIV) is being used to study transient events in compressors. In PIV, a pulsed laser light sheet is used to record the positions of particles entrained in a fluid at two instances in time across a planar region of the flow. Determining the recorded particle displacement between exposures yields an instantaneous velocity vector map across the illuminated plane. Detailed flow mappings obtained using PIV in high-speed rotating turbomachinery components are used to improve the accuracy of computational fluid dynamics (CFD) simulations, which in turn, are used to guide advances in state-of-the-art aircraft engine hardware designs.
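The core PIV operation, recovering the particle displacement between two exposures, is typically done by locating the cross-correlation peak of two interrogation windows. A synthetic sketch (window size, seeding density, and shift are made up here):

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Estimate the bulk displacement (dy, dx) between two interrogation
    windows via FFT-based cross-correlation, the core PIV step."""
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(frame_a)) *
                                np.fft.fft2(frame_b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular-correlation peak indices to signed displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic exposures: sparse random "particles", second frame shifted (3, 5).
rng = np.random.default_rng(0)
frame_a = (rng.random((64, 64)) > 0.95).astype(float)
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))
```

Real PIV adds sub-pixel peak interpolation and windowed, overlapping interrogation regions; this sketch only recovers the integer-pixel bulk shift.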
Thermal and Kinetic Modelling of Elastomer Flow—Application to an Extrusion Die
NASA Astrophysics Data System (ADS)
Launay, J.; Allanic, N.; Mousseau, P.; Deterre, R.
2011-05-01
This paper reports and discusses the thermal and kinetic behaviour of elastomer flow inside an extrusion die. The reaction progress through the runner was modelled using a particle tracking technique. The aim is to analyze viscous dissipation phenomena in order to control scorch onset, improve the curing homogeneity of the rubber compound, and reduce the heating time in the mould by tracking the progress of the induction time. The heat and momentum equations were solved in three dimensions with Ansys Polyflow. A particle tracking technique was set up to calculate the reaction progress. Several simulations were performed to highlight the influence of process parameters and geometry modifications on the thermal and cure homogeneity of the rubber compound.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Knoll, Dana Alan
2015-07-31
A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution and small (adaptive) timesteps for particle orbit integrations. The implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt >> 1 and Δx >> λ_D) and requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large-scale simulations. The paper is organized as follows: Vlasov-Maxwell particle-in-cell (PIC) methods for plasmas; explicit, semi-implicit, and implicit time integrations; the implicit PIC formulation (Jacobian-Free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales, with discrete conservation properties for energy, charge, canonical momentum, etc.); some numerical examples; and a summary.
Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G
2014-11-13
A 3D imaging technique using a high-speed binocular stereovision system was developed, in combination with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
Dual-Mode Combustion of Hydrogen in a Mach 5, Continuous-Flow Facility
NASA Technical Reports Server (NTRS)
Goyne, C. P.; McDaniel, J. C.; Quagliaroli, T. M.; Krauss, R. H.; Day, S. W.; Reubush, D. E. (Technical Monitor); McClinton, C. R. (Technical Monitor); Reubush, D. E.
2001-01-01
Results of an experimental and numerical study of a dual-mode scramjet combustor are reported. The experiment consisted of a direct-connect test of a Mach 2 hydrogen-air combustor with a single unswept-ramp fuel injector. The flow stagnation enthalpy simulated a flight Mach number of 5. Measurements were obtained using conventional wall instrumentation and a particle-imaging laser diagnostic technique. The particle imaging was enabled through the development of a new apparatus for seeding fine silicon dioxide particles into the combustor fuel stream. Numerical simulations of the combustor were performed using the GASP code. The modeling, and much of the experimental work, focused on the supersonic combustion mode. Reasonable agreement was observed between experimental and numerical wall pressure distributions. However, the numerical model was unable to predict accurately the effects of combustion on the fuel plume size, penetration, shape, and axial growth.
Numerical Simulations of the Digital Microfluidic Manipulation of Single Microparticles.
Lan, Chuanjin; Pal, Souvik; Li, Zhen; Ma, Yanbao
2015-09-08
Single-cell analysis techniques have been developed as a valuable bioanalytical tool for elucidating cellular heterogeneity at genomic, proteomic, and cellular levels. Cell manipulation is an indispensable process for single-cell analysis. Digital microfluidics (DMF) is an important platform for conducting cell manipulation and single-cell analysis in a high-throughput fashion. However, the manipulation of single cells in DMF has not been quantitatively studied so far. In this article, we investigate the interaction of a single microparticle with a liquid droplet on a flat substrate using numerical simulations. The droplet is driven by capillary force generated from the wettability gradient of the substrate. Considering the Brownian motion of microparticles, we utilize many-body dissipative particle dynamics (MDPD), an off-lattice mesoscopic simulation technique, in this numerical study. The manipulation processes (including pickup, transport, and drop-off) of a single microparticle with a liquid droplet are simulated. Parametric studies are conducted to investigate the effects on the manipulation processes from the droplet size, wettability gradient, wetting properties of the microparticle, and particle-substrate friction coefficients. The numerical results show that the pickup, transport, and drop-off processes can be precisely controlled by these parameters. On the basis of the numerical results, a trap-free delivery of a hydrophobic microparticle to a destination on the substrate is demonstrated in the numerical simulations. The numerical results not only provide a fundamental understanding of interactions among the microparticle, the droplet, and the substrate but also demonstrate a new technique for the trap-free immobilization of single hydrophobic microparticles in the DMF design. Finally, our numerical method also provides a powerful design and optimization tool for the manipulation of microparticles in DMF systems.
Element fracture technique for hypervelocity impact simulation
NASA Astrophysics Data System (ADS)
Zhang, Xiao-tian; Li, Xiao-gang; Liu, Tao; Jia, Guang-hui
2015-05-01
Hypervelocity impact dynamics is the theoretical support of spacecraft shielding against space debris. Numerical simulation has become an important approach for obtaining the ballistic limits of spacecraft shields. Currently, the most widely used algorithm for hypervelocity impact is smoothed particle hydrodynamics (SPH). Although the finite element method (FEM) is widely used in fracture mechanics and low-velocity impacts, the standard FEM can hardly simulate the debris cloud generated by hypervelocity impact. This paper presents a successful application of the node-separation technique for simulating hypervelocity impact debris clouds. The node-separation technique assigns individual, coincident nodes to adjacent elements and applies constraints to the coincident node sets in the modeling step. In the explicit iteration, cracks are generated by releasing the constrained node sets that meet the fracture criterion. Additionally, distorted elements are identified from two aspects, self-piercing and phase change, and are deleted so that the constitutive computation can continue. FEM with the node-separation technique is used for thin-wall hypervelocity impact simulations. The internal structures of the debris cloud in the simulation output are compared with those in test X-ray graphs under different material fracture criteria, showing that the pressure criterion is more appropriate for hypervelocity impact. The internal structures of the debris cloud are also simulated and compared under different thickness-to-diameter ratios (t/D), and the simulation outputs show the same spall pattern as the tests. Finally, the triple-plate impact case is simulated with node-separation FEM.
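The bookkeeping at the heart of node separation, releasing a coincident-node constraint once its interface meets the fracture criterion, can be sketched as follows. The data layout and names are illustrative, not the paper's implementation:

```python
def release_fractured_constraints(constraints, pressure, p_crit):
    """One step of node-separation bookkeeping: coincident node pairs whose
    interface pressure exceeds the fracture criterion are released, turning
    the shared element interface into a pair of free crack faces.

    constraints: {constraint_id: (node_a, node_b)} coincident node pairs
    pressure:    {constraint_id: interface pressure} from the explicit solve
    p_crit:      pressure fracture criterion (the criterion the paper found
                 more appropriate for hypervelocity impact)
    """
    tied, cracks = {}, []
    for cid, pair in constraints.items():
        if pressure[cid] > p_crit:
            cracks.append(pair)       # the two nodes now move independently
        else:
            tied[cid] = pair
    return tied, cracks

# Three element interfaces; only the middle one exceeds the criterion.
constraints = {0: (10, 11), 1: (12, 13), 2: (14, 15)}
pressure = {0: 0.4, 1: 2.1, 2: 0.9}
tied, cracks = release_fractured_constraints(constraints, pressure, p_crit=1.5)
```

In an actual explicit FEM loop, this check would run every iteration, with the released node pairs thereafter integrated as separate degrees of freedom.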
Scalable nuclear density functional theory with Sky3D
NASA Astrophysics Data System (ADS)
Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin
2018-02-01
In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.
Study on the tumor-induced angiogenesis using mathematical models.
Suzuki, Takashi; Minerva, Dhisa; Nishiyama, Koichi; Koshikawa, Naohiko; Chaplain, Mark Andrew Joseph
2018-01-01
We studied angiogenesis using mathematical models describing the dynamics of tip cells. We reviewed the basic ideas of angiogenesis models and the numerical simulation techniques used to produce realistic computer graphics images of sprouting angiogenesis. We examined the classical Anderson-Chaplain model using fundamental concepts of mass transport and chemical reaction, with ECM degradation included. We then constructed two types of numerical schemes, model-faithful and model-driven, in which new simulation techniques are introduced, such as transient probabilities, particle velocities, and Boolean variables. © 2017 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
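The tip-cell dynamics in such models are often discretized as a biased lattice random walk, where a transient probability weights movement up the chemoattractant gradient. A toy sketch of that style of update (the bias value, lattice, and assumed +x gradient are ours, not the Anderson-Chaplain discretization itself):

```python
import random

def tip_cell_walk(steps, bias, rng):
    """Biased lattice random walk for a sprouting tip cell: with transient
    probability `bias` the cell steps up an assumed chemoattractant gradient
    (+x); otherwise it steps in a uniformly random lattice direction."""
    x = y = 0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        if rng.random() < bias:
            dx, dy = (1, 0)            # chemotactic step up the gradient
        else:
            dx, dy = rng.choice(moves)  # undirected random motility
        x += dx
        y += dy
    return x, y

rng = random.Random(42)
x, y = tip_cell_walk(steps=1000, bias=0.7, rng=rng)
```

With a 0.7 bias the walk drifts strongly toward the gradient source while retaining the lateral wandering that produces realistic branching sprout shapes.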
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least-squares formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than traditional PIC, and is free of the artificial forces that are typical of some adaptive PIC techniques.
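The weighted least-squares GFD discretization mentioned above can be sketched in 1D: fit a local quadratic to scattered nodes (weighting nearer nodes more heavily) and read the derivative off the fitted coefficients. The weight function and node layout here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def gfd_second_derivative(x0, xs, fs):
    """Generalized finite difference (GFD) estimate of f''(x0) from scattered
    nodes: fit f ≈ c0 + c1*(x-x0) + c2*(x-x0)^2 by weighted least squares
    and return 2*c2. Works on arbitrary (non-uniform) node sets, which is
    why GFD-style stencils suit adaptively selected particle clouds."""
    d = xs - x0
    w = 1.0 / (np.abs(d) + 1e-12) ** 2     # nearer nodes weighted more
    A = np.stack([np.ones_like(d), d, d * d], axis=1)
    AtW = A.T * w                           # apply weights column-wise
    c = np.linalg.solve(AtW @ A, AtW @ fs)  # normal equations, 3x3 system
    return 2.0 * c[2]

# On scattered, non-uniform nodes the quadratic f(x) = x^2 is recovered exactly.
xs = np.array([0.02, 0.11, 0.37, 0.52, 0.81])
d2 = gfd_second_derivative(0.3, xs, xs ** 2)
```

Because the stencil needs no structured grid, the same fit applies whatever geometric shape the computational domain takes.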
di Stasio, Stefano; Konstandopoulos, Athanasios G; Kostoglou, Margaritis
2002-03-01
The agglomeration kinetics of growing soot generated in an atmospheric diffusion flame are studied in situ by a light scattering technique to infer cluster morphology and size (fractal dimension D_f and radius of gyration R_g). SEM analysis is used as a standard reference to obtain the primary particle size D_P at different residence times. The number N_P of primary particles per aggregate and the number concentration n_A of clusters are evaluated from the measured angular patterns of the scattered light intensity. The major finding is that the kinetics of the coagulation process that leads to the formation of chain-like aggregates of soot primary particles (size 10 to 40 nm) can be described with a constant coagulation kernel β_c,exp = 2.37 × 10⁻⁹ cm³/s (coagulation time constant τ_c ≈ 0.28 ms). This result is in good accord with the Smoluchowski coagulation equation in the free molecular regime, and in contrast with previous studies conducted by invasive (ex situ) techniques, which claimed evidence of coagulation rates in flames much larger than the kinetic theory predictions. Thereafter, a number of numerical simulations are performed to compare with the experimental results on primary particle growth rate and on the process of aggregate reshaping observed by light scattering at later residence times. The restructuring process is conjectured to occur, for reasons not well understood, as a direct consequence of atomic rearrangement in the solid-phase carbon due to the prolonged residence time within the flame. On one side, it is shown that the numerical simulations of the primary size history compare well with the primary sizes from the SEM experiment, with a growth rate constant of the primary diameter of about 1 nm/s.
On the other side, the evolution of aggregate morphology is found to be predictable by the numerical simulations when the onset of a first-order "thermal" restructuring mechanism is assumed to occur in the flame at about 20 ms residence time, leading to aggregates with an asymptotic fractal dimension D_f,∞ ≈ 2.5.
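A constant coagulation kernel makes the cluster-concentration decay analytically tractable, which is what lets a single kernel value summarize the measured kinetics. A sketch under the convention dn/dt = -(β/2)n²; the initial concentration n0 below is an assumed illustrative value, not from the paper:

```python
def n_analytic(n0, beta, t):
    """Cluster concentration under Smoluchowski coagulation with a constant
    kernel beta:  dn/dt = -(beta/2) n^2  =>  n(t) = n0 / (1 + beta*n0*t/2)."""
    return n0 / (1.0 + 0.5 * beta * n0 * t)

def n_euler(n0, beta, t_end, dt):
    """Forward-Euler integration of the same ODE, for cross-checking."""
    n = n0
    for _ in range(int(round(t_end / dt))):
        n -= dt * 0.5 * beta * n * n
    return n

# Kernel of the magnitude reported in the abstract (2.37e-9 cm^3/s); n0 is an
# illustrative soot-scale number concentration, not a measured value.
beta, n0 = 2.37e-9, 3.0e12           # cm^3/s, cm^-3
t = 1.0e-3                           # 1 ms residence time
ratio = n_euler(n0, beta, t, dt=1e-7) / n_analytic(n0, beta, t)
```

Over a millisecond of residence time the cluster concentration drops severalfold, consistent with a sub-millisecond coagulation time constant.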
Fully Coupled Simulation of Lithium Ion Battery Cell Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Murthy, Jayathi Y.; Roberts, Scott Alan
Lithium-ion battery particle-scale (non-porous electrode) simulations applied to resolved electrode geometries predict localized phenomena and can lead to better informed decisions on electrode design and manufacturing. This work develops and implements a fully-coupled finite volume methodology for the simulation of the electrochemical equations in a lithium-ion battery cell. The model implementation is used to investigate 3D battery electrode architectures that offer potential energy density and power density improvements over traditional layer-by-layer particle bed battery geometries. Advancement of micro-scale additive manufacturing techniques has made it possible to fabricate these 3D electrode microarchitectures. A variety of 3D battery electrode geometries are simulated and compared across various battery discharge rates and length scales in order to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density and power density of the 3D battery microstructures are compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle bed electrode designs are observed, and electrode microarchitectures derived from minimal surfaces are shown to be superior. A reduced-order volume-averaged porous electrode theory formulation for these unique 3D batteries is also developed, allowing simulations on the full-battery scale. Electrode concentration gradients are modeled using the diffusion length method, and results for plate and cylinder electrode geometries are compared to particle-scale simulation results. Additionally, effective diffusion lengths that minimize error with respect to particle-scale results for gyroid and Schwarz P electrode microstructures are determined.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared-memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and any particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance, especially when many particles lie in the same cell. We show multi-GPU scalability results for the code and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Haihan; Grassian, Vicki H.; Saraf, Laxmikant V.
2012-11-08
Airborne fly ash from coal combustion may represent a source of bioavailable iron (Fe) in the open ocean. However, few studies have focused on Fe speciation and distribution in coal fly ash. In this study, chemical imaging of fly ash has been performed using a dual-beam FIB/SEM (focused ion beam/scanning electron microscope) system for a better understanding of how simulated atmospheric processing modifies the morphology, chemical composition, and element distributions of individual particles. A novel approach has been applied for cross-sectioning of fly ash specimens with a FIB in order to explore the element distribution within the interior of individual particles. Our results indicate that simulated atmospheric processing causes disintegration of aluminosilicate glass, a dominant material in fly ash particles. Aluminosilicate-phase Fe in the inner core of fly ash particles is more easily mobilized than oxide-phase Fe present as surface aggregates on fly ash spheres. Fe release behavior depends strongly on Fe speciation in aerosol particles. The approach for preparing cross-sectioned specimens described here opens new opportunities for particle microanalysis, particularly with respect to inorganic refractory materials such as fly ash and mineral dust.
Simulation of granular and gas-solid flows using discrete element method
NASA Astrophysics Data System (ADS)
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques and discrete element simulation methods (DES) combined. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase as a continuum with averaged properties and treating the gas-solid flow as constituting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. 
Benchmark 2D fluidized bed simulations have been performed and the results have been shown to compare satisfactorily with those published in the literature. A comprehensive study of the effect of drag correlations on the simulation of fluidized beds has been performed. It has been found that nearly all the drag correlations studied make similar predictions of global quantities such as the time-dependent pressure drop, bubbling frequency, and bubble growth. In conclusion, discrete element simulation has been successfully coupled to a continuum simulation of the gas phase. Though all the results presented in the thesis are two-dimensional, the present implementation is fully three-dimensional and can be used to study 3D fluidized beds to aid in better design and understanding. Other industrially important phenomena such as particle coating and coal gasification, as well as applications in emerging areas such as nano-particle/fluid mixtures, can also be studied through this type of simulation. (Abstract shortened by UMI.)
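The DES treatment of particle-particle contacts described above is commonly built on a soft-sphere spring-dashpot law. The sketch below is a minimal illustration of that idea, not code from the thesis; the stiffness and damping constants are arbitrary placeholder values.

```python
def normal_force(delta, ddelta_dt, k=1.0e4, c=5.0):
    """Soft-sphere DEM normal contact force (linear spring-dashpot).

    delta: overlap between two particles; ddelta_dt: rate of overlap
    growth (approach speed). k and c are illustrative stiffness and
    damping constants. Returns the repulsive force magnitude; there is
    no force when the particles do not overlap.
    """
    if delta <= 0.0:
        return 0.0
    # Spring resists overlap, dashpot dissipates energy during approach.
    return max(0.0, k * delta + c * ddelta_dt)

# Force for a small static overlap of 1e-3 (spring term only).
print(normal_force(1e-3, 0.0))
```

In a full DEM step, this force (plus a tangential friction model) is summed over all contacts of each particle and fed into a Newton-Euler time integration.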
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucinski, A; Mancini-Terracciano, C; Paramatti, R
2016-06-15
Purpose: Development of strategies to monitor range uncertainties is necessary to improve treatment planning in Charged Particle Therapy (CPT) and fully exploit the advantages of ion beams. Our group developed (within the framework of the INSIDE project funded by the Italian research ministry) and is currently building a compact detector, the Dose Profiler (DP), able to backtrack charged secondary particles produced in the patient during irradiation. Furthermore, we are studying a monitoring strategy that exploits charged secondary emission profiles to control the range of the ion beam. Methods: This contribution reports on the DP detector design and construction status. The detector consists of a charged secondary tracker composed of scintillating fiber layers and a LYSO calorimeter for particle energy measurement. The detector layout has been optimized using the FLUKA Monte Carlo (MC) simulation software. The simulation of a 220 MeV carbon beam impinging on a PMMA target has been performed to study the detector response, exploiting previous secondary radiation measurements performed by our group. The emission profile of charged secondary particles was reconstructed by backtracking the particles to their generation point in order to benchmark the DP performance. Results: The DP construction status, including the technological details, will be presented. The feasibility of range monitoring with the DP will be demonstrated by means of MC studies. The correlation of the charged secondary particle emission shape with the position of the Bragg peak (BP) will be shown, as well as the spatial resolution achievable on the BP position estimation (less than 3 mm) in clinical-like conditions. Conclusion: The simulation studies support the feasibility of an accurate range monitoring technique exploiting the charged secondary fragments emitted during particle therapy treatment. The DP experimental tests are foreseen in 2016 at the CNAO particle therapy center in Pavia.
A particle finite element method for machining simulations
NASA Astrophysics Data System (ADS)
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.
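The α-shape test used here to track the topology can be illustrated in 2D: an edge between two particles belongs to the boundary when some circle of radius α passing through both endpoints contains no other particle. The brute-force sketch below is for illustration only (not the paper's implementation, and far from an efficient Delaunay-based version):

```python
import itertools
import math

def alpha_shape_edges(points, alpha):
    """Return edges (i, j) on the 2D alpha-shape boundary of a point set.

    An edge qualifies if a circle of radius ``alpha`` through both
    endpoints contains no other point.
    """
    edges = []
    for i, j in itertools.combinations(range(len(points)), 2):
        (ax, ay), (bx, by) = points[i], points[j]
        d2 = (bx - ax) ** 2 + (by - ay) ** 2
        if d2 > 4 * alpha ** 2:      # endpoints farther apart than 2*alpha
            continue
        d = math.sqrt(d2)
        mx, my = (ax + bx) / 2, (ay + by) / 2
        h = math.sqrt(alpha ** 2 - d2 / 4)   # offset to the circle centres
        nx, ny = -(by - ay) / d, (bx - ax) / d   # unit normal to the edge
        for cx, cy in ((mx + h * nx, my + h * ny), (mx - h * nx, my - h * ny)):
            empty = all(
                (px - cx) ** 2 + (py - cy) ** 2 > alpha ** 2 + 1e-12
                for k, (px, py) in enumerate(points) if k not in (i, j)
            )
            if empty:
                edges.append((i, j))
                break
    return edges

# Unit square corners: a generous alpha recovers the four hull edges
# while the diagonals are rejected.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(sorted(alpha_shape_edges(square, 1.0)))
```

Choosing α is the usual trade-off: too large and concavities (e.g. the forming chip) are bridged over; too small and the body fragments artificially.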
Neutrino flux prediction at MiniBooNE
NASA Astrophysics Data System (ADS)
Aguilar-Arevalo, A. A.; Anderson, C. E.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Linden, S. K.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nguyen, V. T.; Nienaber, P.; Nowak, J. A.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Soderberg, M.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.
2009-04-01
The booster neutrino experiment (MiniBooNE) searches for νμ→νe oscillations using the O(1 GeV) neutrino beam produced by the booster synchrotron at the Fermi National Accelerator Laboratory. The booster delivers protons with 8 GeV kinetic energy (8.89 GeV/c momentum) to a beryllium target, producing neutrinos from the decay of secondary particles in the beam line. We describe the Monte Carlo simulation methods used to estimate the flux of neutrinos from the beam line incident on the MiniBooNE detector for both polarities of the focusing horn. The simulation uses the Geant4 framework for propagating particles, accounting for electromagnetic processes and hadronic interactions in the beam line materials, as well as the decay of particles. The absolute double differential cross sections of pion and kaon production in the simulation have been tuned to match external measurements, as have the hadronic cross sections for nucleons and pions. The statistical precision of the flux predictions is enhanced through reweighting and resampling techniques. Systematic errors in the flux estimation have been determined by varying parameters within their uncertainties, accounting for correlations where appropriate.
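The reweighting technique mentioned for enhancing statistical precision boils down to scaling each simulated event by the ratio of the tuned to the generated production cross section in the relevant kinematic bin. A toy sketch with made-up per-bin values (not MiniBooNE data):

```python
def reweight(events, tuned_xsec, generated_xsec):
    """Attach a weight sigma_tuned/sigma_generated to each simulated event.

    ``events`` is a list of dicts carrying a momentum-bin index 'p';
    the cross-section tables give one (hypothetical) value per bin.
    """
    return [
        dict(ev, weight=tuned_xsec[ev["p"]] / generated_xsec[ev["p"]])
        for ev in events
    ]

generated = {0: 2.0, 1: 4.0}   # production model used when events were generated
tuned = {0: 3.0, 1: 2.0}       # model after tuning to external pion data
events = [{"p": 0}, {"p": 1}, {"p": 1}]
weighted = reweight(events, tuned, generated)
print([ev["weight"] for ev in weighted])   # [1.5, 0.5, 0.5]
```

Histograms filled with these weights then reflect the tuned model without regenerating the (expensive) Geant4 sample.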
Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes
Zhang, Xiaomei; Tajima, Toshiki; Farinella, Deano; ...
2016-10-18
Though wakefield acceleration in crystal channels has been previously proposed, x-ray wakefield acceleration has only recently become a realistic possibility since the invention of the single-cycled optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of conventional plasma-based wakefield acceleration, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high-energy photons at ~O(10-100) MeV. Our simulations confirm such high-energy photon emission, in contrast with that induced by the optical laser driven wakefield scheme. The significantly improved emittance of the energetic electrons is also discussed.
DynamO: a free O(N) general event-driven molecular dynamics simulator.
Bannerman, M N; Sargant, R; Lue, L
2011-11-30
Molecular dynamics algorithms for systems of particles interacting through discrete or "hard" potentials are fundamentally different to the methods for continuous or "soft" potential systems. Although many software packages have been developed for continuous potential systems, software for discrete potential systems based on event-driven algorithms is relatively scarce and specialized. We present DynamO, a general event-driven simulation package, which displays the optimal O(N) asymptotic scaling of the computational cost with the number of particles N, rather than the O(N log N) scaling found in most standard algorithms. DynamO provides reference implementations of the best available event-driven algorithms. These techniques allow the rapid simulation of both complex and large (>10^6 particles) systems for long times. The performance of the program is benchmarked for elastic hard sphere systems, homogeneous cooling and sheared inelastic hard spheres, and equilibrium Lennard-Jones fluids. This software and its documentation are distributed under the GNU General Public License and can be freely downloaded from http://marcusbannerman.co.uk/dynamo. Copyright © 2011 Wiley Periodicals, Inc.
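The core of any event-driven hard-sphere simulator such as DynamO is the analytic computation of the next collision time for a pair of spheres: solve |r + v t| = σ for the smallest positive t, where r and v are the relative position and velocity and σ is the contact distance. A minimal 2D sketch (not DynamO's implementation):

```python
import math

def collision_time(r, v, sigma):
    """Time until two hard spheres with contact distance ``sigma`` collide.

    r, v: relative position and velocity (2D tuples). Returns None when
    the pair never reaches contact.
    """
    b = r[0] * v[0] + r[1] * v[1]      # r.v must be negative to approach
    if b >= 0:
        return None
    v2 = v[0] ** 2 + v[1] ** 2
    r2 = r[0] ** 2 + r[1] ** 2
    disc = b * b - v2 * (r2 - sigma ** 2)
    if disc < 0:
        return None                    # closest approach misses contact
    return (-b - math.sqrt(disc)) / v2

# Head-on approach: a gap of 1 closed at unit speed gives contact at t = 1.
print(collision_time(r=(2.0, 0.0), v=(-1.0, 0.0), sigma=1.0))
```

An event-driven loop keeps such times in a priority queue, jumps straight to the earliest event, and updates only the affected pairs, which is what enables the favorable scaling with N.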
Relationship between saccadic eye movements and formation of the Krukenberg's spindle-a CFD study.
Boushehrian, Hamidreza Hajiani; Abouali, Omid; Jafarpur, Khosrow; Ghaffarieh, Alireza; Ahmadi, Goodarz
2017-09-01
In this research, a series of numerical simulations for evaluating the effects of saccadic eye movement on the aqueous humour (AH) flow field and movement of pigment particles in the anterior chamber (AC) was performed. To predict the flow field of AH in the AC, the unsteady forms of continuity, momentum balance and conservation of energy equations were solved using the dynamic mesh technique for simulating the saccadic motions. Different orientations of the human eye including horizontal, vertical and angles of 10° and 20° were considered. The Lagrangian particle trajectory analysis approach was used to find the trajectories of pigment particles in the eye. Particular attention was given to the relation between the saccadic eye movement and potential formation of Krukenberg's spindle in the eye. The simulation results revealed that the natural convection flow was an effective mechanism for transferring pigment particles from the iris to near the cornea. In addition, the saccadic eye movement was the dominant mechanism for deposition of pigment particles on the cornea, which could lead to the formation of Krukenberg's spindle. The effect of amplitude of saccade motion angle in addition to the orientation of the eye on the formation of Krukenberg's spindle was investigated. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
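Lagrangian particle trajectory analysis of the kind used here integrates a drag law along each particle path. The sketch below is a minimal 1D illustration with Stokes drag only (no gravity, lift, or the aqueous humour flow field of the paper):

```python
def track_particle(u_fluid, tau, v0, x0, dt, steps):
    """Lagrangian tracking with Stokes drag: dv/dt = (u - v) / tau.

    u_fluid: function (x, t) -> local fluid velocity (1D illustration);
    tau: particle response time; explicit Euler time stepping.
    """
    x, v = x0, v0
    for n in range(steps):
        u = u_fluid(x, n * dt)
        v += dt * (u - v) / tau    # drag relaxes v toward the fluid velocity
        x += dt * v
    return x, v

# A pigment particle released at rest in a uniform flow relaxes to u = 1
# after many response times tau.
x, v = track_particle(lambda x, t: 1.0, tau=0.1, v0=0.0, x0=0.0, dt=0.01, steps=200)
print(round(v, 3))   # 1.0
```

In the actual eye-chamber problem, u_fluid would come from the unsteady CFD solution on the dynamic mesh, and the integration would be 3D with gravity included.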
A deep learning-based reconstruction of cosmic ray-induced air showers
NASA Astrophysics Data System (ADS)
Erdmann, M.; Glombitza, J.; Walz, D.
2018-01-01
We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detectors' responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.
1997-06-01
A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to an Eulerian grid and then mapping the computed stress tensors back to particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of types, sizes, and densities of particles, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.
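The particle-to-grid/grid-to-particle mapping at the heart of MP-PIC can be sketched in 1D with linear interpolation weights; the positions, volumes, and grid spacing below are illustrative, not from the paper:

```python
def deposit_to_grid(particles, dx, n_cells):
    """Map particle volumes onto a 1D Eulerian grid by linear weighting."""
    grid = [0.0] * n_cells
    for x, vol in particles:
        i = int(x / dx)          # left node of the cell containing x
        w = x / dx - i           # fractional distance toward the right node
        grid[i] += (1 - w) * vol
        if i + 1 < n_cells:
            grid[i + 1] += w * vol
    return grid

def interp_from_grid(grid, dx, x):
    """Map a grid field (e.g. a stress tensor component) back to position x."""
    i = int(x / dx)
    w = x / dx - i
    right = grid[i + 1] if i + 1 < len(grid) else 0.0
    return (1 - w) * grid[i] + w * right

particles = [(0.25, 1.0), (0.75, 2.0)]   # (position, volume), dx = 0.5
grid = deposit_to_grid(particles, 0.5, 3)
print(grid)                               # [0.5, 1.5, 1.0]
```

The round trip (deposit, compute the interparticle stress on the grid, interpolate the stress gradient back to the particles) is what lets MP-PIC handle dense packings without pairwise collision detection.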
Reconstructing particle masses in events with displaced vertices
NASA Astrophysics Data System (ADS)
Cottin, Giovanna
2018-03-01
We propose a simple way to extract particle masses given a displaced vertex signature in event topologies where two long-lived mother particles decay to visible particles and an invisible daughter. The mother could be either charged or neutral, and the neutral daughter could correspond to a dark matter particle in different models. The method extracts the parent and daughter masses by using on-shell conditions and energy-momentum conservation, in addition to the displaced decay positions of the parents, which allows the kinematic equations to be solved fully on an event-by-event basis. We show the validity of the method by means of simulations including detector effects. If displaced events are seen in discovery searches at the Large Hadron Collider (LHC), this technique can be applied.
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
NASA Astrophysics Data System (ADS)
Kim, Evelina B.
Experimentally, liquid crystals (LC) can be used as the basis for optical biomolecular sensors that rely on LC ordering. Recently, the use of LC as a reporting medium has been extended to investigations of molecular scale processes at lipid-laden aqueous-LC interfaces and at biological cell membranes. In this thesis, we present two related studies where liquid crystals are modelled at different length scales. We examine (a) the behavior of nanoscopic colloidal particles in LC systems, using Monte Carlo (MC) molecular simulations and a mesoscopic dynamic field theory (DyFT); and (b) specific interactions of two types of mesogens with a model phospholipid bilayer, using atomistic molecular dynamics (MD) at the Å-nm scale. In (a), we consider colloidal particles suspended in a LC, confined between two walls. We calculate the colloid-substrate and colloid-colloid potentials of mean force (PMF). For the MC simulations, we developed a new technique (ExEDOS, or Expanded Ensemble Density Of States) that ensures good sampling of phase space without prior knowledge of the energy landscape of the system. Both results, simulation and DyFT, indicate a repulsive force acting between a colloid and a wall. In contrast, both techniques indicate an overall colloid-colloid attraction and predict a new topology of the disclination lines that arises when the particles approach each other. In (b), we find that mesogens (pentylcyanobiphenyl [5CB] or difluorophenyl-pentylbicyclohexyl [5CF]) preferentially partition from the aqueous phase into a dipalmitoylphosphatidylcholine (DPPC) bilayer. We find highly favorable free energy differences for partitioning (-18 kBT for 5CB, -26 kBT for 5CF). We also simulated fully hydrated bilayers with embedded 5CB or 5CF at concentrations used in recent experiments (6 mol% and 20 mol%). The presence of mesogens in the bilayer enhances the order of lipid acyl tails and changes the spatial and orientational arrangement of lipid headgroup atoms.
A stronger spatial correlation and larger ranges of molecular orientations and positions are observed for 5CB molecules compared to 5CF. At the same time, 5CF molecules were found to bind more strongly to lipid headgroups, thereby slowing the lateral motion of lipid molecules.
NASA Astrophysics Data System (ADS)
Goel, V.; Mishra, S.; Ahlawat, A. S.; Sharma, C.; Kotnala, R. K.
2017-12-01
Aerosol particles are generally considered as chemically homogeneous spheres in the retrieval techniques of ground- and space-borne observations, which is not an accurate approach and can lead to erroneous observations. For better simulation of the optical and radiative properties of aerosols, a good knowledge of aerosol morphology, chemical composition, and internal structure is essential. To date, many studies have reported the morphology and chemical composition of particles, but very few provide the internal structure and spatial distribution of different chemical species within the particle. Research on the effect of particle internal structure and its contribution to particle optics is extremely limited. In the present work, we characterize PM10 particles collected from a typical arid (the Thar Desert, Rajasthan, India) and a typical urban (New Delhi, India) environment using microscopic techniques. The particles were milled several times to investigate their internal structure. The EDS (Energy Dispersive X-ray Spectroscopy) spectra were recorded after each milling to check the variation in the chemical composition. In the arid environment, an Fe-, Ca-, C-, Al-, and Mg-rich shell was observed over a Si-rich particle, whereas in the urban environment, a shell of Hg, Ag, C, and N was observed over a Cu-rich particle. Based on the observations, different model shapes [homogeneous sphere and spheroid; heterogeneous sphere and spheroid; core shell] have been considered for assessing the uncertainties associated with the routine modeling of optical properties, where the volume-equivalent homogeneous sphere approximation is used. The details will be discussed during the presentation.
Development of metamodels for predicting aerosol dispersion in ventilated spaces
NASA Astrophysics Data System (ADS)
Hoque, Shamia; Farouk, Bakhtier; Haas, Charles N.
2011-04-01
Artificial neural network (ANN) based metamodels were developed to describe the relationship between the design variables and their effects on the dispersion of aerosols in a ventilated space. A Hammersley sequence sampling (HSS) technique was employed to efficiently explore the multi-parameter design space and to build numerical simulation scenarios. A detailed computational fluid dynamics (CFD) model was applied to simulate these scenarios. The results derived from the CFD simulations were used to train and test the metamodels. Feed-forward ANNs were developed to map the relationship between the inputs and the outputs. The predictive ability of the neural network based metamodels was compared to linear and quadratic metamodels derived from the same CFD simulation results. The ANN based metamodels performed well in predicting the independent data sets, including data generated at the boundaries. Sensitivity analysis showed that the ratio of particle tracking time to residence time and the location of the inlet and outlet relative to the height of the room had more impact than the other dimensionless groups on particle behavior.
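Hammersley sequence sampling (HSS) is straightforward to implement: the first coordinate of point k out of N is k/N, and each remaining coordinate is the radical inverse of k in a successive prime base. A minimal sketch:

```python
def radical_inverse(k, base):
    """Van der Corput radical inverse of integer k in the given base."""
    inv, denom = 0.0, 1.0
    while k > 0:
        denom *= base
        inv += (k % base) / denom   # reflect digits about the radix point
        k //= base
    return inv

def hammersley(n, primes=(2, 3)):
    """n-point Hammersley set in len(primes)+1 dimensions on [0, 1)^d."""
    return [
        (k / n,) + tuple(radical_inverse(k, p) for p in primes)
        for k in range(n)
    ]

pts = hammersley(4, primes=(2,))
print(pts)   # [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]
```

Because the points fill the unit hypercube with low discrepancy, far fewer CFD scenarios are needed to cover the design space than with uniform random sampling; the samples are then rescaled to the physical parameter ranges.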
Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere
NASA Technical Reports Server (NTRS)
Fok, Mei-Ching H.
2011-01-01
Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves the 3D distribution in space, energy and pitch-angle of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have been carried on past and current satellite missions. The global morphology of energetic ions was revealed by the observed ENA images. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss. We examine the evolution of ion energy and pitch-angle distribution during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINEMA mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, H.L.; Sides, S.W.; Novotny, M.A.
1996-12-31
Recently, experimental techniques such as magnetic force microscopy (MFM) have enabled the magnetic state of individual sub-micron particles to be resolved. Motivated by these experimental developments, the authors use Monte Carlo simulations of two-dimensional kinetic Ising ferromagnets to study the magnetic relaxation in a negative applied field of a grain with an initial magnetization m_0 = +1. They use classical droplet theory to predict the functional forms of some quantities which can be observed by MFM. An example is the probability that the magnetization is positive, which is a function of time, field, grain size, and grain dimensionality. The qualitative agreement between experiments and their simulations of switching in individual single-domain ferromagnets indicates that the switching mechanism in such particles may involve local nucleation and subsequent growth of droplets of the stable phase.
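A single-spin-flip Metropolis simulation of the kinetic Ising grain described above fits in a few lines. The sketch below uses illustrative parameters (J = 1, a small lattice, a fairly strong field), starts from m_0 = +1, and records the magnetization relaxing in a negative field:

```python
import math
import random

def switching_trajectory(L, T, H, sweeps, seed=1):
    """Metropolis kinetic Ising model: an L*L grain of up spins (m0 = +1)
    relaxing in a negative field H with coupling J = 1 and periodic
    boundaries. Returns the magnetization after each sweep."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    traj = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * (nb + H)   # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        traj.append(sum(map(sum, spins)) / (L * L))
    return traj

m = switching_trajectory(L=16, T=1.5, H=-3.0, sweeps=30)
print(m[0], m[-1])   # magnetization decays from near +1 toward -1
```

In the droplet-theory picture, the waiting time before the decay is set by the nucleation of a critical droplet of the stable (down) phase, after which growth completes the switch.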
Nguyen, Phong Thanh; Abbosh, Amin; Crozier, Stuart
2017-06-01
In this paper, a technique for noninvasive microwave hyperthermia treatment of breast cancer is presented. In the proposed technique, microwave hyperthermia of patient-specific breast models is implemented using a three-dimensional (3-D) antenna array based on differential beam-steering subarrays to locally raise the temperature of the tumor to therapeutic values while keeping healthy tissue at normal body temperature. This approach is realized by optimizing the excitations (phases and amplitudes) of the antenna elements using the global optimization method particle swarm optimization. The antenna excitation phases are optimized to maximize the power at the tumor, whereas the amplitudes are optimized to accomplish the required temperature at the tumor. During the optimization, the technique ensures that no hotspots exist in healthy tissue. To implement the technique, a combination of linked electromagnetic and thermal analyses using MATLAB and a full-wave electromagnetic simulator is conducted. The technique is tested at 4.2 GHz, which is a compromise between the required power penetration and focusing, in a realistic simulation environment built using a 3-D antenna array of 4 × 6 unidirectional antenna elements. The presented results on very dense 3-D breast models, which have realistic dielectric and thermal properties, validate the capability of the proposed technique in focusing power at the exact location and volume of the tumor, even in the challenging cases where tumors are embedded in glands. Moreover, the models indicate the capability of the technique in dealing with tumors at different on- and off-axis locations within the breast with high efficiency in using the microwave power.
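Particle swarm optimization of the kind used to tune the antenna excitations can be sketched generically. The toy quadratic objective below stands in for the power-focusing metric, which in the paper requires a full-wave electromagnetic simulation per evaluation; the swarm parameters are standard textbook values, not the authors' settings:

```python
import random

def pso(f, bounds, n_particles=20, iters=60, seed=3):
    """Minimal global-best particle swarm optimizer (minimization).

    f: objective; bounds: (lo, hi) per dimension for the initial swarm.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy focusing objective with its optimum at (1, -2).
best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                [(-5, 5), (-5, 5)])
print(best, val)
```

In the hyperthermia application, each swarm position would encode the phases and amplitudes of the 4 × 6 array, and a penalty term would enforce the no-hotspot constraint in healthy tissue.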
Combining neural networks and signed particles to simulate quantum systems more efficiently
NASA Astrophysics Data System (ADS)
Sellier, Jean Michel
2018-04-01
Recently, a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact, it is able to simulate many-body systems in a time-dependent fashion on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique, based on an appropriately tailored neural network combined with the signed particle formalism, which drastically reduces the computation time and memory required to simulate time-dependent quantum systems. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations drops radically, since the kernel no longer needs to be stored: it is computed by the neural network itself, only on the cells of the (discretized) phase space which are occupied by particles. As is clearly shown in the final part of this paper, not only does this novel approach drastically reduce the computational time, it also remains accurate. The author believes this work opens the way towards the effective design of quantum devices, with considerable practical implications.
Sparse grid techniques for particle-in-cell schemes
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.
2017-02-01
We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
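The combination technique referred to above evaluates the problem on a family of cheap anisotropic grids and sums the results with +1/-1 weights: grids with level sum |l| = n enter with weight +1 and those with |l| = n - 1 with weight -1. The sketch below applies the idea to plain bilinear interpolation (not a PIC field solve); for a separable test function the combination reproduces the exact value:

```python
def interp_on_grid(f, lx, ly, x, y):
    """Bilinear interpolation of f sampled on a grid of 2**lx x 2**ly cells
    over the unit square."""
    nx, ny = 2 ** lx, 2 ** ly
    i = min(int(x * nx), nx - 1)
    j = min(int(y * ny), ny - 1)
    tx, ty = x * nx - i, y * ny - j           # local cell coordinates
    g = lambda a, b: f(a / nx, b / ny)        # sample f at grid nodes
    return ((1 - tx) * (1 - ty) * g(i, j) + tx * (1 - ty) * g(i + 1, j)
            + (1 - tx) * ty * g(i, j + 1) + tx * ty * g(i + 1, j + 1))

def combination(f, n, x, y):
    """Sparse-grid combination technique in 2D: sum anisotropic grids with
    |l| = n (weight +1) and subtract those with |l| = n - 1 (weight -1)."""
    val = sum(interp_on_grid(f, i, n - i, x, y) for i in range(n + 1))
    val -= sum(interp_on_grid(f, i, n - 1 - i, x, y) for i in range(n))
    return val

f = lambda x, y: x * y
print(combination(f, 4, 0.3, 0.7))   # reproduces 0.21 for this separable f
```

The payoff is in the point count: the component grids together contain far fewer nodes than the single full grid of level (n, n), which is what permits coarser effective cells and hence fewer PIC particles for the same statistical noise.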
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Xing, Jian
2017-10-01
In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for the recovery of particle size distributions (PSDs) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which can overcome the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, our proposed algorithm produces more accurate and robust inversion results while taking almost comparable CPU time. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an excellent inversion technique for reliable and efficient actual measurement of PSDs.
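The pattern search (PS) half of such a hybrid can be illustrated by a simple compass search: poll ± step along each coordinate, move to any improving point, and halve the step when no poll improves. The quadratic objective below is a stand-in for the extinction-spectrum misfit, and the starting point plays the role of a coarse ABC estimate:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=10000):
    """Compass pattern search (local refinement, minimization).

    Polls +/- step along each coordinate; halves the step when stuck.
    """
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                trial = x[:]
                trial[d] += s
                ft = f(trial)
                if ft < fx:            # accept the first improving poll
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step /= 2                  # refine the mesh
    return x, fx

# Refine a coarse estimate of two distribution parameters (optimum at (2, 0.5)).
x, fx = pattern_search(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2, [0.0, 0.0])
print(x, fx)
```

The hybridization logic is then: the ABC swarm explores globally, and its best candidate seeds the derivative-free PS refinement, which supplies the final accuracy at little extra cost.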
NASA Astrophysics Data System (ADS)
Cetinbas, Firat C.; Ahluwalia, Rajesh K.; Kariuki, Nancy; De Andrade, Vincent; Fongalland, Dash; Smith, Linda; Sharman, Jonathan; Ferreira, Paulo; Rasouli, Somaye; Myers, Deborah J.
2017-03-01
The cost and performance of proton exchange membrane fuel cells strongly depend on the cathode electrode due to usage of expensive platinum (Pt) group metal catalyst and sluggish reaction kinetics. Development of low Pt content high performance cathodes requires comprehensive understanding of the electrode microstructure. In this study, a new approach is presented to characterize the detailed cathode electrode microstructure from nm to μm length scales by combining information from different experimental techniques. In this context, nano-scale X-ray computed tomography (nano-CT) is performed to extract the secondary pore space of the electrode. Transmission electron microscopy (TEM) is employed to determine primary C particle and Pt particle size distributions. X-ray scattering, with its ability to provide size distributions of orders of magnitude more particles than TEM, is used to confirm the TEM-determined size distributions. The number of primary pores that cannot be resolved by nano-CT is approximated using mercury intrusion porosimetry. An algorithm is developed to incorporate all these experimental data in one geometric representation. Upon validation of pore size distribution against gas adsorption and mercury intrusion porosimetry data, reconstructed ionomer size distribution is reported. In addition, transport related characteristics and effective properties are computed by performing simulations on the hybrid microstructure.
A particle-particle collision strategy for arbitrarily shaped particles at low Stokes numbers
NASA Astrophysics Data System (ADS)
Daghooghi, Mohsen; Borazjani, Iman
2016-11-01
We present a collision strategy for particles of any general shape at low Stokes numbers. Conventional collision strategies rely upon a short-range repulsion force along the particles' centerline, which is a suitable choice for spherical particles but may not work for complex-shaped particles. In the present method, upon the collision of two particles, the kinematics of the particles are modified so that the particles have zero relative velocity toward each other along the direction in which they have the minimum distance. The advantage of this novel technique is that it is guaranteed to prevent particles from overlapping without the unrealistic bounce-back at low Stokes numbers that may occur if repulsive forces are used. This model is used to simulate sedimentation of many particles in a vertical channel and suspensions of non-spherical particles under simple shear flow. This work was supported by the American Chemical Society (ACS) Petroleum Research Fund (PRF) Grant Number 53099-DNI9. The computational resources were partly provided by the Center for Computational Research (CCR) at the University at Buffalo.
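The core kinematic idea, zeroing the relative approach velocity along the minimum-distance direction while conserving momentum, can be sketched for two point particles. This is our own minimal reconstruction; the paper's method computes the minimum-distance direction between arbitrary shapes, not the centre-to-centre line used here.

```python
import math

def resolve_collision(x1, v1, x2, v2, m1=1.0, m2=1.0):
    """Remove the relative approach velocity along the minimum-distance
    direction (here simply the centre-to-centre line of two point masses),
    distributing the correction so total linear momentum is unchanged."""
    n = [b - a for a, b in zip(x1, x2)]
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]                       # unit vector from 1 to 2
    vrel = sum((va - vb) * c for va, vb, c in zip(v1, v2, n))
    if vrel <= 0:                                   # already separating
        return v1, v2
    # Mass-weighted corrections: momentum change of 1 cancels that of 2.
    dv1 = [-vrel * m2 / (m1 + m2) * c for c in n]
    dv2 = [vrel * m1 / (m1 + m2) * c for c in n]
    return ([a + b for a, b in zip(v1, dv1)],
            [a + b for a, b in zip(v2, dv2)])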
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang
A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code with a first-order convergence rate was updated so that the new code can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotating and rotating spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature and to develop new drag force, lift force, and torque formulas. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulation results to be closely applicable to real processes over the entire range of packing fractions and both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers, and it is essentially unchanged as the angle of the rotating axis varies.
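The multi-direct forcing idea above, iteratively correcting a body force until the velocity interpolated at the Lagrangian markers meets the no-slip target, can be caricatured on a 1D periodic grid. This is our toy sketch: linear interpolation stands in for the regularized delta function, and all names are ours.

```python
def multi_direct_forcing(u, markers, u_desired, dt=1.0, n_iter=5):
    """Iteratively add a Lagrangian force so the interpolated fluid
    velocity at each marker approaches the desired (no-slip) value.
    u: velocities on a periodic 1D grid; markers: marker positions in
    grid units; u_desired: target velocity at each marker."""
    n = len(u)
    for _ in range(n_iter):
        for x_m, u_d in zip(markers, u_desired):
            i = int(x_m) % n
            w = x_m - int(x_m)                          # linear weight to node i+1
            u_m = (1 - w) * u[i] + w * u[(i + 1) % n]   # interpolate to marker
            f = (u_d - u_m) / dt                        # direct forcing correction
            u[i] += (1 - w) * f * dt                    # spread force back to grid
            u[(i + 1) % n] += w * f * dt
    return u
```

A single forcing pass leaves a residual because spreading and interpolation do not commute; repeating the correction (the "multi" in multi-direct forcing) drives the marker velocity toward the target geometrically.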
Inference of Ice Cloud Properties from High-spectral Resolution Infrared Observations. Appendix 4
NASA Technical Reports Server (NTRS)
Huang, Hung-Lung; Yang, Ping; Wei, Heli; Baum, Bryan A.; Hu, Yongxiang; Antonelli, Paolo; Ackerman, Steven A.
2005-01-01
The theoretical basis is explored for inferring the microphysical properties of ice crystals from high-spectral resolution infrared observations. A radiative transfer model is employed to simulate spectral radiances to address relevant issues. The extinction and absorption efficiencies of individual ice crystals, assumed to be hexagonal columns for large particles and droxtals for small particles, are computed from a combination of the finite-difference time-domain (FDTD) technique and a composite method. The corresponding phase functions are computed from a combination of FDTD and an improved geometric optics method (IGOM). Bulk scattering properties are derived by averaging the single-scattering properties of individual particles over 30 particle size distributions developed from in situ measurements and over four additional analytical Gamma size distributions for small particles. The non-sphericity of ice crystals is shown to have a significant impact on the radiative signatures in the infrared (IR) spectrum; the spherical particle approximation for inferring ice cloud properties may result in an overestimation of the optical thickness and an inaccurate retrieval of effective particle size. Furthermore, we show that the error associated with the use of the Henyey-Greenstein phase function can be as large as 1 K in terms of brightness temperature for large effective particle sizes at some strongly scattering wavenumbers. For small particles, the difference between the two phase functions is much less, with brightness temperatures generally differing by less than 0.4 K. The simulations undertaken in this study show that the slope of the IR brightness temperature spectrum between 790-960 cm^-1 is sensitive to the effective particle size. Furthermore, a strong sensitivity of IR brightness temperature to cloud optical thickness is noted within the 1050-1250 cm^-1 region.
Based on this spectral feature, a technique is presented for the simultaneous retrieval of the visible optical thickness and effective particle size from high spectral resolution infrared data under ice-cloudy conditions. The error analysis shows that the uncertainty of the retrieved optical thickness and effective particle size has a small range of variation. The error in retrieving particle size in conjunction with an uncertainty of 5 K in cloud temperature, or a surface temperature uncertainty of 2.5 K, is less than 15%. The corresponding uncertainty in optical thickness is within 5-20%, depending on the value of cloud optical thickness. The applicability of the technique is demonstrated using the aircraft-based High-resolution Interferometer Sounder (HIS) data from the Subsonic Aircraft: Contrail and Cloud Effects Special Study (SUCCESS) in 1996 and the First ISCCP Regional Experiment - Arctic Clouds Experiment (FIRE-ACE) in 1998.
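The brightness temperature spectra discussed above rest on the conversion between spectral radiance and the equivalent blackbody temperature. A minimal sketch using the standard radiation constants in wavenumber form (constants from standard remote-sensing usage, not taken from this paper):

```python
import math

C1 = 1.191042e-5   # first radiation constant, mW / (m^2 sr cm^-4)
C2 = 1.4387752     # second radiation constant, K cm

def planck_radiance(wavenumber, temperature):
    """Planck spectral radiance at a wavenumber (cm^-1) and temperature (K)."""
    return C1 * wavenumber ** 3 / (math.exp(C2 * wavenumber / temperature) - 1.0)

def brightness_temperature(wavenumber, radiance):
    """Invert the Planck function: temperature of a blackbody emitting
    the given radiance at this wavenumber."""
    return C2 * wavenumber / math.log(1.0 + C1 * wavenumber ** 3 / radiance)
```

The slope sensitivity quoted for 790-960 cm^-1 is a statement about how this brightness temperature, evaluated across that band, varies with effective particle size.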
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Burt, Jonathan M.
2016-01-01
There are many flow fields that span a wide range of length scales, where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate the effects of physical diffusion into collision limiter calculations, making the low Knudsen number regime normally reserved for CFD more tractable for an all-particle technique. The original work was derived for a single-species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients; however, a more rigorous treatment of the mixture viscosity is applied. The original work also considered internal energy non-equilibrium, and this is extended in the current work to chemical non-equilibrium.
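One standard "rigorous" mixture-viscosity treatment is Wilke's semi-empirical rule; whether this paper uses exactly this rule is not stated in the abstract, so the sketch below is illustrative only.

```python
import math

def wilke_mixture_viscosity(x, mu, M):
    """Wilke's mixing rule: mu_mix = sum_i x_i*mu_i / sum_j x_j*phi_ij,
    with x mole fractions, mu pure-species viscosities (Pa s), and
    M molar masses (kg/mol)."""
    n = len(x)
    mu_mix = 0.0
    for i in range(n):
        denom = 0.0
        for j in range(n):
            phi = ((1.0 + math.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2
                   / math.sqrt(8.0 * (1.0 + M[i] / M[j])))
            denom += x[j] * phi
        mu_mix += x[i] * mu[i] / denom
    return mu_mix
```

The rule reduces to the pure-species viscosity in the single-species limit, which is the consistency check a multi-species extension of a single-species scheme must satisfy.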
RT DDA: A hybrid method for predicting the scattering properties by densely packed media
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D.
2017-12-01
The most accurate approaches to predicting the scattering properties of particulate media are based on exact solutions of Maxwell's equations (MEs), such as the T-matrix and discrete dipole methods. Applying these techniques to optically thick targets is a challenging problem due to the large-scale computations involved, and they are usually replaced by phenomenological radiative transfer (RT) methods. On the other hand, the RT technique is of questionable validity in media with large particle packing densities. In recent works, we used numerically exact ME solvers to examine the effects of particle concentration on the polarized reflection properties of plane-parallel random media. The simulations were performed for plane-parallel layers of wavelength-sized spherical particles, and the results were compared with RT predictions. We have shown that RT results monotonically converge to the exact solution as the particle volume fraction becomes smaller, and one observes a nearly perfect fit for packing densities of 2%-5%. This study describes a hybrid technique composed of exact and numerical scalar RT methods. The exact methodology in this work is the plane-parallel discrete dipole approximation, whereas the numerical method is based on the adding and doubling method. This approach not only decreases the computational time owing to the RT component but also includes interference and multiple-scattering effects, so it may be applicable to large particle density conditions.
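The adding/doubling component can be sketched in its simplest scalar form: two homogeneous layers are combined through the geometric series of inter-layer reflections, and a thin starting layer is repeatedly doubled. This is a minimal azimuthally averaged, unpolarized caricature of the method, not the paper's implementation.

```python
def add_layers(Ra, Ta, Rb, Tb):
    """Combine two symmetric layers (scalar reflectance/transmittance);
    the factor 1/(1 - Ra*Rb) sums all orders of back-and-forth
    reflections between the layers."""
    denom = 1.0 - Ra * Rb
    T = Ta * Tb / denom
    R = Ra + Ta * Rb * Ta / denom
    return R, T

def doubling(r0, t0, n_doublings):
    """Start from an optically thin layer (r0, t0) and double its
    thickness n_doublings times, yielding the thick-layer R and T."""
    R, T = r0, t0
    for _ in range(n_doublings):
        R, T = add_layers(R, T, R, T)
    return R, T
```

For a non-absorbing thin layer (r0 + t0 = 1) the scheme conserves energy exactly at every doubling, which makes a convenient sanity check.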
Passive Microfluidic device for Sub Millisecond Mixing
McMahon, Jay; Mohamed, Hisham; Barnard, David; Shaikh, Tanvir R.; Mannella, Carmen A.; Wagenknecht, Terence; Lu, Toh-Ming
2009-01-01
We report the investigation of a novel microfluidic mixing device to achieve submillisecond mixing. The micromixer combines two fluid streams of several microliters per second into a mixing compartment integrated with two T-type premixers and 4 butterfly-shaped in-channel mixing elements. We have employed three-dimensional fluidic simulations to evaluate the mixing efficiency, and have constructed physical devices utilizing conventional microfabrication techniques. The simulation indicated thorough mixing at flow rates as low as 6 µL/s. The corresponding mean residence time is 0.44 ms for 90% of the particles simulated, or 0.49 ms for 95% of the particles simulated. The mixing efficiency of the physical device was also evaluated using fluorescein dye solutions and FluoSphere-red nanoparticle suspensions. The constructed micromixers achieved thorough mixing at the same flow rate of 6 µL/s, with mixing indices of 96% ± 1% and 98% ± 1% for the dye and the nanoparticles, respectively. The experimental results are consistent with the simulation data. The device demonstrates promising capabilities for time-resolved studies of the dynamics of biological macromolecules. PMID:20161619
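Mixing indices like those quoted can be computed from concentration samples across the outlet cross-section. The abstract does not give its exact definition, so the sketch below uses one common choice: one minus the standard deviation relative to that of the fully segregated state.

```python
import math

def mixing_index(samples):
    """Mixing index from normalized concentration samples in [0, 1]:
    1 - sigma/sigma_max, where sigma_max = sqrt(cbar*(1-cbar)) is the
    standard deviation of a fully segregated binary state with the
    same mean concentration. 0 = unmixed, 1 = perfectly mixed."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((c - mean) ** 2 for c in samples) / n
    var_max = mean * (1.0 - mean)
    return 1.0 - math.sqrt(var / var_max)
```

By this definition a half-and-half segregated profile scores 0 and a uniform profile scores 1, matching the intuition behind the reported 96%-98% values.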
Modeling of Water Injection into a Vacuum
NASA Technical Reports Server (NTRS)
Alred, John W.; Smith, Nicole L.; Wang, K. C.; Lumpkin, Forrest E.; Fitzgerald, Steven M.
1997-01-01
A loosely coupled two-phase vacuum water plume model has been developed. This model consists of a source flow model to describe the expansion of water vapor, and the Lagrangian equations of motion for particle trajectories. Gas/particle interaction is modeled through the drag force induced by the relative velocities. Particles are assumed to travel along streamlines, and the equations of motion are integrated to obtain the particle velocity along each streamline. This model has been used to predict the mass flux in a 5 meter radius hemispherical domain resulting from the burst of a water jet 1.5 mm in diameter, with a mass flow rate of 24.2 g/s and a stagnation pressure of 21.0 psia, which is the nominal Orbiter water dump condition. The result is compared with an empirical water plume model deduced from a video image of the STS-29 water dump. To further improve the model, work has begun to numerically simulate the bubble formation and bursting present in a liquid stream injected into a vacuum. The technique of smoothed particle hydrodynamics was used to formulate this simulation. The status and results of this ongoing effort are presented and compared to results from the literature.
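The streamline integration of the particle equation of motion can be sketched with the simplest drag closure, Stokes drag with a constant response time tau_p. The actual model uses a drag force based on the relative velocity with a general drag coefficient, so this is illustrative only and the names are ours.

```python
def particle_velocity_history(u_gas, v0, tau_p, dt, n_steps):
    """Explicit Euler integration of dv/dt = (u_gas - v)/tau_p, the
    particle equation of motion with Stokes drag along a streamline;
    tau_p is the particle response time, u_gas the local gas velocity."""
    v, hist = v0, [v0]
    for _ in range(n_steps):
        v += dt * (u_gas - v) / tau_p
        hist.append(v)
    return hist
```

Over a few response times the particle relaxes exponentially toward the gas velocity, which is the qualitative behavior the plume model integrates along each streamline.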
Aerosol simulation including chemical and nuclear reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marwil, E.S.; Lemmon, E.C.
1985-01-01
The numerical simulation of aerosol transport, including the effects of chemical and nuclear reactions, presents a challenging dynamic accounting problem. Particles of different sizes agglomerate and settle out due to various mechanisms, such as diffusion, diffusiophoresis, thermophoresis, gravitational settling, turbulent acceleration, and centrifugal acceleration. Particles also change size due to the condensation and evaporation of materials on the particle. Heterogeneous chemical reactions occur at the interface between a particle and the suspending medium, or between a surface and the gas in the aerosol. Homogeneous chemical reactions occur within the aerosol suspending medium, within a particle, and on a surface. These reactions may include a phase change. Nuclear reactions occur in all locations. These spontaneous transmutations from one elemental form to another occur at greatly varying rates and may result in phase or chemical changes which complicate the accounting process. This paper presents an approach for including these effects in the transport of aerosols. The accounting system is very complex and results in a large set of stiff ordinary differential equations (ODEs), whose numerical solution requires special attention to achieve an efficient and affordable solution.
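The stiffness issue can be illustrated with the simplest implicit scheme: when rate constants span many orders of magnitude, backward Euler stays stable at a step size for which explicit Euler would blow up. This is a toy sketch of the difficulty, not the paper's solver.

```python
def backward_euler_decay(y0, k, dt, n_steps):
    """Backward (implicit) Euler for the linear system dy_i/dt = -k_i*y_i,
    a caricature of stiff aerosol/chemistry ODEs. The implicit update
    y_new = y_old/(1 + k_i*dt) is unconditionally stable, so one step
    size serves rate constants differing by many orders of magnitude."""
    y = list(y0)
    for _ in range(n_steps):
        y = [yi / (1.0 + ki * dt) for yi, ki in zip(y, k)]
    return y
```

With k = [1e6, 1e-2] and dt = 0.1, explicit Euler's amplification factor for the fast species would be |1 - k*dt| = 99999, while the implicit update simply damps it to (nearly) zero; this is why stiff ODE sets demand implicit or specially tailored integrators.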
Numerical simulation of magnetic nano drug targeting in patient-specific lower respiratory tract
NASA Astrophysics Data System (ADS)
Russo, Flavia; Boghi, Andrea; Gori, Fabio
2018-04-01
Magnetic nano drug targeting, with an external magnetic field, can potentially improve drug absorption in specific locations of the body. However, the effectiveness of the procedure can be reduced by the limitations of the magnetic field intensity. This work investigates the technique with a Computational Fluid Dynamics (CFD) approach. A single rectangular coil generates the external magnetic field. A patient-specific geometry of the trachea, with its primary and secondary bronchi, is reconstructed from Digital Imaging and Communications in Medicine (DICOM) formatted images through the Vascular Modelling Toolkit (VMTK) software. A solver coupling the Lagrangian dynamics of the magnetic nanoparticles with the Eulerian dynamics of the air is used to perform the simulations. The resistive pressure, the pulsatile inlet velocity, and the rectangular coil magnetic field are the boundary conditions. The dynamics of the injected particles is investigated with and without the magnetic probe. The flow field promotes particle adhesion to the tracheal wall. The particle volumetric flow rate has been calculated in both cases. The magnetic probe is shown to increase the particle flow in the target region, but to a limited extent. This behavior is attributed to the small particle size and the probe configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Ke; Euser, Bryan J.; Rougier, Esteban
Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from the laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not experience much change. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the magnitude of gouge layer thickness decrease also tends to be larger; while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying stick-slip dynamic behavior of granular fault systems.
Gao, Ke; Euser, Bryan J.; Rougier, Esteban; ...
2018-06-20
A Grid-Free Approach for Plasma Simulations (Grid-Free Plasma Simulation Techniques)
2007-07-10
with complex geometry, e.g., spacecraft thruster plume interactions [1], plasma sensors... space at t = 0 and the evolution of the system is obtained by... position x with velocity v at time t, φ is the electrostatic potential, q_j is the charge on species j, m_j is the mass of a particle of species j, ρ is... description of the Vlasov equation (1) with an efficient grid-free field solver for the
A Eulerian-Lagrangian Model to Simulate Two-Phase/Particulate Flows
NASA Technical Reports Server (NTRS)
Apte, S. V.; Mahesh, K.; Lundgren, T.
2003-01-01
Figure 1 shows a snapshot of liquid fuel spray coming out of an injector nozzle in a realistic gas-turbine combustor. Here the spray atomization was simulated using a stochastic secondary breakup model (Apte et al. 2003a) with a point-particle approximation for the droplets. Very close to the injector, the spray density is large and the droplets cannot be treated as point-particles. The volume displaced by the liquid in this region is significant and can alter the gas-phase flow and spray evolution. To address this issue, one can compute the dense spray regime by an Eulerian-Lagrangian technique using advanced interface tracking/level-set methods (Sussman et al. 1994; Tryggvason et al. 2001; Herrmann 2003). This, however, is computationally intensive and may not be viable in realistic complex configurations. We therefore plan to develop a methodology based on an Eulerian-Lagrangian technique which will allow us to capture the essential features of primary atomization using models for the interactions between the fluid and droplets, and which can be directly applied to the standard atomization models used in practice. The numerical scheme for unstructured grids developed by Mahesh et al. (2003) for incompressible flows is modified to take into account the droplet volume fraction, and the numerical framework is directly applicable to realistic combustor geometries. Our main objectives in this work are to: develop a numerical formulation based on Eulerian-Lagrangian techniques, with models for the interaction terms between the fluid and particles, to capture the Kelvin-Helmholtz type instabilities observed during primary atomization; validate this technique for various two-phase and particulate flows; and assess its applicability to capturing the primary atomization of liquid jets in conjunction with secondary atomization models.
Ferreiro-Rangel, Carlos A; Gelb, Lev D
2013-06-13
Structural and mechanical properties of silica aerogels are studied using a flexible coarse-grained model and a variety of simulation techniques. The model, introduced in a previous study (J. Phys. Chem. C 2007, 111, 15792-15802), consists of spherical "primary" gel particles that interact through weak nonbonded forces and through microscopically motivated interparticle bonds that may break and form during the simulations. Aerogel models are prepared using a three-stage protocol consisting of separate simulations of gelation, aging, and a final relaxation during which no further bond formation is permitted. Models of varying particle size, density, and size dispersity are considered. These are characterized in terms of fractal dimensions and pore size distributions, and generally good agreement with experimental data is obtained for these metrics. The bulk moduli of these materials are studied in detail. Two different techniques for obtaining the bulk modulus are considered, fluctuation analysis and direct compression/expansion simulations. We find that the fluctuation result can be subject to systematic error due to coupling with the simulation barostat but, if performed carefully, yields results equivalent with those of compression/expansion experiments. The dependence of the bulk modulus on density follows a power law with an exponent between 3.00 and 3.15, in agreement with reported experimental results. The best correlate for the bulk modulus appears to be the volumetric bond density, on which there is also a power law dependence. Polydisperse models exhibit lower bulk moduli than comparable monodisperse models, which is due to lower bond densities in the polydisperse materials.
NASA Astrophysics Data System (ADS)
Barodka, Siarhei; Kliutko, Yauhenia; Krasouski, Alexander; Papko, Iryna; Svetashev, Alexander; Turishev, Leonid
2013-04-01
Numerical simulation of thundercloud formation processes is of great practical interest. Thunderclouds significantly affect airplane flights, and mesoscale weather forecasting has much to contribute to aviation forecast procedures; an accurate forecast can certainly help avoid weather-related aviation accidents. The present study focuses on modelling convective cloud development and detecting thunderclouds on the basis of mesoscale atmospheric process simulation, aiming to significantly improve the aeronautical forecast. In the analysis, primary weather radar information has been used and further adapted for mesoscale forecast systems. Two types of domains have been selected for modelling: an internal one (with a radius of 8 km) and an external one (with a radius of 300 km). The internal domain has been applied directly to study local cloud development, and the external domain data have been treated as initial and final conditions for cloud cover formation. The domain height has been chosen according to the civil aviation forecast data (i.e. not exceeding 14 km). Simulations of weather conditions and local cloud development have been made within the selected domains with the WRF modelling system. In several cases, thunderclouds are detected within the convective clouds. To specify this category of clouds, we employ a simulation technique for solid-phase formation processes in the atmosphere. Based on the modelling results, we construct vertical profiles indicating the amount of solid phase in the atmosphere. Furthermore, we obtain profiles demonstrating the amount of ice particles and large particles (hailstones). While simulating the processes of solid-phase formation, we investigate vertical and horizontal air flows. Consequently, we attempt to separate the total amount of solid phase into categories of small ice particles, large ice particles and hailstones.
We also strive to reveal and differentiate the basic atmospheric parameters of the sublimation and coagulation processes, aiming to predict ice particle precipitation. To analyze the modelling results we apply the VAPOR three-dimensional visualization package. For the chosen domains, a diurnal synoptic situation has been simulated, including rain, sleet, ice pellets, and hail. As a result, we have obtained a large body of data describing various atmospheric parameters: cloud cover, major wind components, basic levels of isobaric surfaces, and precipitation rate. Based on these data, we show both the distinction in precipitation formation at various heights and its differentiation by ice particle type. The relation between the height a particle reaches in the atmosphere and its size is analyzed: at 8-10 km altitude large ice particles resulting from coagulation dominate, while at 6-7 km altitude one finds snow and small ice particles formed by condensational growth. Mechanical trajectories of solid precipitation particles for various ice formation processes have also been calculated.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field-camera to Tomo-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
The relative entropy is fundamental to adaptive resolution simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreis, Karsten; Graduate School Materials Science in Mainz, Staudingerweg 9, 55128 Mainz; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems, in which atomistic and coarse-grained (CG) force fields are employed simultaneously. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for understanding adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
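For discrete distributions the relative entropy reduces to the familiar Kullback-Leibler sum; a minimal sketch (the paper works with its configurational analogue over atomistic and coarse-grained ensembles, not over simple discrete distributions):

```python
import math

def relative_entropy(p, q):
    """Discrete Kullback-Leibler divergence D(p||q) = sum_i p_i*log(p_i/q_i).
    Nonnegative, zero iff p == q; terms with p_i == 0 contribute nothing.
    Assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
```

In the coarse-graining context, p plays the role of the atomistic reference statistics and q those generated by the CG model, so minimizing D(p||q) over CG potentials picks the model whose ensemble best matches the atomistic one.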
The relative entropy is fundamental to adaptive resolution simulations
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Potestio, Raffaello
2016-07-01
Viscosity of α-pinene secondary organic material and implications for particle growth and reactivity
Renbaum-Wolff, Lindsay; Grayson, James W.; Bateman, Adam P.; Kuwata, Mikinori; Sellier, Mathieu; Murray, Benjamin J.; Shilling, John E.; Martin, Scot T.; Bertram, Allan K.
2013-01-01
Particles composed of secondary organic material (SOM) are abundant in the lower troposphere. The viscosity of these particles is a fundamental property that is presently poorly quantified yet required for accurate modeling of their formation, growth, evaporation, and environmental impacts. Using two unique techniques, namely a “bead-mobility” technique and a “poke-flow” technique, in conjunction with simulations of fluid flow, the viscosity of the water-soluble component of SOM produced by α-pinene ozonolysis is quantified for 20- to 50-μm particles at 293–295 K. The viscosity is comparable to that of honey at 90% relative humidity (RH), similar to that of peanut butter at 70% RH, and at least as viscous as bitumen at ≤30% RH, implying that the studied SOM ranges from liquid to semisolid or solid across the range of atmospheric RH. These data combined with simple calculations or previous modeling studies are used to show the following: (i) the growth of SOM by the exchange of organic molecules between gas and particle may be confined to the surface region of the particles for RH ≤ 30%; (ii) at ≤30% RH, the particle-mass concentrations of semivolatile and low-volatility organic compounds may be overpredicted by an order of magnitude if instantaneous equilibrium partitioning is assumed in the bulk of SOM particles; and (iii) the diffusivity of semireactive atmospheric oxidants such as ozone may decrease by two to five orders of magnitude for a drop in RH from 90% to 30%. These findings have possible consequences for predictions of air quality, visibility, and climate. PMID:23620520
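The link between viscosity and oxidant diffusivity quoted above follows, to leading order, from the Stokes-Einstein relation; real SOM matrices can deviate from Stokes-Einstein behavior, so treat this as the scaling argument only.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_diffusivity(temperature, viscosity, radius):
    """Stokes-Einstein relation D = k_B*T / (6*pi*eta*r) for a sphere
    of radius r (m) diffusing through a medium of viscosity eta (Pa s)."""
    return KB * temperature / (6.0 * math.pi * viscosity * radius)
```

Because D scales as 1/eta, a four-orders-of-magnitude rise in viscosity as RH drops implies a comparable four-orders-of-magnitude fall in diffusivity, consistent with the two-to-five orders of magnitude cited for ozone.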
Reduced-Order Direct Numerical Simulation of Solute Transport in Porous Media
NASA Astrophysics Data System (ADS)
Mehmani, Yashar; Tchelepi, Hamdi
2017-11-01
Pore-scale models are an important tool for analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Current direct numerical simulation (DNS) techniques, while very accurate, are computationally prohibitive for sample sizes that are statistically representative of the porous structure. Reduced-order approaches such as pore-network models (PNM) aim to approximate the pore-space geometry and physics to remedy this problem. Predictions from current techniques, however, have not always been successful. This work focuses on single-phase transport of a passive solute under advection-dominated regimes and delineates the minimum set of approximations that consistently produce accurate PNM predictions. Novel network extraction (discretization) and particle simulation techniques are developed and compared to high-fidelity DNS results for a wide range of micromodel heterogeneities and a single sphere pack. Moreover, common modeling assumptions in the literature are analyzed and shown to lead to first-order errors under advection-dominated regimes. This work has implications for optimizing material design and operations in manufactured (electrodes) and natural (rocks) porous media pertaining to energy systems. This work was supported by the Stanford University Petroleum Research Institute for Reservoir Simulation (SUPRI-B).
N-S/DSMC hybrid simulation of hypersonic flow over blunt body including wakes
NASA Astrophysics Data System (ADS)
Li, Zhonghua; Li, Zhihui; Li, Haiyan; Yang, Yanguang; Jiang, Xinyu
2014-12-01
A hybrid N-S/DSMC method is presented and applied to solve three-dimensional hypersonic transitional flows by employing the MPC (Modular Particle-Continuum) technique based on the N-S and DSMC methods. A sub-relaxation technique is adopted to handle information transfer between the N-S and DSMC domains. The hypersonic flows over a 70-deg spherically blunted cone at different Knudsen numbers are simulated using the CFD, DSMC and hybrid N-S/DSMC methods. The present computations are found to be in good agreement with DSMC and experimental results. The present method provides an efficient way to predict hypersonic aerodynamics in the near-continuum transitional flow regime.
Halo abundance matching: accuracy and conditions for numerical convergence
NASA Astrophysics Data System (ADS)
Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan
2015-03-01
Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies - the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large-volume N-body simulations so that they provide accurate estimates of correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied: the force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of the (sub)haloes, and the number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that two-body scattering plays a minor role in the accuracy of N-body simulations, thanks to the relatively small number of crossing times of dark matter in haloes and the limited force resolution of cosmological simulations.
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated, with sites at the boundary of the lattice denoted as leak sites. Particles were allowed to move inside the lattice using the random walk model, and excluded volume interactions between the particles were assumed. We monitored the system's time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function, and a simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as the result of a diffusion process with excluded volume interactions between the drug molecules, can thus be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, which was missing from other semiempirical models.
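The lattice release simulation described above can be sketched in a few lines. This simplified version (parameters and helper name are illustrative, not from the paper) uses independent random walkers on a 1D lattice with an absorbing leak site at one end and a reflecting outer boundary; the excluded-volume interactions used in the study are omitted here.

```python
import random

def simulate_release(n_sites=50, n_particles=200, n_steps=4000, seed=1):
    """Random-walk drug release from a 1D lattice: site 0 is an
    absorbing 'leak' site, the outer boundary reflects. Excluded-volume
    interactions assumed in the paper are omitted in this sketch."""
    rng = random.Random(seed)
    pos = [rng.randrange(1, n_sites) for _ in range(n_particles)]
    released = 0
    fraction = []
    for _ in range(n_steps):
        survivors = []
        for x in pos:
            x += rng.choice((-1, 1))
            if x <= 0:
                released += 1           # particle escapes through the leak site
            else:
                survivors.append(min(x, n_sites - 1))  # reflecting wall
        pos = survivors
        fraction.append(released / n_particles)
    return fraction

frac = simulate_release()
```

The cumulative release curve `frac` could then be fitted with a Weibull function 1 - exp(-a t^b), as done in the paper.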
Decoupling the Role of Particle Inertia and Gravity on Particle Dispersion
NASA Technical Reports Server (NTRS)
Squires, Kyle D.
2002-01-01
Particle dispersion and the influence that particle momentum exchange has on the properties of a turbulent carrier flow in micro-gravity environments challenge present understanding and predictive schemes. The objective of this effort has been to develop and assess high-fidelity simulation tools for predicting the transport of particles suspended in turbulent flows within micro-gravity environments. The computational technique is based on Direct Numerical Simulation (DNS) of the incompressible Navier-Stokes equations. The particular focus of the present work is on the class of dilute flows in which particle volume fractions and inter-particle collisions are negligible. Particle motion is assumed to be governed by drag, with particle relaxation times ranging from the Kolmogorov scale to the Eulerian timescale of the turbulence and particle mass loadings up to one. The velocity field was made statistically stationary by forcing the low wavenumbers of the flow. The calculations were performed using 96³ collocation points, and the Taylor-scale Reynolds number for the stationary flow was 62. The effect of particles on the turbulence was included in the Navier-Stokes equations using the point-force approximation, with 96³ particles used in the calculations. DNS results show that particles increasingly dissipate fluid kinetic energy with increased loading, with the reduction in kinetic energy being relatively independent of the particle relaxation time. Viscous dissipation in the fluid decreases with increased loading and is larger for particles with smaller relaxation times. Fluid energy spectra show that there is a non-uniform distortion of the turbulence with a relative increase in small-scale energy. The non-uniform distortion significantly affects the transport of the dissipation rate, with the production and destruction of dissipation exhibiting completely different behaviors.
The spectrum of the fluid-particle energy exchange rate shows that the fluid drags particles at low wavenumbers while the converse is true at high wavenumbers for small particles. A spectral analysis shows that the increase of the high wavenumber portion of the fluid energy spectrum can be attributed to transfer of the fluid-particle covariance by the fluid turbulence. This in turn explains the relative increase of small-scale energy caused by small particles observed in the present simulations as well as those of others.
Molecular dynamical simulations of melting Al nanoparticles using a reaxff reactive force field
NASA Astrophysics Data System (ADS)
Liu, Junpeng; Wang, Mengjun; Liu, Pingan
2018-06-01
Molecular dynamics simulations were performed to study the thermal properties and melting points of Al nanoparticles using a reactive force field under canonical (NVT) ensembles. Al nanoparticles of 2-4 nm particle size were considered in the simulations. A combination of structural and thermodynamic parameters, such as the Lindemann index, heat capacities, potential energy and radial-distribution functions, was employed to determine melting points. We used an annealing technique to obtain the initial Al nanoparticle model. A comparison between ReaxFF results and other simulation results showed that the ReaxFF force field describes Al cluster melting behavior reasonably well, and a linear relationship between particle size and melting point was found. After validating the ReaxFF force field, more attention was paid to the thermal properties of Al nanoparticles with different defect concentrations; 4 nm Al nanoparticles with defect concentrations of 5%-20% were considered in this paper. Our results revealed that the melting point is insensitive to defect concentration at a given particle size. The extra storage energy of Al nanoparticles is proportional to the defect concentration for concentrations of 5%-15%, while a particle with 20% defect concentration behaves similarly to one with 10%. After melting, the extra energy of all nanoparticles decreases sharply, and the extra storage energy is nearly zero at 600 K. Centro-symmetry parameter analysis shows the structural evolution of the different models during the melting process.
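The Lindemann index used above as a melting indicator is straightforward to compute from stored MD trajectory frames. The sketch below is a generic implementation (function name and toy data are illustrative, not from the paper): it averages, over all atom pairs, the relative root-mean-square fluctuation of the pair distance across frames.

```python
import itertools
import math

def lindemann_index(frames):
    """Lindemann index: mean over atom pairs of the relative rms
    fluctuation of the pair distance across MD frames; a sharp jump
    with temperature is a standard melting-point indicator."""
    n = len(frames[0])
    pairs = list(itertools.combinations(range(n), 2))
    total = 0.0
    for i, j in pairs:
        dists = [math.dist(f[i], f[j]) for f in frames]
        mean = sum(dists) / len(dists)
        var = sum((d - mean) ** 2 for d in dists) / len(dists)
        total += math.sqrt(var) / mean
    return total / len(pairs)

# rigid (solid-like) cluster: zero distance fluctuation
solid = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]] * 5
# one atom wanders between frames (liquid-like): nonzero index
liquid = [[(0.0, 0.0, 0.0), (1.0 + 0.1 * k, 0.0, 0.0)] for k in range(5)]
```

In a melting study one would evaluate this index at a series of temperatures and look for the abrupt rise near the melting point.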
The Prospect for Remote Sensing of Cirrus Clouds with a Submillimeter-Wave Spectrometer
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Evans, Aaron H.; Nolt, Ira G.; Marshall, B. Thomas
1999-01-01
Given the substantial radiative effects of cirrus clouds and the need to validate cirrus cloud mass in climate models, it is important to measure the global distribution of cirrus properties with satellite remote sensing. Existing cirrus remote sensing techniques, such as solar reflectance methods, measure cirrus ice water path (IWP) rather indirectly and with limited accuracy. Submillimeter-wave radiometry is an independent method of cirrus remote sensing based on ice particles scattering the upwelling radiance emitted by the lower atmosphere. A new aircraft instrument, the Far Infrared Sensor for Cirrus (FIRSC), is described. The FIRSC employs a Fourier Transform Spectrometer (FTS), which measures the upwelling radiance across the whole submillimeter region (0.1-1.0-mm wavelength). This wide spectral coverage gives high sensitivity to most cirrus particle sizes and allows accurate determination of the characteristic particle size. Radiative transfer modeling is performed to analyze the capabilities of the submillimeter FTS technique. A linear inversion analysis shows that cirrus IWP, particle size, and upper-tropospheric temperature and water vapor may be accurately measured. A nonlinear statistical algorithm is developed using a database of 20,000 spectra simulated by randomly varying the most relevant cirrus and atmospheric parameters. An empirical orthogonal function analysis reduces the 500-point spectrum (20-70/cm) to 15 "pseudo-channels" that are then input to a neural network to retrieve cirrus IWP and median particle diameter. A Monte Carlo accuracy study is performed with simulated spectra having realistic noise. The retrieval errors are low for IWP (rms less than a factor of 1.5) and for particle size (rms less than 30%) for IWP greater than 5 g/sq m and a wide range of median particle sizes. This detailed modeling indicates that there is good potential to accurately measure cirrus properties with a submillimeter FTS.
Hybrid molecular-continuum simulations using smoothed dissipative particle dynamics
Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott
2015-01-01
We present a new multiscale simulation methodology for coupling a region with atomistic detail simulated via molecular dynamics (MD) to a numerical solution of the fluctuating Navier-Stokes equations obtained from smoothed dissipative particle dynamics (SDPD). In this approach, chemical potential gradients emerge due to differences in resolution within the total system and are reduced by introducing a pairwise thermodynamic force inside the buffer region between the two domains where particles change from MD to SDPD types. When combined with a multi-resolution SDPD approach, such as the one proposed by Kulkarni et al. [J. Chem. Phys. 138, 234105 (2013)], this method makes it possible to systematically couple atomistic models to arbitrarily coarse continuum domains modeled as SDPD fluids with varying resolution. We test this technique by showing that it correctly reproduces thermodynamic properties across the entire simulation domain for a simple Lennard-Jones fluid. Furthermore, we demonstrate that this approach is also suitable for non-equilibrium problems by applying it to simulations of the start up of shear flow. The robustness of the method is illustrated with two different flow scenarios in which shear forces act in directions parallel and perpendicular to the interface separating the continuum and atomistic domains. In both cases, we obtain the correct transient velocity profile. We also perform a triple-scale shear flow simulation where we include two SDPD regions with different resolutions in addition to a MD domain, illustrating the feasibility of a three-scale coupling. PMID:25637963
Classifying and modelling spiral structures in hydrodynamic simulations of astrophysical discs
NASA Astrophysics Data System (ADS)
Forgan, D. H.; Ramón-Fox, F. G.; Bonnell, I. A.
2018-05-01
We demonstrate numerical techniques for automatic identification of individual spiral arms in hydrodynamic simulations of astrophysical discs. Building on our earlier work, which used tensor classification to identify regions that were `spiral-like', we can now obtain fits to spirals for individual arm elements. We show this process can even detect spirals in relatively flocculent spiral patterns, but the resulting fits to logarithmic `grand-design' spirals are less robust. Our methods not only permit the estimation of pitch angles, but also direct measurements of the spiral arm width and pattern speed. In principle, our techniques will allow the tracking of material as it passes through an arm. Our demonstration uses smoothed particle hydrodynamics simulations, but we stress that the method is suitable for any finite-element hydrodynamics system. We anticipate our techniques will be essential to studies of star formation in disc galaxies, and attempts to find the origin of recently observed spiral structure in protostellar discs.
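A core step in the pitch-angle estimation mentioned above is fitting a logarithmic spiral r = r0 exp(b θ) to points along one arm, which is linear in (θ, ln r). The helper below is a hypothetical illustration of that fitting step only, not the paper's full tensor-classification pipeline.

```python
import math

def fit_log_spiral(points):
    """Least-squares fit of ln r = ln r0 + b*theta to (theta, r) samples
    along one spiral arm; the pitch angle is arctan(b)."""
    thetas = [t for t, _ in points]
    logr = [math.log(r) for _, r in points]
    n = len(points)
    mt, ml = sum(thetas) / n, sum(logr) / n
    b = (sum((t - mt) * (l - ml) for t, l in zip(thetas, logr))
         / sum((t - mt) ** 2 for t in thetas))
    r0 = math.exp(ml - b * mt)
    return r0, b, math.degrees(math.atan(b))

# noiseless synthetic arm: r = 2 e^{0.3 theta}
pts = [(0.1 * k, 2.0 * math.exp(0.3 * 0.1 * k)) for k in range(1, 40)]
r0, b, pitch = fit_log_spiral(pts)
```

For flocculent patterns the fit residuals would be large, which matches the paper's observation that grand-design logarithmic fits are less robust there.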
Numerical and experimental approaches to study soil transport and clogging in granular filters
NASA Astrophysics Data System (ADS)
Kanarska, Y.; Smith, J. J.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.
2012-12-01
Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is to use adequate granular filters in the transition areas where large hydraulic gradients can appear. In case of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and dam safety will be ensured. Numerical modeling has proved to be a cost-effective tool for improving our understanding of physical processes. Traditionally, the treatment of flow and particle transport in porous media has focused on treating the media as a continuum. Practical models typically address flow and transport based on Darcy's law as a function of a pressure gradient and a medium-dependent permeability parameter, with additional macroscopic constitutive relations describing porosity and permeability changes during the migration of a suspension through porous media. However, most of these rely on empirical correlations, which often need to be recalibrated for each application. Grain-scale modeling can be used to gain insight into the scale dependence of continuum macroscale parameters. A finite element numerical solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration in the filter layers of a gravity dam. The numerical approach was validated through comparison of numerical simulations with the experimental results on base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted flow and pressure decay due to particle clogging, and the base soil particle distribution was almost identical to that measured in the laboratory experiment. It is believed that the agreement between simulations and experimental data demonstrates the applicability of the proposed approach for prediction of soil transport and clogging in embankment dams.
To gain a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various factors such as the particle size ratio, the amplitude of the hydraulic gradient, particle concentration and contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The Department of Homeland Security Science and Technology Directorate provided funding for this research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.
The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
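The continuous velocity interpolation used for particle tracking can be sketched on a single triangle. The functions below are hypothetical illustrations (not the authors' code): velocities defined at the three vertices, standing in for the Darcy velocities reconstructed at control-volume centroids, are interpolated with barycentric weights, and a particle takes one explicit Euler step.

```python
def barycentric_velocity(p, tri, v):
    """Interpolate the velocity at point p inside triangle tri from the
    velocities v given at the three vertices (standing in for Darcy
    velocities reconstructed at control-volume centroids)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return tuple(w1 * a + w2 * b + w3 * c
                 for a, b, c in zip(v[0], v[1], v[2]))

def advect(p, tri, v, dt):
    """One explicit Euler particle-tracking step."""
    u = barycentric_velocity(p, tri, v)
    return (p[0] + dt * u[0], p[1] + dt * u[1])

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
v_uniform = ((1.0, 0.0), (1.0, 0.0), (1.0, 0.0))
u = barycentric_velocity((0.2, 0.2), tri, v_uniform)
p_next = advect((0.2, 0.2), tri, v_uniform, 0.5)
```

At a fracture intersection the DFN scheme described above would hold several such velocity fields, one per fracture on each side of the intersection line, and choose among them.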
2015-09-16
Orion Exploration Flight Test Post-Flight Inspection and Analysis
NASA Technical Reports Server (NTRS)
Miller, J. E.; Berger, E. L.; Bohl, W. E.; Christiansen, E. L.; Davis, B. A.; Deighton, K. D.; Enriquez, P. A.; Garcia, M. A.; Hyde, J. L.; Oliveras, O. M.
2017-01-01
The principal mechanism for developing orbital debris environment models is to make observations of larger pieces of debris, in the range of several centimeters and greater, using radar and optical techniques. For particles smaller than this threshold, breakup models and models of particle migration to returned surfaces in lower orbit are relied upon to quantify the flux. This reliance on models to derive the spatial densities of particles that are of critical importance to spacecraft makes the unique nature of EFT-1's returned surface a valuable metric. To this end, detailed post-flight inspections of the returned EFT-1 backshell were performed, which identified six candidate impact sites that were not present during the pre-flight inspections. This paper describes the post-flight analysis efforts to characterize the EFT-1 mission craters. This effort included ground-based testing to understand small-particle impact craters in the thermal protection material; the pre- and post-flight inspections; crater analysis using optical, X-ray computed tomography (CT) and scanning electron microscope (SEM) techniques; and numerical simulations.
On-chip particle trapping and manipulation
NASA Astrophysics Data System (ADS)
Leake, Kaelyn Danielle
The ability to control and manipulate the world around us is human nature. Humans and our ancestors have used tools for millions of years, yet only in recent years have we been able to control objects at such small scales. In order to understand the world around us it is frequently necessary to interact with the biological world. Optical trapping and manipulation offer a non-invasive way to move, sort and interact with particles and cells to see how they react to the world around them. Optical tweezers are ideal in their abilities, but they require large, non-portable, and expensive setups, limiting how and where we can use them. A cheap, portable platform is required for optical manipulation to reach its full potential, and on-chip technology offers a great solution to this challenge. We focused on the Liquid-Core Anti-Resonant Reflecting Optical Waveguide (liquid-core ARROW) for our work. The ARROW is an ideal platform: its anti-resonant layers allow light to be guided in liquids, so particles can easily be manipulated. It is manufactured using standard silicon fabrication techniques, making it easy to produce, and its planar design makes it easy to integrate with other technologies. Initially I worked to improve the ARROW chip by reducing the intersection losses and the fluorescence background on the chip. The ARROW chip has already been used to trap and push particles along its channel, but here I introduce several new methods of particle trapping and manipulation on the ARROW chip. Traditional two-beam traps use two counter-propagating beams; here, a trapping scheme is introduced that uses two orthogonal beams which, counter to first instinct, allow for trapping at their intersection. This scheme is predicted and analyzed in detail under realistic conditions. Simulations of this method were done using a program that models both the fluidics and the optical sources in complex situations.
These simulations were also used to model and predict a sorting method which combines fluid flow with a single optical source to automatically sort dielectric particles by size in waveguide networks; the predictions were shown to be accurate when repeated on-chip. Lastly, I introduce a particle trapping technique that uses Multimode Interference (MMI) patterns to trap multiple particles at once. The locations of the traps, as well as their number, can be adjusted by changing the input wavelength. By switching the wavelength back and forth between two values, the MMI pattern can pass a particle down the channel like a conveyor belt.
Charged-particle emission tomography
NASA Astrophysics Data System (ADS)
Ding, Yijun
Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) images of thin tissue slices. To get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of the 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick sections, thus increasing laboratory throughput and eliminating distortions due to registration. In CPET, molecules or cells of interest are labeled so that they emit charged particles without significant alteration of their biological function. Therefore, by imaging the source of the charged particles, one can gain information about the distribution of the molecules or cells of interest. Two special cases of CPET include beta emission tomography (BET) and alpha emission tomography (alphaET), where the charged particles employed are fast electrons and alpha particles, respectively. A crucial component of CPET is the charged-particle detector. Conventional charged-particle detectors are sensitive only to the 2D positions of the detected particles. We propose a new detector concept, which we call the particle-processing detector (PPD). A PPD measures attributes of each detected particle, including location, direction of propagation, and/or the energy deposited in the detector. Reconstruction algorithms for CPET are developed, and reconstruction results from simulated data are presented for both BET and alphaET. The results show that, in addition to position, direction and energy provide valuable information for 3D reconstruction in CPET. Several designs of particle-processing detectors are described, and experimental results for one detector are discussed.
With appropriate detector design and careful data analysis, it is possible to measure direction and energy, as well as position of each detected particle. The null functions of CPET with PPDs that measure different combinations of attributes are calculated through singular-value decomposition. In general, the more particle attributes are measured from each detection event, the smaller the null space of CPET is. In other words, the higher dimension the data space is, the more information about an object can be recovered from CPET.
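The null-space calculation via singular-value decomposition described above can be illustrated on a toy discretized imaging operator. The helper and operators below are hypothetical, chosen only to show how measuring more attributes per detection event shrinks the null space.

```python
import numpy as np

def null_space_dim(H, tol=1e-10):
    """Dimension of the null space of a discretized imaging operator H;
    object components along null-space directions produce no data and
    cannot be recovered from measurements."""
    s = np.linalg.svd(H, compute_uv=False)
    rank = int((s > tol * s.max()).sum())
    return H.shape[1] - rank

# a detector measuring only the sum of two voxels: one invisible direction
H_sum = np.array([[1.0, 1.0]])
# measuring an additional independent attribute (here the difference)
# empties the null space
H_both = np.array([[1.0, 1.0], [1.0, -1.0]])
```

Here the two-row operator recovers everything, mirroring the statement that higher-dimensional data spaces leave a smaller null space.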
NASA Astrophysics Data System (ADS)
Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan
2016-12-01
In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and modelling such a system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to minimize the remediation cost. Repeatedly calling the simulator to check the constraints is an extremely tedious and time-consuming process, so a surrogate simulator is needed to reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III. The selection of the ELM is based on a comparative analysis with the Artificial Neural Network (ANN) and Support Vector Machine (SVM), as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled ELM-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that the ELM is a faster and more accurate proxy simulator than the ANN and SVM.
The total cost obtained by the ELM-PSO approach is minimized while successfully satisfying all the regulatory constraints of the contaminated site.
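The PSO step of such a coupled scheme can be sketched generically. The code below is a minimal, self-contained PSO (parameter values and the quadratic toy cost are illustrative assumptions; in the study the cost would come from the ELM surrogate with penalty terms for the regulatory constraints).

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimiser: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# toy surrogate "remediation cost": quadratic bowl with minimum at (2, 3)
best, val = pso(lambda x: (x[0] - 2) ** 2 + (x[1] - 3) ** 2,
                [(-10, 10), (-10, 10)])
```

Swapping the toy lambda for a trained surrogate evaluation gives the ELM-PSO coupling described in the abstract.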
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out, and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
Schottky Noise and Beam Transfer Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaskiewicz, M.
2016-12-01
Beam transfer functions (BTFs) encapsulate the stability properties of charged particle beams. In general, one excites the beam with a sinusoidal signal and measures the amplitude and phase of the beam response. Most systems are very nearly linear, so various Fourier techniques can be used to reduce the number of measurements and/or simulations needed to fully characterize the response. Schottky noise is associated with the finite number of particles in the beam, and this signal is always present. Since the Schottky current drives wakefields, the measured Schottky signal is influenced by parasitic impedances.
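The Fourier shortcut mentioned above can be illustrated with a toy linear system: instead of sweeping many sinusoidal excitations, one broadband excitation (here an impulse) yields the transfer function at all frequencies from the ratio of response to excitation spectra. The two-tap filter standing in for the beam dynamics is an illustrative assumption, not a beam model.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for a short record)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def respond(x):
    """Stand-in linear 'beam': y[t] = 0.5 x[t] + 0.25 x[t-1] with
    circular indexing, replacing the real beam dynamics."""
    return [0.5 * x[t] + 0.25 * x[t - 1] for t in range(len(x))]

n = 64
x = [1.0] + [0.0] * (n - 1)    # impulse: excites every frequency at once
y = respond(x)
# transfer function: response spectrum over excitation spectrum
H = [yk / xk for xk, yk in zip(dft(x), dft(y))]
```

Each complex `H[k]` carries the amplitude and phase one would otherwise measure with a separate sinusoidal excitation at that frequency.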
Taylor dispersion of colloidal particles in narrow channels
NASA Astrophysics Data System (ADS)
Sané, Jimaan; Padding, Johan T.; Louis, Ard A.
2015-09-01
We use a mesoscopic particle-based simulation technique to study the classic convection-diffusion problem of Taylor dispersion for colloidal discs in confined flow. When the disc diameter becomes non-negligible compared to the diameter of the pipe, there are important corrections to the original Taylor picture. For example, the colloids can flow more rapidly than the underlying fluid, and their Taylor dispersion coefficient is decreased. For narrow pipes, there are also further hydrodynamic wall effects. The long-time tails in the velocity autocorrelation functions are altered by the Poiseuille flow.
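The classical Taylor-Aris result that this abstract builds on, D_eff = D + a²U²/(48D) for a tube of radius a and mean speed U, can be checked with a point-tracer Monte Carlo sketch. Note this toy ignores exactly the finite-particle-size and wall effects the paper is about, and all parameter values are arbitrary:

```python
import numpy as np

def taylor_dispersion_mc(a=1.0, D=0.05, U=1.0, n=2000, dt=0.01,
                         t_total=200.0, seed=1):
    """Monte Carlo estimate of the axial dispersion coefficient for point
    tracers in Poiseuille pipe flow: advect by u(r) = 2U(1 - r^2/a^2),
    diffuse, reflect at the wall, and fit the axial variance growth."""
    rng = np.random.default_rng(seed)
    steps = int(t_total / dt)
    r2 = rng.uniform(0.0, a**2, n)            # uniform over cross-section
    th = rng.uniform(0.0, 2 * np.pi, n)
    y, z = np.sqrt(r2) * np.cos(th), np.sqrt(r2) * np.sin(th)
    x = np.zeros(n)
    sig = np.sqrt(2 * D * dt)
    half = steps // 2
    var1 = t1 = None
    for s in range(steps):
        u_ax = 2 * U * (1 - (y**2 + z**2) / a**2)
        x += u_ax * dt + sig * rng.standard_normal(n)
        y += sig * rng.standard_normal(n)
        z += sig * rng.standard_normal(n)
        r = np.sqrt(y**2 + z**2)
        out = r > a
        if out.any():                          # radial reflection at wall
            scale = (2 * a - r[out]) / r[out]
            y[out] *= scale
            z[out] *= scale
        if s == half:                          # first variance sample,
            var1, t1 = x.var(), (s + 1) * dt   # after the transient
    var2, t2 = x.var(), steps * dt
    return (var2 - var1) / (2 * (t2 - t1))

D_eff = taylor_dispersion_mc()
D_taylor = 0.05 + (1.0**2 * 1.0**2) / (48 * 0.05)   # D + a^2 U^2 / (48 D)
```

The paper's corrections (colloids leading the fluid, reduced dispersion, wall hydrodynamics) appear precisely when the tracers above are given finite size and hydrodynamic coupling.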
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated for its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring those of the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g., heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density were extrapolated from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen and carbon monoxide.
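The core reweighting idea, re-using a chain sampled at one inverse temperature to estimate averages at a neighboring one, can be illustrated on a toy system whose canonical average is known exactly. This is a generic single-histogram reweighting sketch, not the authors' MCMC-regeneration scheme:

```python
import numpy as np

def reweight_mean_energy(E, beta0, beta1):
    """Estimate <E> at inverse temperature beta1 from canonical samples
    drawn at beta0, by Boltzmann reweighting of the stored chain."""
    logw = -(beta1 - beta0) * E
    logw -= logw.max()               # stabilise the exponentials
    w = np.exp(logw)
    return np.sum(w * E) / np.sum(w)

# Toy check: a 1D harmonic degree of freedom, E = x^2, sampled exactly
# at beta0; the exact canonical average is <E> = 1/(2*beta).
rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 1.2
x = rng.normal(0.0, np.sqrt(1.0 / (2.0 * beta0)), 200000)
E = x**2
E_reweighted = reweight_mean_energy(E, beta0, beta1)
E_exact = 1.0 / (2.0 * beta1)
```

The same weights extend to second-derivative properties, e.g. the heat capacity via the reweighted fluctuation <E²> - <E>², which is the extension the abstract describes; accuracy degrades as |beta1 - beta0| grows and the weights concentrate on few samples.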
NASA Astrophysics Data System (ADS)
Gruzdev, Vitaly; Komolov, Vladimir; Li, Hao; Yu, Qingsong; Przhibel'skii, Sergey; Smirnov, Dmitry
2011-02-01
The objective of this combined experimental and theoretical research is to study the dynamics and mechanisms of nanoparticle interaction with ultrashort laser pulses and the related modifications of the substrate surface. For the experimental effort, metal (gold), dielectric (SiO2) and metal-coated dielectric (coating about 30 nm thick) spherical nanoparticles deposited on glass substrates are utilized. The size of the particles varies from 20 to 200 nm. The density of the particles varies from low (mean inter-particle distance 100 nm) to high (mean inter-particle distance less than 1 nm). The nanoparticle assemblies and the corresponding empty substrate surfaces are irradiated with single 130-fs laser pulses at a wavelength of 775 nm and different levels of laser fluence. The large diameter of the laser spot (0.5-2 mm) provides gradient variations of laser intensity over the spot and allows observation of different laser-nanoparticle interactions. The interactions vary from total removal of the nanoparticles in the center of the laser spot to gentle modification of their size and shape and totally non-destructive interaction. The removed particles frequently form specific sub-micrometer-size pits on the substrate surface at their locations. The experimental effort is supported by simulations of the nanoparticle interactions with high-intensity ultrashort laser pulses. The simulation employs a specific modification of the molecular dynamics approach applied to model the processes of non-thermal particle ablation following laser-induced electron emission. This technique delivers various characteristics of the ablation plume from a single nanoparticle, including the energy and speed distribution of emitted ions, variations of particle size, and the overall dynamics of its ablation. The considered geometry includes a single isolated particle as well as a single particle on a flat substrate, corresponding to the experimental conditions.
The simulations confirm existence of the different regimes of laser-nanoparticle interactions depending on laser intensity and wavelength. In particular, implantation of ions departing from the nanoparticles towards the substrate is predicted.
Regolith Activation on the Lunar Surface and Its Ground Test Simulation
NASA Technical Reports Server (NTRS)
Gaier, James R.
2009-01-01
Activation of the surfaces of lunar regolith particles can occur through interactions with solar electromagnetic radiation, solar and galactic particle radiation, and micrometeoroid bombardment. An attempt has been made to quantify the relative importance of each of these effects. The effects of these activated surfaces may be to enhance the adhesion and toxicity of the particles. Also key to the importance of activation are the lifetimes of activated states in various environments, which are controlled by their passivation rates as well as their activation rates. Although techniques exist to characterize the extent of activation of particles in biological systems, it is important to be able to quantify the activation state on the lunar surface, in ground-test vacuum systems, and in habitat atmospheres as well.
Particle-In-Cell Analysis of an Electric Antenna for the BepiColombo/MMO spacecraft
NASA Astrophysics Data System (ADS)
Miyake, Yohei; Usui, Hideyuki; Kojima, Hirotsugu
The BepiColombo/MMO spacecraft is planned to provide the first electric field measurements in Mercury's magnetosphere by mounting two types of electric antennas: WPT and MEFISTO. The sophisticated calibration of such measurements should be based on precise knowledge of the antenna characteristics in space plasma. However, it is difficult to determine the practical antenna characteristics, accounting for plasma kinetics and spacecraft-plasma interactions, by means of theoretical approaches. Furthermore, a modern antenna design technique known as the "hockey puck" principle is applied to MEFISTO, which introduces much complexity in its overall configuration. Thus a strong demand arises for a numerical method that can handle the complex configuration and plasma dynamics when evaluating the electric properties of the modern instrument. For self-consistent antenna analysis, we have developed a particle simulation code named EMSES based on the particle-in-cell technique, including a treatment of antenna conductive surfaces. In this paper, we mainly focus on electrostatic (ES) features and the photoelectron distribution in the vicinity of MEFISTO. Our simulation model includes (1) a photoelectron guard electrode, (2) a bias current provided from the spacecraft body to the sensing element, (3) a floating potential treatment for the spacecraft body, and (4) photoelectron emission from sunlit surfaces of the conductive bodies. Of these, the photoelectron guard electrode is a key technology for producing an optimal plasma environment around MEFISTO. Specifically, we introduced a pre-amplifier housing called the puck, located between the conductive boom and the sensor wire. The photoelectron guard is then simulated by forcibly fixing the potential difference between the puck surface and the spacecraft body.
For the modeling, we use the Capacity Matrix technique in order to assure the conservation condition of total charge owned by the entire spacecraft body. We report some numerical analyses on the influence of the guard electrode on the surrounding plasma environment by using the developed model.
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
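The Darcy's-law post-processing step named above is simple enough to sketch on its own (the LBM solver itself is omitted). The check uses plane Poiseuille flow in a slit, for which the permeability is known analytically to be h²/12; all numerical values are arbitrary:

```python
import numpy as np

def darcy_permeability(u_mean, mu, dp_dx):
    """Darcy's law post-processing: k = mu * <u> / (-dp/dx), with <u> the
    superficial (volume-averaged) velocity from the flow simulation."""
    return mu * u_mean / (-dp_dx)

# Sanity check on a geometry with a known answer: plane Poiseuille flow
# in a slit of width h has mean velocity <u> = G h^2 / (12 mu) under a
# pressure gradient -dp/dx = G, hence k = h^2 / 12.
h, mu, G = 1e-3, 1e-3, 10.0
y = np.linspace(0.0, h, 2001)
u = G / (2 * mu) * y * (h - y)          # parabolic velocity profile
k = darcy_permeability(np.mean(u), mu, -G)
k_exact = h**2 / 12
```

With an LBM solution, `np.mean(u)` is replaced by the average axial velocity over the whole sample volume (pore plus solid), so the computed k directly inherits any pore-space differences produced by the competing segmentations.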
Modeling the complex shape evolution of sedimenting particle swarms in fractures
NASA Astrophysics Data System (ADS)
Mitchell, C. A.; Nitsche, L.; Pyrak-Nolte, L. J.
2016-12-01
The flow of micro- and nano-particles through subsurface systems can occur in several environments, such as hydraulic fracturing or enhanced oil recovery. Computer simulations were performed to advance our understanding of the complexity of subsurface particle swarm transport in fractures. Previous experiments observed that particle swarms in fractures with uniform apertures exhibit enhanced transport speeds and suppressed bifurcations for an optimal range of apertures. Numerical simulations were performed for low Reynolds number, no interfacial tension, and uniform viscosity conditions, with particulate swarms represented by point particles that mutually interact through their (regularized) Stokeslet fields. A P3M technique accelerates the summations for swarms exceeding 10^5 particles. Fracture wall effects were incorporated using a least-squares variant of the method of fundamental solutions, with grid mapping of the surface force and source elements within the fast-summation scheme. The numerical study was executed on the basis of dimensionless variables and parameters, in the interest of examining the fundamental behavior and relationships of particle swarms in the presence of uniform apertures. Model parameters were representative of particle swarm experiments to enable direct comparison of the results with the experimental observations. The simulations confirmed that the principal phenomena observed in the experiments can be explained within the realm of Stokes flow. The numerical investigation effectively replicated swarm evolution in a uniform fracture and captured the coalescence, torus and tail formation, and ultimate breakup of the particle swarm as it fell under gravity in a quiescent fluid. The rate of swarm evolution depended on the number of particles in a swarm. When an ideal number of particles was used, swarm transport was characterized by an enhanced velocity regime as observed in the laboratory data.
Understanding the physics of particle swarms in fractured media will improve the ability to perform controlled micro-particulate transport through rock. Acknowledgment: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Geosciences Research Program under Award Number DE-FG02-09ER16022.
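The mutual interaction through regularized Stokeslet fields can be sketched with a direct O(N²) sum of the standard Cortez regularized Stokeslet kernel (the P3M acceleration and wall treatment are omitted; the viscosity, blob parameter ε, and swarm geometry are arbitrary assumptions of this toy):

```python
import numpy as np

def regularized_stokeslet_velocities(pos, forces, eps, mu=1.0):
    """Direct O(N^2) sum of the Cortez regularized Stokeslet:
    u_i(x) = sum_j [ f_j (r^2 + 2 eps^2) + (f_j . r) r ] /
             (8 pi mu (r^2 + eps^2)^{3/2}),  r = x - x_j."""
    r = pos[:, None, :] - pos[None, :, :]          # r[i, j] = x_i - x_j
    r2 = np.sum(r**2, axis=-1)
    denom = (r2 + eps**2) ** 1.5
    fdotr = np.einsum('jk,ijk->ij', forces, r)     # f_j . r_ij
    u = ((r2 + 2 * eps**2)[:, :, None] * forces[None, :, :]
         + fdotr[:, :, None] * r) / denom[:, :, None]
    return u.sum(axis=1) / (8 * np.pi * mu)

# A compact swarm of identical point forces settling under gravity: the
# mutual interactions make the swarm fall much faster than one particle.
rng = np.random.default_rng(2)
n = 200
pos = rng.normal(0.0, 1.0, (n, 3))
g_force = np.tile([0.0, 0.0, -1.0], (n, 1))
u = regularized_stokeslet_velocities(pos, g_force, eps=0.1)
mean_settling = -u[:, 2].mean()

# Far-field check: at large separation the regularized kernel reduces to
# the classical Stokeslet, u_z = f_z / (8 pi mu r) for f perpendicular to r.
pair = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
f_pair = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
u_far = regularized_stokeslet_velocities(pair, f_pair, eps=0.1)[1, 2]
```

Time-stepping the positions with these velocities reproduces the qualitative torus-and-breakup evolution described above, though at N above roughly 10^4 the direct sum becomes the bottleneck that the paper's P3M scheme addresses.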
NASA Astrophysics Data System (ADS)
Tomori, Zoltan; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Zemánek, Pavel
2016-12-01
We propose improved control software for holographic optical tweezers (HOT), suitable for simple semi-automated sorting. The controller receives data from both the human interface sensors and the HOT microscope camera and processes them. As a result, the new positions of the active laser traps are calculated, packed into the network format, and sent to the remote HOT. Using a photo-polymerization technique, we created a sorting container consisting of two parallel horizontal walls, where one wall contains "gates", the places through which a trapped particle enters the container. The positions of particles and gates are obtained by image analysis, which can be exploited to achieve a higher level of automation. Sorting is demonstrated both in a computer game simulation and in a real experiment.
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long time of simulation. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by the numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step size in numerical integrations.
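The simplest widely used volume-preserving splitting for charged-particle motion is the non-relativistic Boris scheme, shown below as a baseline illustration of the splitting idea (it is not the authors' high-order processed relativistic method). In a pure magnetic field the rotation step conserves kinetic energy to machine precision, the hallmark of the good long-time behavior discussed above:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt, steps):
    """Classic volume-preserving Boris splitting: half electric kick,
    exact-norm magnetic rotation, half electric kick, position drift."""
    traj = np.empty((steps, 3))
    for s in range(steps):
        v_minus = v + 0.5 * q_m * E * dt
        t_vec = 0.5 * q_m * B * dt
        s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
        v_prime = v_minus + np.cross(v_minus, t_vec)
        v_plus = v_minus + np.cross(v_prime, s_vec)   # rotation: |v| unchanged
        v = v_plus + 0.5 * q_m * E * dt
        x = x + v * dt
        traj[s] = x
    return x, v, traj

# Uniform B along z, no E: gyration with exactly conserved speed.
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
x0, v0 = np.zeros(3), np.array([1.0, 0.0, 0.1])
x1, v1, traj = boris_push(x0, v0, E, B, q_m=1.0, dt=0.1, steps=5000)
speed_drift = abs(np.linalg.norm(v1) - np.linalg.norm(v0))
```

Higher-order versions of the kind the paper constructs compose such elementary maps with tuned coefficients (plus a processing map), trading a few extra stages per step for larger stable time steps.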
Simulations of the Evolution of Vapor Ejected by the LCROSS Impact on the Moon
NASA Astrophysics Data System (ADS)
Hurley, D. M.; Killen, R. M.; Team, L.; Potter, A. E.
2009-12-01
We present simulations of the vapor plume evolution resulting from the LCROSS impact onto the Moon. The simulation employs the Monte Carlo technique to follow the trajectory of particles, assuming a collisionless atmosphere, from the time a particle reaches the collisionless regime until the particle is lost from the Moon. We use realistic topography and examine how different implementations of physics within the model affect the evolution of the vapor plume. We simulate Na, H2O, OH, H, O, and Ar. If observations from LAMP and ground-based observations of Na are successful (they are TBD at the time of writing this abstract), we present the observations and use the model to interpret them. LAMP is the Lyman Alpha Mapping Project onboard the Lunar Reconnaissance Orbiter. After impact, LAMP will observe FUV spectra in search of H and Ar in the atmosphere. We propose to use the McMath-Pierce main telescope to observe the impact plume, which is scheduled to occur on October 9, 2009 at 11:30 UT (7:30 a.m. EDT, 4:30 a.m. PDT), +/- 30 minutes. The spectrum of the impact plume will be measured using the Stellar Spectrograph and the McMath-Pierce main telescope. The spectral range will be chosen to observe sodium. The purpose of this observation is to calibrate the impact. We know the sodium content of the regolith; a measure of the extra sodium content in the impact plume will serve to calibrate the impact. We will observe the impact region with the East Auxiliary Telescope in white light to estimate the amount of dust produced by the impact. [Figure: Distribution of simulated Ar particles 2 hours after the LCROSS impact.]
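The collisionless Monte Carlo idea can be illustrated with a deliberately crude ballistic-hop sketch on an airless Moon: launch particles with Maxwellian speeds, re-thermalise on each surface contact, and flag escape above the escape speed. The vapor temperature, hop count, and omission of photo-destruction and topography are all arbitrary assumptions of this toy, not the paper's model:

```python
import numpy as np

G_MOON = 1.62       # m/s^2, lunar surface gravity
V_ESC = 2380.0      # m/s, lunar escape speed

def escape_fraction(n, T, mass_amu, n_hops=50, seed=3):
    """Collisionless Monte Carlo sketch: particles leave the surface with
    Maxwellian speeds at temperature T, fully re-thermalise each hop, and
    escape once their launch speed exceeds the escape speed."""
    kB, amu = 1.380649e-23, 1.66053907e-27
    rng = np.random.default_rng(seed)
    vth = np.sqrt(kB * T / (mass_amu * amu))     # per-component thermal speed
    escaped = np.zeros(n, dtype=bool)
    for _ in range(n_hops):
        # speed = |3D Maxwellian velocity|; escaped particles stay escaped
        v = np.linalg.norm(rng.normal(0.0, vth, (n, 3)), axis=1)
        escaped |= v > V_ESC
    return escaped.mean()

# Lighter, hotter species are lost far more readily: compare H2O and Ar
# launched from an assumed 3000 K impact-vapor temperature.
f_h2o = escape_fraction(20000, 3000.0, 18.0)
f_ar = escape_fraction(20000, 3000.0, 40.0)
```

A production model replaces the flat-Moon re-thermalisation with ballistic trajectories over real topography, surface-temperature-dependent sticking, and photo-loss timescales, which is where the "different implementations of physics" compared above enter.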
NASA Astrophysics Data System (ADS)
Abratenko, P.; Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Huang, E.-C.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Kalousis, L. N.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2017-10-01
We discuss a technique for measuring a charged particle's momentum by means of multiple Coulomb scattering (MCS) in the MicroBooNE liquid argon time projection chamber (LArTPC). This method does not require the full particle ionization track to be contained inside of the detector volume as other track momentum reconstruction methods do (range-based momentum reconstruction and calorimetric momentum reconstruction). We motivate use of this technique, describe a tuning of the underlying phenomenological formula, quantify its performance on fully contained beam-neutrino-induced muon tracks both in simulation and in data, and quantify its performance on exiting muon tracks in simulation. Using simulation, we have shown that the standard Highland formula should be re-tuned specifically for scattering in liquid argon, which significantly improves the bias and resolution of the momentum measurement. With the tuned formula, we find agreement between data and simulation for contained tracks, with a small bias in the momentum reconstruction and with resolutions that vary as a function of track length, improving from about 10% for the shortest (one meter long) tracks to 5% for longer (several meter) tracks. For simulated exiting muons with at least one meter of track contained, we find a similarly small bias, and a resolution which is less than 15% for muons with momentum below 2 GeV/c. Above 2 GeV/c, results are given as a first estimate of the MCS momentum measurement capabilities of MicroBooNE for high momentum exiting tracks.
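The underlying phenomenological formula is the Highland expression for the RMS projected scattering angle; a sketch of inverting observed scattering into momentum follows. The PDG constants 13.6 MeV and 0.038 and the 14 cm segment are assumptions here; the MicroBooNE-tuned constants differ, and the real analysis fits a likelihood over many segments with detector resolution terms rather than this single-segment bisection:

```python
import numpy as np

M_MU = 105.658   # MeV/c^2, muon mass
X0_LAR = 14.0    # cm, radiation length of liquid argon

def highland_theta0(p, segment=14.0, S2=13.6, eps=0.038):
    """Highland RMS projected scattering angle (mrad) for a muon of
    momentum p (MeV/c) crossing one segment (cm) of liquid argon."""
    beta = p / np.sqrt(p**2 + M_MU**2)
    ratio = segment / X0_LAR
    return (S2 / (p * beta)) * np.sqrt(ratio) * (1 + eps * np.log(ratio)) * 1e3

def mcs_momentum(theta0_obs, lo=50.0, hi=5000.0):
    """Invert the (monotonically decreasing) Highland formula by bisection:
    find the momentum whose predicted theta0 matches the observed RMS
    segment-to-segment scattering angle."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if highland_theta0(mid) > theta0_obs:
            lo = mid      # predicts more scattering than seen -> p is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_true = 1000.0                              # MeV/c
p_est = mcs_momentum(highland_theta0(p_true))
```

Because the method needs only the track's bending, not its endpoint, it works for exiting tracks, which is exactly the advantage over range-based reconstruction stated above.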
Realistic micromechanical modeling and simulation of two-phase heterogeneous materials
NASA Astrophysics Data System (ADS)
Sreeranganathan, Arun
This dissertation research focuses on micromechanical modeling and simulations of two-phase heterogeneous materials exhibiting anisotropic and non-uniform microstructures with long-range spatial correlations. The completed work involves the development of methodologies for realistic micromechanical analyses of materials using a combination of stereological techniques, two- and three-dimensional digital image processing, and finite element based modeling tools. The methodologies are developed via their application to two technologically important material systems, namely, discontinuously reinforced aluminum composites containing silicon carbide particles as reinforcement, and boron-modified titanium alloys containing in situ formed titanium boride whiskers. Microstructural attributes such as the shape, size, volume fraction, and spatial distribution of the reinforcement phase in these materials were incorporated in the models without any simplifying assumptions. Instrumented indentation was used to determine the constitutive properties of individual microstructural phases. Micromechanical analyses were performed using realistic 2D and 3D models, and the results were compared with experimental data. Results indicated that 2D models fail to capture the deformation behavior of these materials and that 3D analyses are required for realistic simulations. The effect of clustering of silicon carbide particles and associated porosity on the mechanical response of discontinuously reinforced aluminum composites was investigated using 3D models. Parametric studies were carried out using computer-simulated microstructures incorporating realistic microstructural attributes.
The intrinsic merit of this research is the development and integration of the required enabling techniques and methodologies for representation, modeling, and simulations of complex geometry of microstructures in two- and three-dimensional space facilitating better understanding of the effects of microstructural geometry on the mechanical behavior of materials.
Quantitative analysis of packed and compacted granular systems by x-ray microtomography
NASA Astrophysics Data System (ADS)
Fu, Xiaowei; Milroy, Georgina E.; Dutt, Meenakshi; Bentham, A. Craig; Hancock, Bruno C.; Elliott, James A.
2005-04-01
The packing and compaction of powders are common processes in the pharmaceutical, food, ceramic and powder metallurgy industries. Understanding how particles pack in a confined space and how powders behave during compaction is crucial for producing high-quality products. This paper outlines a new technique, based on modern desktop X-ray tomography and image processing, to quantitatively investigate the packing of particles during powder compaction and to provide insight into how powders densify, relating material properties and processing conditions to tablet manufacture by compaction. A variety of powder systems were considered, including glass, sugar and NaCl, with a typical particle size of 200-300 μm, as well as binary mixtures of NaCl and glass spheres. The results are new and have been validated by SEM observation and numerical simulations using discrete element methods (DEM). The research demonstrates that the XMT technique has potential for further investigation of pharmaceutical processing and even for verifying other physical models of complex packing.
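Quantitative analysis of tomograms of the kind described above starts from binarising grey levels into solid and pore phases; Otsu's classic global threshold is one common choice (the paper's own image-processing pipeline is not specified). A self-contained sketch on a synthetic bimodal grey-level distribution:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Global Otsu threshold: choose the grey level that maximises the
    between-class variance of the histogram, equivalently minimises the
    within-class variance of 'pore' vs 'solid' voxels."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 (dark) probability
    mu = np.cumsum(p * centers)       # class-0 cumulative mean mass
    mu_total = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.full(n_bins, -1.0)
    between[valid] = ((mu_total * w0[valid] - mu[valid]) ** 2
                      / (w0[valid] * w1[valid]))
    return centers[np.argmax(between)]

# Synthetic bimodal "tomogram": 30% dark pore voxels, 70% bright solid.
rng = np.random.default_rng(4)
pore = rng.normal(50.0, 10.0, 30000)
solid = rng.normal(150.0, 10.0, 70000)
img = np.concatenate([pore, solid])
thresh = otsu_threshold(img)
porosity = np.mean(img < thresh)
```

On real tomograms the competing segmentation standards differ mainly in how they handle partial-volume voxels at grain boundaries, which is why packing statistics such as porosity can vary between methods.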
Development of hybrid computer plasma models for different pressure regimes
NASA Astrophysics Data System (ADS)
Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf
2016-09-01
With the increased performance of contemporary computers over the last decades, numerical simulations have become a very powerful tool, applicable also in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid computer plasma models give results with only limited accuracy. On the other hand, much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches mentioned above, particularly to their so-called iterative version. The study is focused on the mutual relations between fluid and particle models, demonstrated through calculations of the sheath structure of low-temperature argon plasma near a cylindrical Langmuir probe at medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crow, A J
2009-07-07
Andrew Crow arrived at Lawrence Livermore National Laboratory with the intention of continuing work on the Complex Particle Kinetic (CPK) method developed by D. Larson and D. Hewett. Andrew Crow had previously worked on duplicating the results of D. Hewett. Since arrival, A. Crow has been working with D. Larson on a slightly different project. The current method, still under development, is a Particle in Cell (PIC) code with the following features: (1) All particles begin each timestep at a gridpoint. (2) Particles are then advanced in time using a standard advancement method. The exact method has not been decided upon, but there are many reliable methods from which to choose. (3) All particles within each cell undergo a simultaneous implicit collision step. This is the current area of focus. Currently, A. Crow is not aware of any method of performing implicit collisions over a large number of charged particles. Implicit methods for charged particle movement and electron-electron collisions have been developed. The work of L. Pareschi and G. Russo on the Time Relaxed Direct Simulation Monte Carlo method also appears to be a good basis for implicit particle collisions. (4) Each individual particle will be divided into a set of particles with a Gaussian velocity distribution. This will collect some of the thermal effects created by the collisions. This algorithm has not been created. (5) Particles will be projected onto the grid points. Currently, a linear weighting technique is intended to be used, but has not been settled upon. (6) Once on the gridpoints, the particle number will be reduced using a set of quadrature points based on the third-order velocity moments of the particles. The method proposed by R. Fox has been programmed and shown to conserve energy, momentum and mass to machine precision.
In addition to reducing the number of particles, this method will work to quiet the simulation; it will behave as a higher-order version of the Quiet DSMC method proposed by B. Albright et al. (7) These quadrature points then become the new particles for the next timestep. The advantages of this method are many: the self-force on ions can be easily removed since all particles begin on grid points; the size of the timesteps should not be limited by the collision rate, and should only be impacted by particle travel time through the cell; and the particle reduction technique should keep many of the higher-order features of the particle distribution while reducing the number of particles in the system. It should also quiet the variance in the system. The two largest unknowns at this time are how large a part numerical diffusion will play in the scheme and how computationally expensive each timestep will be.
Atomistic minimal model for estimating profile of electrodeposited nanopatterns
NASA Astrophysics Data System (ADS)
Asgharpour Hassankiadeh, Somayeh; Sadeghi, Ali
2018-06-01
We develop a computationally efficient and methodologically simple approach to realize molecular dynamics simulations of electrodeposition. Our minimal model takes into account the nontrivial electric field due to a sharp electrode tip in order to simulate the controllable coating of a thin layer on a surface with atomic precision. On the atomic scale, highly site-selective electrodeposition of ions and charged particles by means of the sharp tip of a scanning probe microscope is possible. A better understanding of the microscopic process, obtained mainly from atomistic simulations, helps us to enhance the quality of this nanopatterning technique and to make it applicable to the fabrication of nanowires and nanocontacts. In the limit of screened inter-particle interactions, it is feasible to run very fast simulations of the electrodeposition process within the framework of the proposed model, and thus to investigate how the shape of the overlayer depends on the tip-sample geometry and dielectric properties, electrolyte viscosity, etc. Our calculation results reveal that the sharpness of the profile of a nano-scale deposited overlayer is dictated by the normal-to-sample-surface component of the electric field underneath the tip.
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
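The MLE idea above can be sketched with a toy model: assume (hypothetically) Gaussian detection-rate profiles for two axially offset foci and Poisson photon statistics, then grid-search the axial position that maximizes the likelihood of the observed counts. The real TSUNAMI geometry uses more complex multiplexed point-spread functions; everything below is illustrative.

```python
import numpy as np

# Hypothetical Gaussian detection profiles for two axially offset foci;
# the actual TSUNAMI point-spread functions are more complex.
def focus_rate(z, z0, sigma=0.4, peak=100.0):
    return peak * np.exp(-0.5 * ((z - z0) / sigma) ** 2)

def neg_log_likelihood(z, counts, offsets):
    rates = np.array([focus_rate(z, z0) for z0 in offsets])
    rates = np.clip(rates, 1e-9, None)               # guard against log(0)
    return -(counts * np.log(rates) - rates).sum()   # Poisson NLL, constants dropped

def mle_z(counts, offsets, grid=np.linspace(-1.0, 1.0, 2001)):
    """Grid-search MLE for the axial position z."""
    nll = [neg_log_likelihood(z, counts, offsets) for z in grid]
    return grid[int(np.argmin(nll))]

rng = np.random.default_rng(0)
offsets = (-0.25, 0.25)                              # assumed focus positions
counts = rng.poisson([focus_rate(0.1, z0) for z0 in offsets])
print(mle_z(counts, offsets))                        # estimate near the true z = 0.1
```

With noiseless counts the grid search recovers the true position exactly (up to grid resolution); with Poisson noise the spread of the estimate is set by the Fisher information of the focal profiles.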
Electrophoresis demonstration on Apollo 16
NASA Technical Reports Server (NTRS)
Snyder, R. S.
1972-01-01
Free fluid electrophoresis, a process used to separate particulate species according to surface charge, size, or shape, was suggested as a promising technique to utilize the near-zero-gravity conditions of space. Fluid electrophoresis on earth is disturbed by gravity-induced thermal convection and sedimentation. An apparatus was developed to demonstrate the principle and possible problems of electrophoresis on Apollo 14, and the separation boundary between red and blue dye was photographed in space. The basic operating elements of the Apollo 14 unit were used for a second flight demonstration on Apollo 16. Polystyrene latex particles of two different sizes were used to simulate the electrophoresis of large biological particles. The particle bands in space were extremely stable compared to ground operation because convection in the fluid was negligible. Electrophoresis of the polystyrene latex particle groups according to size was accomplished, although electro-osmosis in the flight apparatus prevented the clear separation of the two particle bands.
Martinez-Pedrero, Fernando; Massana-Cid, Helena; Ziegler, Till; Johansen, Tom H; Straube, Arthur V; Tierno, Pietro
2016-09-29
We demonstrate a size-sensitive experimental scheme which enables bidirectional transport and fractionation of paramagnetic colloids in a fluid medium. It is shown that two types of magnetic colloidal particles with different sizes can be simultaneously transported in opposite directions when deposited above a stripe-patterned ferrite garnet film subjected to a square-wave magnetic modulation. Due to their different sizes, the particles are located at distinct elevations above the surface, and they experience two different energy landscapes generated by the modulated magnetic substrate. By combining theoretical arguments and numerical simulations, we reveal such energy landscapes, which fully explain the bidirectional transport mechanism. The proposed technique does not require pre-imposed channel geometries such as in conventional microfluidics or lab-on-a-chip systems, and permits remote control over the particle motion, speed, and trajectory by using relatively low-intensity magnetic fields.
Mass production of shaped particles through vortex ring freezing
NASA Astrophysics Data System (ADS)
An, Duo; Warning, Alex; Yancey, Kenneth G.; Chang, Chun-Ti; Kern, Vanessa R.; Datta, Ashim K.; Steen, Paul H.; Luo, Dan; Ma, Minglin
2016-08-01
A vortex ring is a torus-shaped fluidic vortex. During its formation, the fluid experiences a rich variety of intriguing geometrical intermediates from spherical to toroidal. Here we show that these constantly changing intermediates can be `frozen' at controlled time points into particles with various unusual and unprecedented shapes. These novel vortex-ring-derived particles are mass-produced by employing a simple and inexpensive electrospraying technique, with their sizes well controlled from hundreds of microns to millimetres. Guided further by theoretical analyses and a laminar multiphase fluid flow simulation, we show that this freezing approach is applicable to a broad range of materials from organic polysaccharides to inorganic nanoparticles. We demonstrate the unique advantages of these vortex-ring-derived particles in several applications including cell encapsulation, three-dimensional cell culture, and cell-free protein production. Moreover, compartmentalization and ordered structures composed of these novel particles are all achieved, creating opportunities to engineer more sophisticated hierarchical materials.
GPU acceleration of Eulerian-Lagrangian particle-laden turbulent flow simulations
NASA Astrophysics Data System (ADS)
Richter, David; Sweet, James; Thain, Douglas
2017-11-01
The Lagrangian point-particle approximation is a popular numerical technique for representing dispersed phases whose properties can substantially deviate from those of the local fluid. In many cases, particularly in the limit of one-way coupled systems, large numbers of particles are desired; this may be either because many physical particles are present (e.g. LES of an entire cloud), or because the use of many particles increases statistical convergence (e.g. high-order statistics). Solving the trajectories of very large numbers of particles can be problematic in traditional MPI implementations, however, and this study reports the benefits of using graphical processing units (GPUs) to integrate the particle equations of motion while preserving the original MPI version of the Eulerian flow solver. It is found that GPU acceleration becomes cost-effective around one million particles, and performance enhancements of up to 15x can be achieved when O(10^8) particles are computed on the GPU rather than the CPU cluster. Optimizations and limitations will be discussed, as will prospects for expanding to two- and four-way coupled systems. ONR Grant No. N00014-16-1-2472.
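In the one-way coupled limit, the point-particle approximation reduces to integrating a drag-relaxation ODE per particle, which is why it maps so well onto GPUs. A minimal sketch (Stokes drag only; NumPy vectorization standing in for per-particle GPU kernels; the toy uniform fluid field is an assumption, a real solver interpolates from the Eulerian grid):

```python
import numpy as np

def step_particles(x, v, fluid_u, tau_p, dt):
    """One explicit step of the one-way-coupled point-particle equations
    dv/dt = (u_f(x) - v) / tau_p,  dx/dt = v  (Stokes drag only).
    On a GPU, each particle's update would become one thread of a kernel."""
    u = fluid_u(x)                      # fluid velocity at particle positions
    v = v + dt * (u - v) / tau_p        # relax toward the local fluid velocity
    x = x + dt * v
    return x, v

# Toy uniform carrier flow; a real solver interpolates from the LES grid.
fluid_u = lambda x: np.ones_like(x)
x, v = np.zeros((1000, 3)), np.zeros((1000, 3))
for _ in range(1000):
    x, v = step_particles(x, v, fluid_u, tau_p=0.1, dt=0.01)
print(v[0])   # particle velocities have relaxed to the fluid velocity (1, 1, 1)
```

Because every particle's update is independent of every other's, the arithmetic intensity per particle is low and the population is large, which is exactly the regime where the abstract reports GPUs paying off.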
3D finite element modelling of force transmission and particle fracture of sand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imseeh, Wadi H.; Alshibli, Khalid A.
Global compressive loading of granular media causes rearrangement of particles into a denser configuration. Under 1D compression, researchers observed that particles initially translate and rotate, which leads to more contacts between particles and the development of force chains to resist applied loads. Particles within force chains resist most of the applied loads, while neighboring particles provide lateral support to prevent particles within force chains from buckling. Several experimental and numerical models have been proposed in the literature to characterize force chains within granular materials. This paper presents a 3D finite element (FE) model that simulates a 1D compression experiment on F-75 Ottawa sand. The FE mesh of particles closely matched the 3D physical shape of sand particles that were acquired using the 3D synchrotron micro-computed tomography (SMT) technique. The paper presents a quantitative assessment of the model, in which the evolution of force chains, fracture modes, and stress-strain relationships showed excellent agreement with experimental measurements reported by Cil and Alshibli (2017).
Decoupling the Roles of Inertia and Gravity on Particle Dispersion
NASA Technical Reports Server (NTRS)
Groszmann, D. E.; Thompson, J. H.; Coppen, S. W.; Rogers, C. B.
1999-01-01
Inertial and gravitational forces determine a particle's motion in a turbulent flow field. Gravity plays the dominant role in this motion by pulling the particles through adjacent regions of fluid turbulence. To better understand and model how a particle's inertia affects its displacement, one must examine the dispersion in a turbulent flow in the absence of gravity. In this paper, we present the particle experiments planned for NASA's KC-135 Reduced-Gravity Aircraft, which generates microgravity conditions for about 20 seconds. We also predict the particle behavior using simulations and ground-based experiments. We will release particles with Stokes numbers of 0.1, 1, and 10 into an enclosed tank of near-isotropic, stationary, and homogeneous turbulence. These particle Stokes numbers cover a broad range of flow regimes of interest. Two opposed grids oscillating back and forth generate the turbulent field in the tank, with a range of turbulence scales that covers about three orders of magnitude and with turbulence intensities of about ten times the mean velocity. The motion of the particles will be tracked using a stereo image velocimetry technique.
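The Stokes numbers quoted above are the ratio of the particle response time to a characteristic fluid time scale. A small helper makes the definition concrete (the material values below are illustrative stand-ins, not the experiment's):

```python
def particle_response_time(rho_p, d_p, mu):
    """Stokes response time of a small sphere: tau_p = rho_p * d_p**2 / (18 mu)."""
    return rho_p * d_p ** 2 / (18.0 * mu)

def stokes_number(rho_p, d_p, mu, tau_fluid):
    """St = tau_p / tau_f: particle response time over a turbulence time scale.
    St << 1 particles track the flow; St >> 1 particles barely respond."""
    return particle_response_time(rho_p, d_p, mu) / tau_fluid

# Illustrative values only (not the experiment's): a 50 um glass bead in air,
# against an assumed eddy-turnover time of 0.1 s.
St = stokes_number(rho_p=2500.0, d_p=50e-6, mu=1.8e-5, tau_fluid=0.1)
print(St)   # ~0.19, near the low end of the St = 0.1-10 range studied above
```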
Detection of small metal particles by a quasi-optical system at sub-millimeter wavelength
NASA Astrophysics Data System (ADS)
Kitahara, Yasuyuki; Domier, C. W.; Ikeda, Makoto; Pham, Anh-Vu; Luhmann, Neville C.
2016-04-01
Inspection for alien metal particles in electronic materials such as glass fibers and resins is a critical issue in controlling the quality and guaranteeing the safety of products. In this paper, we present a new detection technique using sub-millimeter waves for films as electronic materials in product lines. The advantage of using sub-millimeter wave frequencies is that it is easy to distinguish conductive particles from a nonconductive material such as a plastic film. Scattering of a sub-millimeter wave by a metal particle is used as the detection principle. By simulation, it is observed that the scattering pattern varies intricately as the diameter varies from 10 to 700 μm at 300 GHz. The demonstration system is composed of a Keysight performance network analyzer (N5247A PNA-X) with 150-330 GHz VDI extension modules, transmitting and receiving antennas, and a focusing dielectric lens. An output signal is radiated via an antenna and focused onto a metal particle on a film. The wave scattered by the metal particle is detected by an identical antenna through a lens. The signal scattered from a metal particle is evaluated from the insertion loss between the antennas (S21). The results show that a particle of diameter 300 μm is detectable at 150-330 GHz through S21 in the experimental system that we prepared. Peaks calculated in simulation were also detected in the experimental data in the curves of particle diameter versus S21. It was shown that using this peak frequency could improve the S21 level without resorting to higher frequencies.
Karim, Mir; Indei, Tsutomu; Schieber, Jay D; Khare, Rajesh
2016-01-01
Particle rheology is used to extract the linear viscoelastic properties of an entangled polymer melt from molecular dynamics simulations. The motion of a stiff, approximately spherical particle is tracked in both passive and active modes. We demonstrate that the dynamic modulus of the melt can be extracted under certain limitations using this technique. As shown before for unentangled chains [Karim et al., Phys. Rev. E 86, 051501 (2012), 10.1103/PhysRevE.86.051501], the frequency range of applicability is substantially expanded when both particle and medium inertia are properly accounted for by using our inertial version of the generalized Stokes-Einstein relation (IGSER). The system used here introduces an entanglement length d_{T}, in addition to those length scales already relevant: monomer bead size d, probe size R, polymer radius of gyration R_{g}, simulation box size L, shear wave penetration length Δ, and wave period Λ. Previously, we demonstrated a number of restrictions necessary to obtain the relevant fluid properties: the continuum approximation breaks down when d≳Λ; medium inertia is important and the IGSER is required when R≳Λ; and the probe should not experience hydrodynamic interaction with its periodic images, L≳Δ. These restrictions are also observed here. A simple scaling argument for entangled polymers shows that the simulation box size must scale with polymer molecular weight as M_{w}^{3}. Continuum analysis requires the existence of an added mass to the probe particle from the entrained medium, but this was not observed in the earlier work for unentangled chains. We confirm here that this added mass is necessary only when the thickness L_{S} of the shell around the particle that contains the added mass satisfies L_{S}>d. We also demonstrate that the IGSER can be used to predict particle displacement over a given timescale from knowledge of the medium viscoelasticity; such ability will be of interest for designing nanoparticle-based drug delivery.
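As a point of contrast with the inertial IGSER used above: in the classical non-inertial limit, for a purely viscous medium, the generalized Stokes-Einstein relation collapses to the ordinary Stokes-Einstein result, so a probe's mean-squared displacement gives the viscosity directly. A sketch on synthetic data (not from these simulations) makes the baseline concrete:

```python
import numpy as np

kT = 4.11e-21   # thermal energy at ~298 K, in joules

def viscosity_from_msd(times, msd, radius):
    """Non-inertial, purely viscous limit of the GSER: in 3D, MSD(t) = 6 D t
    with D = kT / (6 pi eta a), so viscosity follows from the MSD slope.
    The IGSER of the paper adds particle and medium inertia on top of this."""
    D = np.polyfit(times, msd, 1)[0] / 6.0    # diffusion coefficient from slope
    return kT / (6.0 * np.pi * D * radius)

# Synthetic check: a 100 nm probe in a 1 mPa s Newtonian fluid.
eta_true, a = 1.0e-3, 100e-9
D_true = kT / (6.0 * np.pi * eta_true * a)
t = np.linspace(0.0, 1.0, 200)
eta_est = viscosity_from_msd(t, 6.0 * D_true * t, a)
print(eta_est)   # recovers ~1.0e-3 Pa s
```

For a viscoelastic melt the MSD is not linear in time and the full (I)GSER inversion in the frequency domain is required; this sketch only covers the Newtonian limiting case.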
Impact gages for detecting meteoroid and other orbital debris impacts on space vehicles.
NASA Technical Reports Server (NTRS)
Mastandrea, J. R.; Scherb, M. V.
1973-01-01
Impacts on space vehicles have been simulated using the McDonnell Douglas Aerophysics Laboratory (MDAL) Light-Gas Guns to launch particles at hypervelocity speeds into scaled space structures. Using impact gages and a triangulation technique, these impacts have been detected and accurately located. This paper describes in detail the various types of impact gages (piezoelectric PZT-5A, quartz, electret, and off-the-shelf plastics) used. This description includes gage design and experimental results for gages installed on single-walled scaled payload carriers, multiple-walled satellites and space stations, and single-walled full-scale Delta tank structures. Brief descriptions of the triangulation technique, the impact simulation, and the data acquisition system are also included.
Brand, P; Havlicek, P; Steiners, M; Holzinger, K; Reisgen, U; Kraus, T; Gube, M
2013-01-01
Studies concerning welding fume-related adverse health effects in welders are hampered by the heterogeneity of workplace situations, resulting in complex and non-standardized exposure conditions. In order to carry out welding fume exposure studies under controlled and standardized conditions, the Aachen Workplace Simulation Laboratory was developed. This laboratory consists of an emission room, in which welding fume is produced, and an exposure room in which human subjects are exposed to these fumes. Both rooms are connected by a ventilation system which allows the welding fume concentration to be regulated. Particle mass concentration was measured with a TEOM microbalance and the particle number-size distribution using a Grimm SMPS device. In a study, which is the subject of this paper, it has been shown that welding fume concentration can easily be regulated between 1 and about 3 mg m⁻³. The chosen concentration can be kept constant for more than 8 h. However, transport of the particles from the emission room into the exposure room leads to a change in particle size distribution, which is probably due to coagulation of the fraction of smallest particles. The Aachen Workplace Simulation Laboratory is suitable for controlled exposure studies with human subjects.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems of discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov-chain ensemble-averaging technique to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties of the Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
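The molecular dynamics method listed above is, in its simplest form, a velocity-Verlet integration of Newton's equations for particles interacting through a well-defined potential. A minimal two-particle Lennard-Jones sketch in reduced units (parameters illustrative, not those of the Si/SiC studies):

```python
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Lennard-Jones force on particle 0 due to particle 1 (reduced units)."""
    r = np.linalg.norm(r_vec)
    mag = 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * r_vec / r

def velocity_verlet(x, v, dt, steps, m=1.0):
    """Integrate the classical equations of motion for a two-particle LJ dimer."""
    f = lj_force(x[0] - x[1])
    for _ in range(steps):
        a = np.array([f, -f]) / m                    # equal and opposite forces
        x = x + v * dt + 0.5 * a * dt ** 2           # position update
        f_new = lj_force(x[0] - x[1])
        v = v + 0.5 * (a + np.array([f_new, -f_new]) / m) * dt   # velocity update
        f = f_new
    return x, v

x0 = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])    # start slightly stretched
v0 = np.zeros((2, 3))
x, v = velocity_verlet(x0, v0, dt=1e-3, steps=2000)
print(np.linalg.norm(x[0] - x[1]))   # bond length oscillates about the LJ minimum
```

Velocity-Verlet is the standard choice here because it is time-reversible and conserves energy well over long runs; a three-body term of the kind studied in the report would enter as an additional force contribution.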
Cooper, Justin T; Peterson, Eric M; Harris, Joel M
2013-10-01
Due to its high specific surface area and chemical stability, porous silica is used as a support structure in numerous applications, including heterogeneous catalysis, biomolecule immobilization, sensors, and liquid chromatography. Reversed-phase liquid chromatography (RPLC), which uses porous silica support particles, has become an indispensable separations tool in quality control, pharmaceutics, and environmental analysis requiring identification of compounds in mixtures. For complex samples, the need for higher resolution separations requires an understanding of the time scale of processes responsible for analyte retention in the stationary phase. In the present work, single-molecule fluorescence imaging is used to observe transport of individual molecules within RPLC porous silica particles. This technique allows direct measurement of intraparticle molecular residence times, intraparticle diffusion rates, and the spatial distribution of molecules within the particle. On the basis of the localization uncertainty and characteristic measured diffusion rates, statistical criteria were developed to resolve the frame-to-frame behavior of molecules into moving and stuck events. The measured diffusion coefficient of moving molecules was used in a Monte Carlo simulation of a random-walk model within the cylindrical geometry of the particle diameter and microscope depth-of-field. The simulated molecular transport is in good agreement with the experimental data, indicating transport of moving molecules in the porous particle is described by a random-walk. Histograms of stuck-molecule event times, locations, and their contributions to intraparticle residence times were also characterized.
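The random-walk picture above can be sketched as free diffusion inside a bounded circular cross-section, with the residence time taken as the first exit time. All parameter values below are illustrative assumptions, not the measured values from the paper:

```python
import numpy as np

def intraparticle_residence_time(D, radius, dt, rng, max_steps=1_000_000):
    """Random walk of one molecule inside a circular particle cross-section:
    Gaussian diffusive steps until the walker first crosses the boundary."""
    pos = np.zeros(2)                        # start at the particle centre
    step_sd = np.sqrt(2.0 * D * dt)          # per-axis step size for diffusion
    for step in range(1, max_steps + 1):
        pos += rng.normal(0.0, step_sd, size=2)
        if np.hypot(pos[0], pos[1]) > radius:
            return step * dt                 # first exit = residence time
    return max_steps * dt

rng = np.random.default_rng(1)
times = [intraparticle_residence_time(D=1e-13, radius=2.5e-6, dt=1e-2, rng=rng)
         for _ in range(200)]
print(np.mean(times))   # close to the 2D mean first-exit time radius**2 / (4 D)
```

A fuller model, as in the paper, would alternate such moving segments with stuck events drawn from an adsorption-time distribution and account for the microscope depth-of-field.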
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger-volume-scale simulations.
Zhang, Yu; Mukamel, Shaul; Khalil, Munira; Govind, Niranjan
2015-12-08
Valence-to-core (VtC) X-ray emission spectroscopy (XES) has emerged as a powerful technique for the structural characterization of complex organometallic compounds in realistic environments. Since the spectrum represents electronic transitions from the ligand molecular orbitals to the core holes of the metal centers, the approach is more chemically sensitive to the metal-ligand bonding character compared with conventional X-ray absorption techniques. In this paper we study how linear-response time-dependent density functional theory (LR-TDDFT) can be harnessed to simulate K-edge VtC X-ray emission spectra reliably. LR-TDDFT allows one to go beyond the single-particle picture that has been extensively used to simulate VtC-XES. We consider seven low- and high-spin model complexes involving chromium, manganese, and iron transition metal centers. Our results are in good agreement with experiment.
A Study of Dynamic Powder Consolidation Based on a Particle-Level Mathematical Model.
NASA Astrophysics Data System (ADS)
Williamson, Richard L.
A mathematical model is developed to investigate the effects of large amplitude shock waves on powder materials during dynamic consolidation. The model is constructed at the particle level, focusing on a region containing a few powder particles and the surrounding interstices. The general equations of continuum mechanics are solved over this region, using initial and boundary conditions appropriate for the consolidation process. Closure of the equation system is obtained using an analytical equation of state; relations are included to account for solid to liquid phase changes. An elastic, perfectly-plastic constitutive law, specifically modified to describe material behavior at high-strain-rates, is applied to the solid materials. To reduce complexity, the model is restricted to two dimensions, therefore individual particles are approximated as infinitely long cylinders rather than spheres. The equation system is solved using standard finite-difference numerical techniques. It is demonstrated that for typical consolidation conditions, energy diffusion mechanisms are insignificant during the rapid densification phase of consolidation. Using type 304 stainless steel powder material, the particle-level model is used to investigate the mechanisms responsible for particle surface heating and metallurgical bonding during consolidation. It is demonstrated that energy deposition near particle surfaces results both from rapid particle deformation during interstitial filling and large localized impacts occurring at the final instant of interstitial closure; particle interior regions remain at sufficiently low temperatures to avoid microstructural modification. Nonuniform metallurgical bonding is predicted around the particle periphery, ranging from complete fusion to mechanical abutment. Simulation results are used to investigate the detailed wave propagation phenomena at the particle level, providing an improved understanding of this complex behavior. 
A variety of parametric studies are conducted including investigations of the effects of stress wave amplitude and rise time, the role of interstitial gases during consolidation, and various geometric aspects including the importance of initial void fraction. The model is applied to a metal matrix composite system to investigate the consolidation of mixtures of differing materials; results of a two-dimensional experiment are included. Available experimental data are compared with simulation results. In general, very good agreement between simulation results and data is obtained.
SPH modelling of depth-limited turbulent open channel flows over rough boundaries.
Kazemi, Ehsan; Nichols, Andrew; Tait, Simon; Shao, Songdong
2017-01-10
A numerical model based on the smoothed particle hydrodynamics method is developed to simulate depth-limited turbulent open channel flows over hydraulically rough beds. The 2D Lagrangian form of the Navier-Stokes equations is solved, in which a drag-based formulation is used based on an effective roughness zone near the bed to account for the roughness effect of bed spheres and an improved sub-particle-scale model is applied to account for the effect of turbulence. The sub-particle-scale model is constructed based on the mixing-length assumption rather than the standard Smagorinsky approach to compute the eddy-viscosity. A robust in/out-flow boundary technique is also proposed to achieve stable uniform flow conditions at the inlet and outlet boundaries where the flow characteristics are unknown. The model is applied to simulate uniform open channel flows over a rough bed composed of regular spheres and validated by experimental velocity data. To investigate the influence of the bed roughness on different flow conditions, data from 12 experimental tests with different bed slopes and uniform water depths are simulated, and a good agreement has been observed between the model and experimental results of the streamwise velocity and turbulent shear stress. This shows that both the roughness effect and flow turbulence should be addressed in order to simulate the correct mechanisms of turbulent flow over a rough bed boundary and that the presented smoothed particle hydrodynamics model accomplishes this successfully. © 2016 The Authors International Journal for Numerical Methods in Fluids Published by John Wiley & Sons Ltd.
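The mixing-length closure mentioned above computes the eddy viscosity from a wall-distance-based mixing length rather than a Smagorinsky filter scale. A short sketch, with a consistency check against the log-law velocity profile (values illustrative):

```python
import numpy as np

KAPPA = 0.41   # von Karman constant

def eddy_viscosity(y, dudy):
    """Mixing-length closure (used here in place of standard Smagorinsky):
    nu_t = (kappa * y)**2 * |du/dy|, with mixing length l_m = kappa * y."""
    return (KAPPA * y) ** 2 * np.abs(dudy)

# Consistency check against the log-law region, where du/dy = u_tau / (kappa y),
# so the closure should reduce to nu_t = kappa * y * u_tau.
u_tau = 0.05
y = np.linspace(0.001, 0.02, 50)
nu_t = eddy_viscosity(y, u_tau / (KAPPA * y))
print(np.allclose(nu_t, KAPPA * y * u_tau))   # True
```

In the SPH setting, `dudy` would be evaluated from the kernel-weighted velocity gradient at each particle, and `y` measured from the top of the effective roughness zone.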
Electrolysis Bubbles Make Waterflow Visible
NASA Technical Reports Server (NTRS)
Schultz, Donald F.
1990-01-01
Technique for visualization of three-dimensional flow uses tiny tracer bubbles of hydrogen and oxygen made by electrolysis of water. Strobe-light photography used to capture flow patterns, yielding permanent record that is measured to obtain velocities of particles. Used to measure simulated mixing turbulence in proposed gas-turbine combustor and also used in other water-table flow tests.
Sato, Tatsuhiko; Watanabe, Ritsuko; Niita, Koji
2006-01-01
Estimation of the specific energy distribution in a human body exposed to complex radiation fields is of great importance in the planning of long-term space missions and heavy-ion cancer therapies. With the aim of developing a tool for this estimation, the specific energy distributions in liquid water around the tracks of several HZE particles with energies up to 100 GeV n⁻¹ were calculated by performing track structure simulation with the Monte Carlo technique. In the simulation, the targets were assumed to be spherical sites with diameters from 1 nm to 1 μm. An analytical function to reproduce the simulation results was developed in order to predict the distributions of all kinds of heavy ions over a wide energy range. The incorporation of this function into the Particle and Heavy Ion Transport code System (PHITS) enables us to calculate the specific energy distributions in complex radiation fields in a short computational time.
Numerical simulation of a full-loop circulating fluidized bed under different operating conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yupeng; Musser, Jordan M.; Li, Tingwen
Both experimental and computational studies of the fluidization of high-density polyethylene (HDPE) particles in a small-scale full-loop circulating fluidized bed are conducted. Experimental measurements of pressure drop are taken at different locations along the bed. The solids circulation rate is measured with an advanced Particle Image Velocimetry (PIV) technique. The bed height of the quasi-static region in the standpipe is also measured. Comparative numerical simulations are performed with a Computational Fluid Dynamics solver utilizing a Discrete Element Method (CFD-DEM). This paper reports a detailed and direct comparison between CFD-DEM results and experimental data for realistic gas-solid fluidization in a full-loop circulating fluidized bed system. The comparison reveals good agreement with respect to system component pressure drop and inventory height in the standpipe. In addition, the effect of different drag laws applied within the CFD simulation is examined and compared with experimental results.
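Closures of the kind compared above tie the gas-solid drag to voidage and slip velocity; the Ergun correlation for dense beds is a classic example (the abstract does not state which drag laws were examined, and the values below are illustrative, not the paper's operating conditions):

```python
def ergun_pressure_gradient(U, eps, d_p, mu, rho):
    """Ergun correlation for the pressure gradient across a dense particle bed:
    a viscous (Blake-Kozeny) term plus an inertial (Burke-Plummer) term."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * U / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * U ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# Illustrative values (assumed, not measured): air at a superficial velocity of
# 0.5 m/s through a bed of ~0.87 mm particles at voidage 0.45.
dp_dz = ergun_pressure_gradient(U=0.5, eps=0.45, d_p=0.87e-3, mu=1.8e-5, rho=1.2)
print(dp_dz)   # pressure gradient in Pa per metre of bed
```

In a CFD-DEM solver, such a correlation is typically recast as a per-cell interphase momentum exchange coefficient, often blended with a dilute-regime law (e.g. Wen-Yu) above a voidage threshold.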
NASA Astrophysics Data System (ADS)
Sang, Chaofeng; Sun, Jizhong; Ren, Chunsheng; Wang, Dezhen
2009-02-01
A self-consistent particle-in-cell model, one-dimensional in position and three-dimensional in velocity space, with the Monte Carlo collision technique was employed to simulate the argon discharge between needle and plane electrodes at high pressure, in which a nanosecond rectangular pulse was applied to the needle electrode. The work focused on the investigation of the spatiotemporal evolution of the discharge as a function of the needle tip size and working gas pressure. The simulation results showed that the discharge occurred mainly in the region near the needle tip at atmospheric pressure, and that a small needle-tip radius made the discharge easier to initiate. Reducing the gas pressure gave rise to a transition from a corona discharge to a glow-like discharge along the needle-to-plane direction. The microscopic mechanism for the transition can arguably be attributed to the peak of high-energy electrons occurring before the breakdown; the number of these electrons determined whether the breakdown could take place.
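A particle-in-cell step with Monte Carlo collisions pairs a field-driven push with a per-particle stochastic collision test. The sketch below is heavily simplified: the paper's model is 1D3V with real argon cross-sections and a self-consistent field, whereas here the constant collision frequency, the 1D velocity, and the velocity-reset "collision" are placeholder assumptions:

```python
import numpy as np

def pic_mcc_step(x, v, efield, nu, dt, qm, rng):
    """One simplified PIC-MCC update: accelerate each particle in the local
    field, move it, then collide it with probability 1 - exp(-nu * dt).
    The velocity reset below is a crude placeholder for a real collision model."""
    v = v + qm * efield(x) * dt              # field push (leapfrog-style kick)
    x = x + v * dt                           # drift
    hit = rng.random(x.size) < 1.0 - np.exp(-nu * dt)
    v = np.where(hit, 0.0, v)                # placeholder energy-loss collision
    return x, v

rng = np.random.default_rng(2)
x, v = np.zeros(100), np.zeros(100)
for _ in range(10):                          # collisionless check: nu = 0
    x, v = pic_mcc_step(x, v, lambda s: np.ones_like(s), nu=0.0, dt=0.1,
                        qm=1.0, rng=rng)
print(v[0])   # free acceleration: v grows by qm*E*dt each step, to ~1.0
```

In a full PIC-MCC code the field would be recomputed each step from the charge density deposited on the grid, and the collision branch would sample argon elastic, excitation, and ionization channels from energy-dependent cross-sections.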
Gyrokinetic simulation of ITG modes in a three-mode coupling model
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Lee, W. W.
2004-11-01
A three-mode coupling model of ITG modes with adiabatic electrons is studied both analytically and numerically in 2-dimensional slab geometry using the gyrokinetic formalism. It can be shown analytically that the (quasilinear) saturation amplitude of the waves in the system should be enhanced by the inclusion of the parallel velocity nonlinearity in the governing gyrokinetic equation. The effect of this (frequently neglected) nonlinearity on the steady-state transport properties of the plasma is studied numerically using standard gyrokinetic particle simulation techniques. The balance [1] between various steady-state transport properties of the model (particle and heat flux, entropy production, and collisional dissipation) is examined. Effects resulting from the inclusion of nonadiabatic electrons in the model are also considered numerically, making use of the gyrokinetic split-weight scheme [2] in the simulations. [1] W. W. Lee and W. M. Tang, Phys. Fluids 31, 612 (1988). [2] I. Manuilskiy and W. W. Lee, Phys. Plasmas 7, 1381 (2000).
NASA Astrophysics Data System (ADS)
Thakur, Siddharth; Neal, Chris; Mehta, Yash; Sridharan, Prasanth; Jackson, Thomas; Balachandar, S.
2017-01-01
Microscale simulations are being conducted to develop point-particle and other related models needed for mesoscale and macroscale simulations of explosive dispersal of particles. These particle models are required to compute (a) the instantaneous aerodynamic force on a particle and (b) the instantaneous net heat transfer between a particle and its surroundings. A strategy for a sequence of microscale simulations has been devised that allows systematic development of hybrid surrogate models applicable at conditions representative of the explosive dispersal application. The ongoing microscale simulations examine the dependence of the particle force on (a) Mach number, (b) Reynolds number, and (c) volume fraction (for different particle arrangements such as cubic, face-centered cubic (FCC), body-centered cubic (BCC), and random). Future plans include sequences of fully resolved microscale simulations of arrays of particles subjected to more realistic time-dependent flows that progressively better approximate the actual problem of explosive dispersal. Additionally, the effects of particle shape, size, and number in the simulation are being investigated, as is the dependence of transient particle deformation on various parameters, including (a) particle material, (b) medium material, (c) the presence of multiple particles, (d) incoming shock pressure and speed, (e) medium-to-particle impedance ratio, and (f) particle shape and orientation relative to the shock.
Misut, Paul
2014-01-01
A three-dimensional groundwater-flow model is coupled with the particle-tracking program MODPATH to delineate zones of contribution to wells pumping from the Magothy aquifer and supplying water to a chlorinated volatile organic compound removal plant at site GM–38, Naval Weapons Industrial Reserve Plant, Bethpage, New York. Using driller’s logs, a transition-probability approach generated three alternative realizations of heterogeneity within the Magothy aquifer to assess uncertainty in the model representation. Finer-grained sediments with low hydraulic conductivity were realized as laterally discontinuous, thickening towards the south, and comprising about 17 percent of the total aquifer volume. Particle-tracking evaluations of a steady-state, present-conditions model with alternative heterogeneity realizations were used to develop zones of contribution of remedial pumping wells. Because of heterogeneity and high rates of advection within the coarse-grained sediments, transport by dispersion and (or) diffusion was assumed to be negligible. The resulting zones of contribution of existing remedial wells are complex shapes, influenced by the heterogeneity of each realization and by other nearby hydrologic stresses. The use of two particle-tracking techniques helped identify zones of contribution to wells. Backtracking techniques, and observations of the points of intersection of backward-tracked particles with shells of the GM–38 Hot Spot as defined by surfaces of equal total volatile organic compound concentration, identified the source of water within the GM–38 Hot Spot to simulated wells. Forward-tracking techniques identified the fate of water within the GM–38 Hot Spot, including well capture and discharge to model constant-head and drain boundaries.
The percentage of backward-tracked particles, started at GM–38 wells, that were sourced from within the Hot Spot varied from 72.0 to 98.2, depending on the Hot Spot delineation used (present steady-state model and Magothy aquifer heterogeneity realization A). The percentage of forward-tracked particles that were captured by GM–38 wells varied from 81.1 to 94.6, depending on the Hot Spot delineation used, with the remainder primarily captured by Bethpage Water District Plant 4 production wells (present steady-state model and Magothy aquifer heterogeneity realization A). Less than 1 percent of forward-tracked particles ultimately discharged at model constant-head and drain boundaries. The differences between the forward- and backward-tracked particle percentage ranges are due to some forward-tracked particles not being captured by GM–38 wells, and to some backward-tracked particles not intersecting specific regions of the Hot Spot. During 2013, an aquifer test generated detailed time series of well pumping rates, and the corresponding water-level responses were recorded at numerous locations. These data were used to verify the present-conditions steady-state model and to demonstrate the sensitivity of model results to transient-state changes.
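The forward/backward duality of the particle-tracking analysis above can be sketched with a simple Euler tracer: following the flow gives the fate of water, reversing it gives the source. This is a toy stand-in for MODPATH's semi-analytical scheme, and the uniform southward velocity field is an assumption for illustration only.

```python
import numpy as np

def track(p0, velocity, dt, n_steps, direction=+1):
    """Trace a particle through a steady velocity field with Euler steps.

    direction=+1 follows the flow (forward tracking, fate of water);
    direction=-1 reverses it (backward tracking, source of water).
    """
    path = [np.asarray(p0, dtype=float)]
    for _ in range(n_steps):
        path.append(path[-1] + direction * dt * velocity(path[-1]))
    return np.array(path)

# uniform southward flow as a toy aquifer
v = lambda p: np.array([0.0, -1.0])
fwd = track([0.0, 0.0], v, dt=0.5, n_steps=10)
bwd = track(fwd[-1], v, dt=0.5, n_steps=10, direction=-1)
# backward tracking from the forward endpoint recovers the starting point
```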
NASA Technical Reports Server (NTRS)
Mackay, N. G.; Green, S. F.; Gardner, D. J.; Mcdonnell, J. A. M.
1995-01-01
Interpretation of the wealth of impact data available from the Long Duration Exposure Facility (LDEF), in terms of the absolute and relative populations of space debris and natural micrometeoroids, requires three-dimensional models of the distribution of impact directions, velocities, and masses of such particles, as well as an understanding of the impact processes. Although the stabilized orbit of LDEF provides limited directional information, it is possible to determine more accurate impact directions from detailed crater morphology. The applicability of this technique has already been demonstrated, but the relationship between crater shape and impactor direction and velocity has not been derived in detail. We present the results of impact experiments and simulations: (1) impacts at micron dimensions using the Unit's 2MV Van de Graaff accelerator; (2) impacts at mm dimensions using a Light Gas Gun; and (3) computer simulations using AUTODYN-3D, from which we aim to derive an empirical relationship between crater shape and impactor velocity, direction, and particle properties. Such a relationship can be applied to any surface exposed to space debris or micrometeoroid particles for which a detailed pointing history is available.
NASA Astrophysics Data System (ADS)
Pham, Ngoc; Papavassiliou, Dimitrios
2014-03-01
In this study, the transport behavior of nanoparticles under different pore surface conditions of consolidated Berea sandstone is numerically investigated. A micro-CT scanning technique is applied to obtain 3D grayscale images of the rock sample geometry. Quantitative characterization based on image analysis is performed to obtain physical properties of the pore network, such as the pore size distribution and the type of each pore (dead-end, isolated, or fully connected). Transport of water through the rock is simulated by employing a 3D lattice Boltzmann method. The trajectories of nanoparticles moving under convection in the simulated flow field and under molecular diffusion are monitored in the Lagrangian framework. It is assumed in the model that particle adsorption on the pore surface, modeled as a pseudo-first-order process, is the only factor hindering particle propagation. The effect of pore surface heterogeneity on particle breakthrough is considered, and the role of particle radial diffusion is also addressed in detail. The financial support of the Advanced Energy Consortium (AEC BEG08-022) and the computational support of XSEDE (CTS090017) are acknowledged.
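The Lagrangian scheme described above, advection plus Brownian motion with a pseudo-first-order adsorption sink, can be sketched as a random-walk particle-tracking step. The plug-flow velocity, diffusivity, and rate constant below are illustrative assumptions, not values from the study.

```python
import numpy as np

def rwpt_step(x, u, D, k_ads, dt, rng):
    """One random-walk particle-tracking step with pseudo-first-order adsorption.

    x     : mobile-particle positions, shape (N, 3)
    u     : local fluid velocity, a callable u(x) -> (N, 3)
    D     : molecular diffusivity
    k_ads : adsorption rate; a particle sticks with probability 1 - exp(-k_ads*dt)
    Returns the positions of particles that remain mobile.
    """
    x = x + u(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
    stay = rng.random(x.shape[0]) >= 1.0 - np.exp(-k_ads * dt)
    return x[stay]

rng = np.random.default_rng(1)
x = np.zeros((10000, 3))
u = lambda x: np.tile([1e-4, 0.0, 0.0], (x.shape[0], 1))  # plug flow, 0.1 mm/s
for _ in range(50):
    x = rwpt_step(x, u, D=1e-9, k_ads=5.0, dt=1e-3, rng=rng)
# surviving mobile fraction ~ exp(-k_ads * t) = exp(-0.25), i.e. about 78%
```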
Spectral Kinetic Simulation of the Ideal Multipole Resonance Probe
NASA Astrophysics Data System (ADS)
Gong, Junbo; Wilczek, Sebastian; Szeremley, Daniel; Oberrath, Jens; Eremin, Denis; Dobrygin, Wladislaw; Schilling, Christian; Friedrichs, Michael; Brinkmann, Ralf Peter
2015-09-01
The term Active Plasma Resonance Spectroscopy (APRS) denotes a class of diagnostic techniques which utilize the natural ability of plasmas to resonate on or near the electron plasma frequency ωpe: An RF signal in the GHz range is coupled into the plasma via an electric probe; the spectral response of the plasma is recorded, and a mathematical model is used to determine plasma parameters such as the electron density ne or the electron temperature Te. One particular realization of the method is the Multipole Resonance Probe (MRP). The ideal MRP is a geometrically simplified version of that probe; it consists of two dielectrically shielded, hemispherical electrodes to which the RF signal is applied. A particle-based numerical algorithm is described which enables a kinetic simulation of the interaction of the probe with the plasma. Similar to the well-known particle-in-cell (PIC) method, it consists of two modules, a particle pusher and a field solver. The Poisson solver determines, with the help of a truncated expansion into spherical harmonics, the new electric field at each particle position directly, without invoking a numerical grid. The effort of the scheme scales linearly with the ensemble size N.
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2017-09-01
Aiming at high simulation accuracy, a Particle-Particle (PP) Coulombic molecular dynamics model is implemented to study electron-ion temperature relaxation. In this model, Coulomb's law is applied directly in a bounded system with two cutoffs, one at a short and one at a long length scale. By increasing the range between the two cutoffs, it is found that the relaxation rate deviates from the BPS theory and approaches the LS theory and the GMS theory. The effective minimum and maximum impact parameters (bmin* and bmax*) are also obtained. For the simulated plasma condition, bmin* is about 6.352 times smaller than the Landau length (bC), and bmax* is about 2 times larger than the Debye length (λD), where bC and λD are used in the LS theory. Surprisingly, the effective relaxation time obtained from the PP model is very close to the LS theory and the GMS theory, even though the effective Coulomb logarithm is two times greater than the one used in the LS theory. Finally, this work shows that the PP model, commonly regarded as computationally expensive, is becoming practicable via GPU parallel computing techniques.
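The core of a PP model with two cutoffs is a pairwise Coulomb sum that simply discards pairs closer than the short cutoff or farther than the long one. A minimal energy version (an O(N²) illustration with made-up charges and cutoffs, not the paper's implementation) is:

```python
import numpy as np

def coulomb_energy(pos, q, b_min, b_max):
    """Pairwise Coulomb energy with hard short- and long-range cutoffs.

    Pairs with separation outside [b_min, b_max] are excluded, mimicking
    the two cutoffs of the particle-particle (PP) model described above.
    """
    k = 8.9875517923e9  # Coulomb constant, N m^2 / C^2
    E = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if b_min <= r <= b_max:
                E += k * q[i] * q[j] / r
    return E

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
q = np.ones(3)
E = coulomb_energy(pos, q, b_min=0.5, b_max=2.0)
# only the r=1 and r=2 pairs contribute: E = k * (1 + 1/2)
```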
Finite-size corrections in simulation of dipolar fluids
NASA Astrophysics Data System (ADS)
Belloni, Luc; Puibasset, Joël
2017-12-01
Monte Carlo simulations of dipolar fluids are performed at different numbers of particles N = 100-4000. For each size of the cubic cell, the non-spherically symmetric pair distribution function g(r,Ω) is accumulated in terms of projections gmnl(r) onto rotational invariants. The observed N dependence is in very good agreement with the theoretical predictions for the finite-size corrections of different origins: the explicit corrections due to the absence of fluctuations in the number of particles within the canonical simulation, and the implicit corrections due to the coupling between the environment around a given particle and that around its images in the neighboring cells. The latter dominates in fluids of strong dipolar coupling characterized by low compressibility and high dielectric constant. The ability to clean the simulation data of these corrections with great precision, combined with the use of very powerful anisotropic integral equation techniques, means that exact correlation functions in both real and Fourier space, Kirkwood-Buff integrals, and bridge functions can be derived from box sizes as small as N ≈ 100, even in the presence of long-range tails. In the presence of a dielectric discontinuity with the external medium surrounding the central box and its replicas within the Ewald treatment of the Coulombic interactions, the 1/N dependence of the gmnl(r) is shown to disagree with the well-accepted prediction in the literature.
Camplani, M; Malizia, A; Gelfusa, M; Barbato, F; Antonelli, L; Poggi, L A; Ciparisse, J F; Salgado, L; Richetta, M; Gaudio, P
2016-01-01
In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices; one of the main issues is that they can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, directed transversely to the flow field. In the STARDUST facility, the dust moving in the flow causes variations in refractive index that can be detected using a CCD camera. The STARDUST fast-camera setup allows moving dust particles in the vessel to be detected and tracked, yielding information about the velocity field of the mobilized dust. In particular, the acquired images are processed so that, in each frame, moving dust particles are detected by applying a background subtraction technique based on a mixture-of-Gaussians algorithm. The resulting foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm tracks the detected particles over the course of the experiment. A Kalman filter-based tracker is applied to each particle, with the particle dynamics described by position, velocity, and acceleration as state variables. The results demonstrate that dust particle velocity fields during a LOVA can be obtained by automatically processing the data acquired with the shadowgraph approach.
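The tracker described above uses a Kalman filter with position, velocity, and acceleration as state variables. A one-coordinate sketch of such a constant-acceleration filter is shown below; the noise levels, frame rate, and the noiseless synthetic detections are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def kalman_matrices(dt, q=1e-2, r=1.0):
    """Constant-acceleration model for one image coordinate of a dust particle.

    State = (position, velocity, acceleration); q and r are illustrative
    process/measurement noise levels.
    """
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])   # only position is detected in the image
    return F, H, q * np.eye(3), np.array([[r]])

def kalman_step(x, P, z, F, H, Q, R):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain
    x = x + K @ (np.atleast_1d(z) - H @ x)         # update with detection z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F, H, Q, R = kalman_matrices(dt=1.0)
x, P = np.zeros(3), 10.0 * np.eye(3)
for t in range(1, 30):                             # particle moving at 2 px/frame
    x, P = kalman_step(x, P, 2.0 * t, F, H, Q, R)
```

After thirty frames of clean linear motion the filter's position and velocity estimates settle close to the true values (58 px, 2 px/frame).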
NASA Astrophysics Data System (ADS)
Piñero, G.; Vergara, L.; Desantes, J. M.; Broatch, A.
2000-11-01
The knowledge of the particle velocity fluctuations associated with acoustic pressure oscillations in the exhaust system of internal combustion engines may represent a powerful aid in the design of such systems, from the point of view of both engine performance improvement and exhaust noise abatement. However, the usual velocity measurement techniques, even if applicable, are not well suited to the aggressive environment existing in exhaust systems. In this paper, a method to obtain a suitable estimate of the velocity fluctuations is proposed, based on the application of spatial filtering (beamforming) techniques to instantaneous pressure measurements. Making use of simulated pressure-time histories, several algorithms have been checked by comparison between the simulated and the estimated velocity fluctuations. Then, problems related to the experimental procedure and associated with the proposed methodology are addressed by applying the method to measurements made in a real exhaust system. The results indicate that, if proper care is taken when performing the measurements, the application of beamforming techniques gives a reasonable estimate of the velocity fluctuations.
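The idea of inferring particle velocity from pressure measurements has a much simpler two-sensor relative based on the 1D linearized Euler equation, u(t) = -(1/ρ)∫(∂p/∂x)dt, with the gradient approximated by a finite difference between two closely spaced sensors. The sketch below is not the paper's beamforming estimator; the sensor spacing, sampling rate, and plane-wave field are illustrative assumptions.

```python
import numpy as np

def velocity_from_pressures(p1, p2, dx, fs, rho=1.2):
    """Two-sensor estimate of acoustic particle velocity via Euler's equation.

    dp/dx ~ (p2 - p1)/dx, integrated in time by a running sum.
    """
    dpdx = (p2 - p1) / dx
    return -np.cumsum(dpdx) / (rho * fs)

fs, f, c, dx = 48000, 100.0, 343.0, 0.02
t = np.arange(4800) / fs
# plane wave p = cos(w*(t - x/c)); its exact particle velocity is p/(rho*c)
p1 = np.cos(2 * np.pi * f * t)
p2 = np.cos(2 * np.pi * f * (t - dx / c))
u = velocity_from_pressures(p1, p2, dx, fs)
# half the peak-to-peak of u recovers the plane-wave amplitude 1/(rho*c)
```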
Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G
2012-05-28
In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.
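The single-occupancy constraint above amounts to rejecting any Monte Carlo move that takes a particle outside its own cell. A minimal Metropolis sweep (a cubic cell as a stand-in for the true Wigner-Seitz cell, with a trivial energy function for illustration) is:

```python
import numpy as np

def constrained_sweep(pos, centers, half_width, delta, beta, energy, rng):
    """One Metropolis sweep of a single-occupancy (constrained cell) model.

    Each particle may move only within a cube of side 2*half_width centred
    on its own lattice site; `energy` gives the total potential energy.
    """
    n = pos.shape[0]
    for i in rng.permutation(n):
        trial = pos.copy()
        trial[i] += delta * rng.uniform(-1, 1, 3)
        if np.any(np.abs(trial[i] - centers[i]) > half_width):
            continue                      # rejected: violates the cell constraint
        dE = energy(trial) - energy(pos)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            pos = trial
    return pos

rng = np.random.default_rng(2)
centers = np.array([[i, j, k] for i in range(2)
                    for j in range(2) for k in range(2)], float)
pos = constrained_sweep(centers.copy(), centers, 0.3, 0.2, 1.0,
                        lambda p: 0.0, rng)
# every particle remains inside its own cell after the sweep
```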
Simulation of Martian dust accumulation on surfaces
NASA Technical Reports Server (NTRS)
Perez-Davis, Marla E.; Gaier, James R.; Kress, Robert; Grimalda, Justus
1990-01-01
Future NASA space missions include the possibility of manned landings on and exploration of Mars. Environmental and operational constraints unique to Mars must be considered when selecting and designing the power system to be used on the Martian surface. A technique is described which was developed to simulate the deposition of dust on surfaces. Three kinds of dust materials were studied: aluminum oxide, basalt, and iron oxide. The apparatus was designed using the Stokes and Stokes-Cunningham laws for particle fallout, with additional consideration given to particle size and shape. Characterization of the resulting dust films on silicon dioxide, polytetrafluoroethylene, indium tin oxide, diamondlike carbon, and other surfaces is discussed based on optical transmittance measurements. The results of these experiments will guide future studies which will consider processes to remove the dust from surfaces under Martian environmental conditions.
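The Stokes-Cunningham law used to size the fallout apparatus gives the terminal settling velocity v = (ρp − ρg) g d² Cc / (18 μ), with the Cunningham slip correction Cc = 1 + Kn(1.257 + 0.400 e^(−1.1/Kn)). A short sketch follows; the grain size and the CO2 property values are illustrative assumptions, not the paper's parameters.

```python
import math

def settling_velocity(d, rho_p, rho_g, mu, mfp, g=3.71):
    """Stokes-Cunningham terminal settling velocity of a dust grain.

    d   : particle diameter (m)
    mfp : gas mean free path (m); in thin Martian CO2 it is micrometre-scale
    g   : Mars surface gravity by default
    The 1.257/0.400/1.1 coefficients are the common Cunningham fit.
    """
    kn = 2.0 * mfp / d
    cc = 1.0 + kn * (1.257 + 0.400 * math.exp(-1.1 / kn))
    return (rho_p - rho_g) * g * d**2 * cc / (18.0 * mu)

# a 10 um basalt grain in low-pressure CO2 (illustrative property values)
v = settling_velocity(d=10e-6, rho_p=3000.0, rho_g=0.02, mu=1.3e-5, mfp=5e-6)
# v comes out at roughly a centimetre per second
```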
Model for the anisotropic reentry of albedo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koenig, P.J.
1981-02-01
The trajectory-tracing technique was used to obtain the angles of incidence, and hence 'intensities,' of negatively charged 0.88-GV particles reentrant at Palestine, Texas. Splash albedo trajectories were traced from the conjugate point, and also from Palestine itself, for those trajectories that were unable to complete a full gyration before reentry into the shadow cone at Palestine. Both isotropic and anisotropic ejection configurations were used at these two locations. These simulations predict a north-south anisotropy (hence also a zenithal anisotropy) for reentrant albedo, with a dearth of trajectories incident from the south. The anisotropy is large enough to explain experimentally determined north-south anisotropies for lower-energy particles, as observed by other groups in the Northern Hemisphere. The results are in agreement with measurements and simulations previously obtained in the Southern Hemisphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Rafael Alves; Dundovic, Andrej; Sigl, Guenter
2016-05-01
We present the simulation framework CRPropa version 3, designed for efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules of the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and obtain as output primary and secondary cosmic messengers including nuclei, neutrinos, and photons. Extending the propagation physics contained in the previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time-dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three-component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimate of the steady-state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error of less than 12% of the average velocity magnitude.
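Fitting a simulated field to measured PIV data by least squares can be reduced to its simplest one-parameter form: a closed-form scale factor minimizing ||a·sim − piv||². The synthetic fields and the factor-of-3 model mismatch below are illustrative assumptions, not the paper's optimization problem.

```python
import numpy as np

def fit_field_scale(sim_field, piv_field):
    """Least-squares scale factor aligning a simulated field with PIV data.

    Minimises || a * sim - piv ||^2 over the scalar a (closed form).
    """
    s = sim_field.ravel()
    p = piv_field.ravel()
    return float(s @ p) / float(s @ s)

def rms_error(sim_field, piv_field):
    return float(np.sqrt(np.mean((sim_field - piv_field) ** 2)))

rng = np.random.default_rng(3)
truth = rng.standard_normal((32, 32))
sim = truth / 3.0                         # model under-predicts velocities by 3x
piv = truth + 0.05 * rng.standard_normal((32, 32))
a = fit_field_scale(sim, piv)
baseline = rms_error(sim, piv)
improved = rms_error(a * sim, piv)
# rescaling the simulation cuts the RMS mismatch by far more than a factor of 4
```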
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
Measurements of Deposition, Lung Surface Area and Lung Fluid for Simulation of Inhaled Compounds.
Fröhlich, Eleonore; Mercuri, Annalisa; Wu, Shengqian; Salar-Behzadi, Sharareh
2016-01-01
Modern strategies in drug development employ in silico techniques in the design of compounds as well as estimations of pharmacokinetics, pharmacodynamics and toxicity parameters. The quality of the results depends on software algorithm, data library and input data. Compared to simulations of absorption, distribution, metabolism, excretion, and toxicity of oral drug compounds, relatively few studies report predictions of pharmacokinetics and pharmacodynamics of inhaled substances. For calculation of the drug concentration at the absorption site, the pulmonary epithelium, physiological parameters such as lung surface and distribution volume (lung lining fluid) have to be known. These parameters can only be determined by invasive techniques and by postmortem studies. Very different values have been reported in the literature. This review addresses the state of software programs for simulation of orally inhaled substances and focuses on problems in the determination of particle deposition, lung surface and of lung lining fluid. The different surface areas for deposition and for drug absorption are difficult to include directly into the simulations. As drug levels are influenced by multiple parameters the role of single parameters in the simulations cannot be identified easily.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Shastry, Karthik; Satyal, Suman; Weiss, Alexander
2011-10-01
Time of Flight Positron Annihilation Induced Auger Electron Spectroscopy (T-O-F PAES) is a highly surface selective analytical technique in which elemental identification is accomplished through a measurement of the flight time distributions of Auger electrons resulting from the annihilation of core electron by positrons. SIMION charged particle optics simulation software was used to model the trajectories both the incident positrons and outgoing electrons in our existing T-O-F PAES system as well as in a new system currently under construction in our laboratory. The implication of these simulation regarding the instrument design and performance are discussed.
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N–1). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
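The benefit of independent simulations over one long correlated run can be demonstrated with a toy model in which per-generation tallies follow an AR(1) process (a common stand-in for intergenerational correlation). The correlation value, run sizes, and the omission of discarded generations are illustrative assumptions; in practice the discard cost offsets part of the gain, as the abstract notes.

```python
import numpy as np

def grand_mean_variance(n_gen, n_sims, rho, n_rep=400, seed=4):
    """Empirical variance of the combined tally mean when per-generation
    tallies follow an AR(1) process with intergenerational correlation rho.

    Every configuration performs the same total work (n_gen * n_sims
    generations); source convergence/discards are ignored for simplicity.
    """
    rng = np.random.default_rng(seed)
    means = np.empty(n_rep)
    for rep in range(n_rep):
        total = 0.0
        for s in range(n_sims):
            x = rng.standard_normal()          # start in the stationary state
            run = np.empty(n_gen)
            for g in range(n_gen):
                x = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal()
                run[g] = x
            total += run.mean()
        means[rep] = total / n_sims
    return means.var()

# 256 generations of total work: one long correlated run vs 16 independent runs
v_single = grand_mean_variance(256, 1, rho=0.9)
v_ensemble = grand_mean_variance(16, 16, rho=0.9)
# the ensemble's grand mean has markedly lower variance for the same work
```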
Computer simulations of equilibrium magnetization and microstructure in magnetic fluids
NASA Astrophysics Data System (ADS)
Rosa, A. P.; Abade, G. C.; Cunha, F. R.
2017-09-01
In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres with identical diameters. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, replicating a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and from Ewald sums is performed using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate the suspension microstructure by computing the important radial pair-distribution function go(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results for go(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and of dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures such as dimer and trimer chains, with trimers having a formation probability an order of magnitude lower than that of dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function is different from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions is explored for both boundary conditions.
Results show that at dilute regimes and with moderate dipole-dipole interactions, the standard method of minimum image is both accurate and computationally efficient. Otherwise, lattice sums of magnetic particle interactions are required to accelerate convergence of the equilibrium magnetization. The accuracy of the numerical code is also quantitatively verified by comparing the magnetization obtained from numerical results with asymptotic predictions of high order in the particle volume fraction, in the presence of dipole-dipole interactions. In addition, Brownian Dynamics simulations are used in order to examine magnetization relaxation of a ferrofluid and to calculate the magnetic relaxation time as a function of the magnetic particle interaction strength for a given particle volume fraction and a non-dimensional applied field. The simulations of magnetization relaxation have shown the existence of a critical value of the dipole-dipole interaction parameter. For strength of the interactions below the critical value at a given particle volume fraction, the magnetic relaxation time is close to the Brownian relaxation time and the suspension has no appreciable memory. On the other hand, for strength of dipole interactions beyond its critical value, the relaxation time increases exponentially with the strength of dipole-dipole interaction. Although we have considered equilibrium conditions, the obtained results have far-reaching implications for the analysis of magnetic suspensions under external flow.
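The minimum image convention compared against Ewald sums above is itself a one-liner for a cubic periodic box: each component of a pair separation is shifted by a whole number of box lengths so that the nearest periodic image is used.

```python
import numpy as np

def minimum_image(r_ij, box):
    """Minimum image displacement vector in a cubic periodic box of side `box`.

    Each component is wrapped so that the closest periodic image of the
    second particle is used when evaluating pair interactions.
    """
    return r_ij - box * np.round(r_ij / box)

box = 10.0
r = np.array([9.0, -9.5, 0.2])
d = minimum_image(r, box)   # -> [-1.0, 0.5, 0.2]
```

As the abstract notes, this cheap convention is adequate only for dilute systems with moderate dipolar coupling; otherwise lattice (Ewald) sums are needed.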
De Backer, A; Martinez, G T; MacArthur, K E; Jones, L; Béché, A; Nellist, P D; Van Aert, S
2015-04-01
Quantitative annular dark field scanning transmission electron microscopy (ADF STEM) has become a powerful technique to characterise nano-particles on an atomic scale. Because of their limited size and beam sensitivity, the atomic structure of such particles may become extremely challenging to determine. Therefore keeping the incoming electron dose to a minimum is important. However, this may reduce the reliability of quantitative ADF STEM which will here be demonstrated for nano-particle atom-counting. Based on experimental ADF STEM images of a real industrial catalyst, we discuss the limits for counting the number of atoms in a projected atomic column with single atom sensitivity. We diagnose these limits by combining a thorough statistical method and detailed image simulations.
Phoretic and Radiometric Force Measurements on Microparticles in Microgravity Conditions
NASA Technical Reports Server (NTRS)
Davis, E. James
1996-01-01
Thermophoretic, diffusiophoretic and radiometric forces on microparticles are being measured over a wide range of gas phase and particle conditions using electrodynamic levitation of single particles to simulate microgravity conditions. The thermophoretic force, which arises when a particle exists in a gas having a temperature gradient, is measured by levitating an electrically charged particle between heated and cooled plates mounted in a vacuum chamber. The diffusiophoretic force arising from a concentration gradient in the gas phase is measured in a similar manner except that the heat exchangers are coated with liquids to establish a vapor concentration gradient. These phoretic forces and the radiation pressure force acting on a particle are measured directly in terms of the change in the dc field required to levitate the particle with and without the force applied. The apparatus developed for the research and the experimental techniques are discussed, and results obtained by thermophoresis experiments are presented. The determination of the momentum and energy accommodation coefficients associated with molecular collisions between gas molecules and particles and the measurement of the interaction between electromagnetic radiation and small particles are of particular interest.
Electrostatic interactions in soft particle systems: mesoscale simulations of ionic liquids.
Wang, Yong-Lei; Zhu, You-Liang; Lu, Zhong-Yuan; Laaksonen, Aatto
2018-05-21
Computer simulations provide a unique insight into the microscopic details, molecular interactions and dynamic behavior responsible for many distinct physicochemical properties of ionic liquids. Due to the sluggish and heterogeneous dynamics and the long-ranged nanostructured nature of ionic liquids, coarse-grained meso-scale simulations provide an indispensable complement to detailed first-principles calculations and atomistic simulations, allowing studies over extended length and time scales with a modest computational cost. Here, we present extensive coarse-grained simulations on a series of ionic liquids of the 1-alkyl-3-methylimidazolium (alkyl = butyl, heptyl, and decyl) family with Cl, [BF4], and [PF6] counterions. Liquid densities, microstructures, translational diffusion coefficients, and re-orientational motion of these model ionic liquid systems have been systematically studied over a wide temperature range. The addition of neutral beads in cationic models leads to a transition of liquid morphologies from dispersed apolar beads in a polar framework to that characterized by bi-continuous sponge-like interpenetrating networks in liquid matrices. Translational diffusion coefficients of both cations and anions decrease upon lengthening of the neutral chains in the cationic models and by enlarging molecular sizes of the anionic groups. Similar features are observed in re-orientational motion and time scales of different cationic models within the studied temperature range. The comparison of the liquid properties of the ionic systems with their neutral counterparts indicates that the distinctive microstructures and dynamical quantities of the model ionic liquid systems are intrinsically related to Coulombic interactions.
Finally, we compared the computational efficiencies of three O(N log N) Ewald summation methods for calculating electrostatic interactions: the particle-particle particle-mesh method, the particle-mesh Ewald summation method, and an Ewald summation method based on a non-uniform fast Fourier transform technique. Coarse-grained simulations were performed using the GALAMOST and GROMACS packages, efficiently utilizing graphics processing unit hardware, on a set of extended [1-decyl-3-methylimidazolium][BF4] ionic liquid systems of up to 131 072 ion pairs.
Diagnostics and characterization of nanodust and nanodusty plasmas★
NASA Astrophysics Data System (ADS)
Greiner, Franko; Melzer, Andrè; Tadsen, Benjamin; Groth, Sebastian; Killer, Carsten; Kirchschlager, Florian; Wieben, Frank; Pilch, Iris; Krüger, Harald; Block, Dietmar; Piel, Alexander; Wolf, Sebastian
2018-05-01
Plasmas growing or containing nanometric dust particles are widely used and proposed in plasma technological applications for the production of nano-crystals and for surface deposition. Here, we give a compact review of in situ methods for the diagnostics of nanodust and nanodusty plasmas, which have been developed in the framework of the SFB-TR24 to fully characterize these systems. The methods include kinetic Mie ellipsometry, angular-resolved Mie scattering, and 2D imaging Mie ellipsometry to get information about particle growth processes, particle sizes and particle size distributions. The role of multiple scattering events is also analyzed using radiative transfer simulations. Computed tomography and Abel inversion techniques to get the 3D dust density profiles of the particle cloud will be presented. Diagnostics of the dust dynamics yields fundamental dust and plasma properties like particle charges and electron and ion densities. Since nanodusty plasmas usually form dense dust clouds, electron depletion (the Havnes effect) is found to be significant.
NASA Astrophysics Data System (ADS)
Saitou, Y.
2018-01-01
An SPH (Smoothed Particle Hydrodynamics) simulation code is developed to reproduce our findings on the behavior of dust particles, which were obtained in our previous experiments (Phys. Plasmas 23, 013709 (2016) and Abst. 18th Intern. Cong. Plasma Phys. (Kaohsiung, 2016)). Usually, in an SPH simulation, a smoothed particle is interpreted as a discretized fluid element. Here we regard the particles as dust particles, because it is known that the behavior of dust particles in complex plasmas can in many cases be described using fluid dynamics equations. In the newly developed simulation, various rotation velocities that are difficult to achieve in the experiment are given to particles at the boundaries, and the motion of the particles is investigated. Preliminary results obtained by the simulation are shown.
Visualizing SPH Cataclysmic Variable Accretion Disk Simulations with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.; Wood, Matthew A.
2015-01-01
We present innovative ways to use Blender, a 3D graphics package, to visualize smoothed particle hydrodynamics particle data of cataclysmic variable accretion disks. We focus on the use of shape key data constructs to increase data i/o and manipulation speed. The implementation of the methods outlined allows for compositing of the various visualization layers into a final animation. Viewing the disk in 3D from different angles allows for a visual analysis of the physical system and orbits. The techniques have a wide-ranging set of applications in astronomical visualization, including both observational and theoretical data.
Shear Induced Structural Relaxation in a Supercooled Colloidal Liquid
NASA Astrophysics Data System (ADS)
Chen, Dandan; Semwogerere, Denis; Weeks, Eric R.
2009-11-01
Amorphous materials include many common products we use every day, such as window glass, moisturizer, shaving cream and peanut butter. These materials have liquid-like disordered structure, but keep their shapes like a solid. The rheology of dense amorphous materials under large shear strain is not fully understood, partly due to the difficulty of directly viewing the microscopic details of such materials. We use a colloidal suspension to simulate amorphous materials, and study the shear-induced structural relaxation with fast confocal microscopy. We quantify the plastic rearrangements of the particles using standard analysis techniques based on the motion of the particles.
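One standard measure of plastic rearrangement of the kind alluded to above is the nonaffine displacement D²_min of Falk and Langer, which subtracts the best-fit local affine deformation from the motion of each particle's neighborhood. A minimal sketch (the two-frame setup and the values are illustrative, not the authors' analysis code):

```python
import numpy as np

def d2min(d0, d1):
    """Falk-Langer nonaffine displacement D^2_min for one particle.
    d0, d1: (n_neighbors, dim) arrays of separation vectors to the
    neighbors at the earlier and later times."""
    X = d1.T @ d0                    # cross-correlation of the two frames
    Y = d0.T @ d0                    # reference-frame correlation
    J = X @ np.linalg.inv(Y)         # best-fit local affine deformation
    residual = d1 - d0 @ J.T         # motion not captured by the affine fit
    return float(np.sum(residual**2))

# Purely affine (uniform shear) motion leaves no nonaffine residual
d0 = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
shear = np.array([[1.0, 0.3], [0.0, 1.0]])
d1 = d0 @ shear.T
residual_affine = d2min(d0, d1)      # ~ 0 for a pure shear
```

A particle undergoing only the local affine deformation scores zero, so a large D²_min flags an irreversible plastic rearrangement.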
Facilitation of transscleral drug delivery by drug loaded magnetic polymeric particles.
Mousavikhamene, Zeynab; Abdekhodaie, Mohammad J; Ahmadieh, Hamid
2017-10-01
A unique method was used to facilitate ocular drug delivery via the periocular route using drug-loaded magnetic-sensitive particles. Injection of the particles into the periocular space along the eye axis, followed by application of a magnetic field in front of the eye, triggers the magnetic polymeric particles to move along the direction of the magnetic force and reside against the outer surface of the sclera. This technique prevents the removal of drug from the periocular space observed in conventional transscleral drug delivery systems, so a higher amount of drug can enter the eye over a longer period of time. The experiments were performed with fresh human sclera in an experimental setup built around a side-by-side diffusion cell, with hydrodynamic and thermal simulation of the posterior segment of the eye. Magnetic polymeric particles were synthesized using alginate as a model polymer, iron oxide nanoparticles as a magnetic agent, and diclofenac sodium as a model drug, and were characterized by SEM, TEM, DLS and FT-IR techniques. According to the SEM images, the particle sizes range from about 60 to 800 nm. The results revealed that the cumulative drug transfer across the sclera from magnetic-sensitive particles improves by 70% in the presence of a magnetic field. These results show that using magnetic properties to facilitate drug delivery to the back of the eye is a promising approach. Copyright © 2017. Published by Elsevier B.V.
TOPAS Tool for Particle Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perl, Joseph
2013-05-30
TOPAS lets users simulate the passage of subatomic particles moving through any kind of radiation therapy treatment system, can import a patient geometry, can record dose and other quantities, has advanced graphics, and is fully four-dimensional (3D plus time) to handle the most challenging time-dependent aspects of modern cancer treatments. TOPAS unlocks the power of the most accurate particle transport simulation technique, the Monte Carlo (MC) method, while removing the painstaking coding work such methods used to require. Research physicists can use TOPAS to improve delivery systems towards safer and more effective radiation therapy treatments, easily setting up and running complex simulations that previously took months of preparation. Clinical physicists can use TOPAS to increase accuracy while reducing side effects, simulating patient-specific treatment plans at the touch of a button. TOPAS is designed as a user code layered on top of the Geant4 Simulation Toolkit. TOPAS includes the standard Geant4 toolkit, plus additional code to make Geant4 easier to control and to extend Geant4 functionality. TOPAS aims to make proton simulation both reliable and repeatable. Reliable means both accurate physics and a high likelihood of simulating precisely what the user intended to simulate, reducing issues of wrong units, wrong materials, wrong scoring locations, etc. Repeatable means not just getting the same result from one simulation to another, but being able to easily restore a previously used setup and reducing sources of error when a setup is passed from one user to another. The TOPAS control system incorporates key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes in control files.
TOPAS has been used to model proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and has demonstrated dose calculation based on patient-specific CT data.
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
Experiment and simulation study of laser dicing silicon with water-jet
NASA Astrophysics Data System (ADS)
Bao, Jiading; Long, Yuhong; Tong, Youqun; Yang, Xiaoqing; Zhang, Bin; Zhou, Zupeng
2016-11-01
Water-jet laser processing is an advanced technique which combines the advantages of laser processing with those of water-jet cutting. In this study, water-jet laser dicing experiments were conducted with a 1064 nm nanosecond pulsed laser, and a Smoothed Particle Hydrodynamics (SPH) model was built in the AUTODYN software to study the fluid dynamics of the water and melt when the water jet impacts the molten material. The irradiated spots on the silicon surface exhibit a porous morphology with a large number of cavities, indicative of bubble nucleation sites. The observed surface morphology suggests that explosive melt expulsion could be the dominant process for laser ablation of silicon in liquids with 1064 nm nanosecond pulses. A self-focusing phenomenon was found and its causes are analyzed. The SPH model was employed to understand the effect of the water and water jet on debris removal during water-jet laser machining.
Fuzzy controller training using particle swarm optimization for nonlinear system control.
Karakuzu, Cihan
2008-04-01
This paper proposes and describes an effective utilization of particle swarm optimization (PSO) to train a Takagi-Sugeno (TS)-type fuzzy controller. Performance evaluation of the proposed fuzzy training method using the obtained simulation results is provided with two samples of highly nonlinear systems: a continuous stirred tank reactor (CSTR) and a Van der Pol (VDP) oscillator. The superiority of the proposed learning technique is that there is no need for a partial derivative with respect to the parameter for learning. This fuzzy learning technique is suitable for real-time implementation, especially if the system model is unknown and a supervised training cannot be run. In this study, all parameters of the controller are optimized with PSO in order to prove that a fuzzy controller trained by PSO exhibits a good control performance.
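A minimal global-best PSO of the kind used to tune controller parameters can be sketched as follows (the quadratic cost stands in for the actual fuzzy-controller performance index, and all parameter values are illustrative):

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology).
    Returns the best position found and its cost."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # candidate parameter sets
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best positions
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()             # swarm's global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy cost with known minimum at p = (1, 1, 1)
best, fbest = pso(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3)
```

Note the key point made in the abstract: the update rule uses only cost evaluations, never a derivative of the cost with respect to the parameters, which is why PSO suits controllers whose performance index is not differentiable or whose plant model is unknown.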
NASA Astrophysics Data System (ADS)
Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.
2017-10-01
The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
Integrated devices for quantum information and quantum simulation with polarization encoded qubits
NASA Astrophysics Data System (ADS)
Sansoni, Linda; Sciarrino, Fabio; Mataloni, Paolo; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto
2012-06-01
The ability to manipulate quantum states of light by integrated devices may open new perspectives both for fundamental tests of quantum mechanics and for novel technological applications. The technology for handling polarization-encoded qubits, the most commonly adopted approach, was still missing in quantum optical circuits until the ultrafast laser writing (ULW) technique was adopted for the first time to realize integrated devices able to support and manipulate polarization-encoded qubits.1 Thanks to this method, polarization dependent and independent devices can be realized. In particular, the maintenance of polarization entanglement was demonstrated in a balanced polarization-independent integrated beam splitter1 and an integrated CNOT gate for polarization qubits was realized and characterized.2 We also exploited integrated optics for quantum simulation tasks: by adopting the ULW technique an integrated quantum walk circuit was realized3 and, for the first time, we investigated how the particle statistics, either bosonic or fermionic, influences a two-particle discrete quantum walk. This experiment was realized by adopting two-photon entangled states and an array of integrated symmetric directional couplers. The polarization entanglement was exploited to simulate the bunching-antibunching feature of non-interacting bosons and fermions. To this scope a novel three-dimensional geometry for the waveguide circuit is introduced, which allows accurate polarization-independent behaviour, maintaining a remarkable control on both the phase and balance of the directional couplers.
Concept and numerical simulations of a reactive anti-fragment armour layer
NASA Astrophysics Data System (ADS)
Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip
2017-07-01
The contribution describes the concept and numerical simulation of a ballistic protective layer which is able to actively resist projectiles or smaller colliding fragments flying at high speed. The principle of the layer was designed on the basis of the action/reaction system of reactive armour which is used for the protection of armoured vehicles. As the designed ballistic layer consists of steel plates simultaneously combined with explosive material - primary explosive and secondary explosive - the technique of coupling the Finite Element Method with Smoothed Particle Hydrodynamics was used for the simulations. Certain standard situations which the ballistic layer should resist were simulated. The contribution describes the principles for the successful execution of numerical simulations, their results, and an evaluation of the functionality of the ballistic layer.
Simulations to Predict the Phase Behavior and Structure of Multipolar Colloidal Particles
NASA Astrophysics Data System (ADS)
Rutkowski, David Matthew
Colloidal particles with anisotropic charge distributions can assemble into a number of interesting structures including chains, lattices and micelles that could be useful in biotechnology, optics and electronics. The goal of this work is to understand how the properties of the colloidal particles, such as their charge distribution or shape, affect the self-assembly and phase behavior of collections of such particles. The specific aim of this work is to understand how the separation between a pair of oppositely signed charges affects the phase behavior and structure of assemblies of colloidal particles. To examine these particles, we have used both discontinuous molecular dynamics (DMD) and Monte Carlo (MC) simulation techniques. In our first study of colloidal particles with finite charge separation, we simulate systems of 2-D colloidal rods with four possible charge separations. Our simulations show that the charge separation does indeed have a large effect on the phase behavior as can be seen in the phase diagrams we construct for these four systems in the area fraction-reduced temperature plane. The phase diagrams delineate the boundaries between isotropic fluid, string-fluid and percolated fluid for all systems considered. In particular, we find that coarse gel-like structures tend to form at large charge separations while denser aggregates form at small charge separations, suggesting a route to forming low volume gels by focusing on systems with large charge separations. Next we examine systems of circular particles with four embedded charges of alternating sign fixed to a triangular lattice. This system is found to form a limit periodic structure, a theoretical structure with an infinite number of phase transitions, under specific conditions. The limit-periodic structure only forms when the rotation of the particles in the system is restricted to increments of pi/3.
When the rotation is restricted to increments of pi/6 or the rotation is continuous, related structures form including a striped phase and a phase with nematic order. Neither the distance from the point charges to the center of the particle nor the angle between the charges influences whether the system forms a limit-periodic structure, suggesting that point quadrupoles may also be able to form limit-periodic structures. Results from these simulations will likely aid in the quest to find an experimental realization of a limit-periodic structure. Next we examine the effect of charge separation on the self-assembly of systems of 2-D colloidal particles with off-center extended dipoles. We simulate systems with both small and large charge separations for a set of displacements of the dipole from the particle center. Upon cooling, these particles self-assemble into closed, cyclic structures at large displacements including dimers, triangular shapes and square shapes, and chain-like structures at small displacements. At extremely low temperatures, the cyclic structures form interesting lattices with particles of similar chirality grouped together. Results from this work could aid in the experimental construction of open lattice-like structures that could find use in photonic applications. Finally, we present work in collaboration with Drs. Bhuvnesh Bharti and Orlin Velev in which we investigate how the surface coverage affects the self-assembly of systems of Janus particles coated with both an iron oxide and fatty acid chain layer. We model these particles by decorating a sphere with evenly dispersed points that interact with points on other spheres through square-well interactions. The interactions are designed to mimic specific coverage values for the iron oxide/fatty acid chain layer. Structures similar to those found in experiment form readily in the simulations. The number of clusters formed as a function of surface coverage agrees well with experiment.
The aggregation behavior of these novel particles can therefore be described by a relatively simple model.
Quantitative computer simulations of extraterrestrial processing operations
NASA Technical Reports Server (NTRS)
Vincent, T. L.; Nikravesh, P. E.
1989-01-01
The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.
Preliminary studies of the effect of thinning techniques over muon production profiles
NASA Astrophysics Data System (ADS)
Tomishiyo, G.; Souza, V.
2017-06-01
In the context of air shower simulations, thinning techniques are employed to reduce computational time and storage requirements. These techniques are tailored to preserve locally mean quantities during shower development, such as the average number of particles in a given atmospheric layer, and to not induce systematic shifts in shower observables, such as the depth of shower maximum. In this work we investigate thinning effects on the determination of the depth at which the shower has its maximum muon production, X^mu_max. We show preliminary results in which the thinning factor and maximum thinning weight might influence the determination of X^mu_max in simulations.
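A Hillas-type thinning step can be sketched as follows (an illustrative reduction of the scheme, not an actual shower-code implementation): when the total energy of an interaction's secondaries falls below the thinning threshold, only one secondary is followed, chosen with probability proportional to its energy, and its statistical weight is boosted to compensate, subject to a maximum-weight limit.

```python
import random

def thin_interaction(energies, weight, e_th, w_max, rng):
    """Hillas-type thinning of one interaction's secondaries.
    energies: energies of the secondary particles; weight: parent's weight.
    Returns the list of (energy, weight) pairs actually followed."""
    total = sum(energies)
    if total >= e_th:                      # energetic interaction: follow all
        return [(e, weight) for e in energies]
    # below threshold: follow a single secondary, chosen ~ its energy
    r, acc = rng.random() * total, 0.0
    for e in energies:
        acc += e
        if r <= acc:
            w_new = weight * total / e     # weight compensates the discarded ones
            if w_new > w_max:              # respect the maximum-weight limit
                return [(e2, weight) for e2 in energies]
            return [(e, w_new)]
    return [(energies[-1], weight * total / energies[-1])]

rng = random.Random(42)
kept = thin_interaction([1.0, 3.0], weight=1.0, e_th=10.0, w_max=100.0, rng=rng)
```

Because the kept secondary carries weight total/e, the scheme conserves energy on average; the maximum-weight cut is exactly the parameter whose influence on X^mu_max the abstract discusses.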
Optical technique to study the impact of heavy rain on aircraft performance
NASA Technical Reports Server (NTRS)
Hess, C. F.; Li, F.
1985-01-01
A laser based technique was investigated and shown to have the potential to obtain measurements of the size and velocity of water droplets used in a wind tunnel to simulate rain. A theoretical model was developed which included some simple effects due to droplet nonsphericity. Parametric studies included the variation of collection distance (up to 4 m), angle of collection, effect of beam interference by the spray, and droplet shape. Accurate measurements were obtained under extremely high liquid water content and spray interference. The technique finds applications in the characterization of two phase flows where the size and velocity of particles are needed.
Optical property retrievals of subvisual cirrus clouds from OSIRIS limb-scatter measurements
NASA Astrophysics Data System (ADS)
Wiensz, J. T.; Degenstein, D. A.; Lloyd, N. D.; Bourassa, A. E.
2012-08-01
We present a technique for retrieving the optical properties of subvisual cirrus clouds detected by OSIRIS, a limb-viewing satellite instrument that measures scattered radiances from the UV to the near-IR. The measurement set is composed of a ratio of limb radiance profiles at two wavelengths that indicates the presence of cloud-scattering regions. Optical properties from an in-situ database are used to simulate scattering by cloud particles. With the appropriate configurations discussed in this paper, the SASKTRAN successive-orders-of-scatter radiative transfer model is able to accurately simulate the in-cloud radiances from OSIRIS. Configured in this way, the model is used with a multiplicative algebraic reconstruction technique (MART) to retrieve the cloud extinction profile for an assumed effective cloud particle size. The sensitivity of these retrievals to key auxiliary model parameters is shown, and it is demonstrated that the retrieved extinction profile accurately models the measured in-cloud radiances from OSIRIS. Since OSIRIS has an 11-yr record of subvisual cirrus cloud detections, this work provides a useful method for building a long-term global record of the properties of these clouds.
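The MART iteration itself is compact; a minimal sketch (toy geometry matrix, not the OSIRIS retrieval code) for nonnegative solutions of y = A x:

```python
import numpy as np

def mart(A, y, sweeps=200, relax=1.0):
    """Multiplicative algebraic reconstruction technique (MART).
    A: geometry matrix with entries in [0, 1]; y: measured line integrals.
    Per-ray multiplicative update: x_j <- x_j * (y_i / (A x)_i)^(relax * a_ij),
    which keeps the solution nonnegative by construction."""
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    x = np.ones(A.shape[1])                 # strictly positive starting guess
    for _ in range(sweeps):
        for i in range(len(y)):             # one measurement (ray) at a time
            Ax_i = A[i] @ x
            if Ax_i > 0 and y[i] > 0:
                x *= (y[i] / Ax_i) ** (relax * A[i])
    return x

# Toy 2-pixel "cloud" observed along two overlapping lines of sight
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
x_true = np.array([2.0, 3.0])
x_rec = mart(A, A @ x_true)                 # recovers x_true
```

The multiplicative update is why MART suits extinction retrievals: a physically meaningless negative extinction can never appear, unlike with additive ART updates.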
NASA Astrophysics Data System (ADS)
Gama Goicochea, A.; Balderas Altamirano, M. A.; Lopez-Esparza, R.; Waldo-Mendoza, Miguel A.; Perez, E.
2015-09-01
The connection between fundamental interactions acting in molecules in a fluid and macroscopically measured properties, such as the viscosity between colloidal particles coated with polymers, is studied here. The role that hydrodynamic and Brownian forces play in colloidal dispersions is also discussed. It is argued that many-body systems in which all these interactions take place can be accurately solved using computational simulation tools. One of those modern tools is the technique known as dissipative particle dynamics, which incorporates Brownian and hydrodynamic forces, as well as basic conservative interactions. A case study is reported, as an example of the applications of this technique, which consists of the prediction of the viscosity and friction between two opposing parallel surfaces covered with polymer chains, under the influence of a steady flow. This work is intended to serve as an introduction to the subject of colloidal dispersions and computer simulations, for final-year undergraduate students and beginning graduate students who are interested in beginning research in soft matter systems. To that end, a computational code is included that students can use right away to study complex fluids in equilibrium.
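The pairwise force in dissipative particle dynamics combines the conservative, Brownian (random) and hydrodynamic (dissipative) contributions discussed above; a minimal sketch in reduced units (standard textbook parameter values, not tied to the code distributed with the article):

```python
import numpy as np

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.04,
                   rng=None):
    """Pairwise DPD force: conservative + dissipative + random parts.
    The fluctuation-dissipation theorem fixes sigma^2 = 2 * gamma * kT."""
    rng = rng or np.random.default_rng(0)
    r = np.linalg.norm(r_ij)
    if r >= rc:                              # interactions vanish beyond rc
        return np.zeros(3)
    rhat = r_ij / r
    w = 1.0 - r / rc                         # soft linear weight function
    f_c = a * w * rhat                       # soft conservative repulsion
    f_d = -gamma * w**2 * np.dot(rhat, v_ij) * rhat   # drag on relative motion
    sigma = np.sqrt(2.0 * gamma * kT)
    theta = rng.standard_normal() / np.sqrt(dt)       # discretized white noise
    f_r = sigma * w * theta * rhat           # thermal kicks along rhat
    return f_c + f_d + f_r
```

Because all three parts act along the interparticle axis and obey Newton's third law, momentum is conserved pairwise, which is what lets DPD reproduce hydrodynamic behavior that plain Brownian dynamics misses.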
DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes
NASA Astrophysics Data System (ADS)
Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.
2008-12-01
A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.
Viscosity of α-pinene secondary organic material and implications for particle growth and reactivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renbaum-Wolff, Lindsay; Grayson, James W.; Bateman, Adam P.
Particles composed of secondary organic material (SOM) are abundant in the lower troposphere and play important roles in climate, air quality, and health. The viscosity of these particles is a fundamental property that is presently poorly quantified for conditions relevant to the lower troposphere. Using two new techniques, namely a bead-mobility technique and a poke-flow technique, in conjunction with simulations of fluid flow, we measure the viscosity of the water-soluble component of SOM produced by α-pinene ozonolysis. The viscosity is comparable to that of honey at 90% relative humidity (RH), comparable to that of peanut butter at 70% RH, and greater than or comparable to that of bitumen for ≤ 30% RH, implying that the studied SOM ranges from liquid to semisolid/solid at ambient relative humidities. With the Stokes-Einstein relation, the measured viscosities further imply that the growth and evaporation of SOM by the exchange of organic molecules between the gas and condensed phases may be confined to the surface region when RH ≤ 30%, suggesting the importance of an adsorption-type mechanism for partitioning in this regime. By comparison, for RH ≥ 70% partitioning of organic molecules may effectively occur by an absorption mechanism throughout the bulk of the particle. Finally, the net uptake rates of semi-reactive atmospheric oxidants such as O3 are expected to decrease by two to five orders of magnitude for a change in RH from 90% to ≤ 30% RH, with possible implications for the rates of chemical aging of SOM particles in the atmosphere.
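The Stokes-Einstein argument can be made concrete with a few lines of arithmetic; the numbers below are illustrative, not values from the paper:

```python
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K

def stokes_einstein_D(T, eta, radius):
    """Translational diffusion coefficient D = kT / (6 pi eta r)."""
    return K_B * T / (6.0 * math.pi * eta * radius)

def mixing_time(D, length):
    """Characteristic diffusive mixing time over a distance: t ~ L^2 / D."""
    return length**2 / D

# Illustrative: a ~1 nm organic molecule diffusing across a 100 nm particle,
# comparing a water-like viscosity with a honey-like viscosity 10^4 x larger
D_liquid = stokes_einstein_D(293.0, 1e-3, 0.5e-9)   # eta ~ 1 mPa s (water-like)
D_viscous = stokes_einstein_D(293.0, 1e1, 0.5e-9)   # eta ~ 10 Pa s (honey-like)
t_ratio = mixing_time(D_liquid, 1e-7) / mixing_time(D_viscous, 1e-7)
```

Since D scales as 1/eta, a four-order-of-magnitude viscosity increase slows bulk mixing by the same factor, which is the quantitative basis for the surface-confined (adsorption-type) partitioning regime described above.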
NASA Astrophysics Data System (ADS)
Kawamori, E.; Igami, H.
2017-11-01
A diagnostic technique for detecting the wave numbers of electron density fluctuations at electron gyro-scales in an electron cyclotron frequency range is proposed, and the validity of the idea is checked by means of a particle-in-cell (PIC) numerical simulation. The technique is a modified version of the scattering technique invented by Novik et al. [Plasma Phys. Controlled Fusion 36, 357-381 (1994)] and Gusakov et al., [Plasma Phys. Controlled Fusion 41, 899-912 (1999)]. The novel method adopts forward scattering of injected extraordinary probe waves at the upper hybrid resonance layer instead of the backward-scattering adopted by the original method, enabling the measurement of the wave-numbers of the fine scale density fluctuations in the electron-cyclotron frequency band by means of phase measurement of the scattered waves. The verification numerical simulation with the PIC method shows that the technique has a potential to be applicable to the detection of electron gyro-scale fluctuations in laboratory plasmas if the upper-hybrid resonance layer is accessible to the probe wave. The technique is a suitable means to detect electron Bernstein waves excited via linear mode conversion from electromagnetic waves in torus plasma experiments. Through the numerical simulations, some problems that remain to be resolved are revealed, which include the influence of nonlinear processes such as the parametric decay instability of the probe wave in the scattering process, and so on.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
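The kind of simplified transport routine students are asked to write can be very short. The sketch below (a generic classroom exercise, not MCNP) samples exponential free paths through a purely absorbing slab and checks the Monte Carlo transmission estimate against the analytic result exp(-Σ_t x).

```python
import math
import random

def transmit_fraction(sigma_t, thickness, n_particles, seed=0):
    """Monte Carlo estimate of uncollided transmission through a purely
    absorbing slab: sample an exponential free path for each particle and
    count those whose first collision lies beyond the slab."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_particles):
        # Inverse-CDF sampling of the exponential free-path distribution
        path = -math.log(1.0 - rng.random()) / sigma_t
        if path > thickness:
            escaped += 1
    return escaped / n_particles

sigma_t, x = 0.5, 2.0            # macroscopic cross section (1/cm), slab (cm)
mc = transmit_fraction(sigma_t, x, 100_000)
exact = math.exp(-sigma_t * x)   # analytic uncollided transmission
print(mc, exact)
```

With 100,000 histories the statistical uncertainty on the transmitted fraction is about 0.15%, which is the 1/sqrt(N) behavior the statistics lectures formalize.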
NASA Astrophysics Data System (ADS)
Everett, Samantha
2010-10-01
A transmission curve experiment was carried out to measure the range of beta particles in aluminum in the health physics laboratory located on the campus of Texas Southern University. The transmission count rate through aluminum for varying radiation lengths was measured using beta particles emitted from a low activity (~1 μCi) Sr-90 source. The count rate intensity was recorded using a Geiger Mueller tube (SGC N210/BNC) with an active volume of 61 cm^3 within a systematic detection accuracy of a few percent. We compared these data with a realistic simulation of the experimental setup using the Geant4 Monte Carlo toolkit (version 9.3). The purpose of this study was to benchmark our Monte Carlo for future experiments as part of a more comprehensive research program. Transmission curves were simulated based on the standard and low-energy electromagnetic physics models, and using the radioactive decay module for the electrons primary energy distribution. To ensure the validity of our measurements, linear extrapolation techniques were employed to determine the in-medium beta particle range from the measured data, which was found to be 1.87 g/cm^2 (~0.693 cm), in agreement with literature values. We found that the general shape of the measured data and simulated curves were comparable; however, a discrepancy in the relative count rates was observed. The origin of this disagreement is still under investigation.
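The linear extrapolation step amounts to fitting a straight line to the tail of the transmission curve and taking its x-intercept as the range. A minimal sketch, with hypothetical count-rate data invented for illustration (not the experiment's measurements):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical data: absorber thickness (g/cm^2) vs. net count rate near the
# end of the transmission curve, where the fall-off is roughly linear.
thickness = [1.2, 1.4, 1.6, 1.8]
counts = [90.0, 62.0, 35.0, 9.0]

slope, intercept = linear_fit(thickness, counts)
beta_range = -intercept / slope  # x-intercept: extrapolated range
print(f"extrapolated range ~ {beta_range:.2f} g/cm^2")
```

Dividing the mass range (g/cm^2) by the density of aluminum (2.70 g/cm^3) converts it to a physical depth, as in the quoted ~0.693 cm.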
NASA Astrophysics Data System (ADS)
Bai, Xian-Ming; Shah, Binoy; Keer, Leon; Wang, Jane; Snurr, Randall
2008-03-01
Mechanical damping systems with granular particles as the damping media have promising applications in extreme temperature conditions. In particle-based damping systems, the mechanical energy is dissipated through the inelastic collision and friction of particles. In the past, many experiments have been performed to investigate particle damping problems. However, the detailed energy dissipation mechanism is still unclear due to the complex collision and flow behavior of dense particles. In this work, we use 3-D particle dynamics simulation to investigate the damping mechanism of an oscillating cylinder piston immersed in millimeter-size steel particles. The time evolution of the energy dissipation through the friction and inelastic collision is accurately monitored during the damping process. The contribution from the particle-particle interaction and particle-wall interaction is also separated for investigation. The effects of moisture, surface roughness, and density of particles are carefully investigated in the simulation. The comparison between the numerical simulation and experiment is also performed. The simulation results can help us understand the particle damping mechanism and design the new generation of particle damping devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
Standing wave acoustic levitation on an annular plate
NASA Astrophysics Data System (ADS)
Kandemir, Mehmet Hakan; Çalışkan, Mehmet
2016-11-01
In standing wave acoustic levitation technique, a standing wave is formed between a source and a reflector. Particles can be attracted towards pressure nodes in standing waves owing to a spring action through which particles can be suspended in air. This operation can be performed on continuous structures as well as along several axes. In this study an annular acoustic levitation arrangement is introduced. Design features of the arrangement are discussed in detail. Bending modes of the annular plate, known as the most efficient sound generation mechanism in such structures, are focused on. Several types of bending modes of the plate are simulated and evaluated by computer simulations. Waveguides are designed to amplify waves coming from sources of excitation, that is, transducers. With the right positioning of the reflector plate, standing waves are formed in the space between the annular vibrating plate and the reflector plate. Radiation forces are also predicted. It is demonstrated that small particles can be suspended in air at pressure nodes of the standing wave corresponding to a particular bending mode.
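For an idealized plane standing wave, the trapping positions follow directly from the wavelength: a rigid reflector is a pressure antinode, so pressure nodes sit at odd multiples of a quarter wavelength above it, spaced half a wavelength apart. A minimal sketch under that plane-wave assumption (it ignores the plate's bending-mode field and near-field effects; the 25 kHz frequency and 30 mm gap are illustrative):

```python
def pressure_nodes(frequency_hz, gap_m, c=343.0):
    """Heights of pressure nodes above a rigid reflector for an ideal plane
    standing wave in air: the reflector is a pressure antinode, so nodes
    sit at odd multiples of a quarter wavelength."""
    lam = c / frequency_hz
    nodes, h = [], lam / 4.0
    while h < gap_m:
        nodes.append(h)
        h += lam / 2.0  # successive nodes are half a wavelength apart
    return nodes

# 25 kHz in air: wavelength ~13.7 mm, so nodes every ~6.9 mm
for h in pressure_nodes(25_000, 0.03):
    print(f"{h * 1000:.2f} mm")
```

Positioning the reflector so the gap is close to an integer number of half wavelengths is what makes the resonant standing wave, and hence the levitation force, strongest.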
Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin
2008-05-01
For the total light scattering particle sizing technique, an inversion and classification method was proposed with the dependent model algorithm. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical model was known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the selectivity of the light source. The single peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was put in the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
Estimation for general birth-death processes.
Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A
2014-04-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.
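The object being estimated here is easy to state generatively. A linear BDP with per-particle birth rate λ and death rate μ can be simulated exactly with the Gillespie algorithm, and its mean obeys E[N(t)] = n0·exp((λ-μ)t). The sketch below is just that forward simulation (the kind of costly approach the paper's Laplace-convolution E-step replaces); all rates are illustrative.

```python
import random

def simulate_bdp(n0, birth_rate, death_rate, t_max, seed=1):
    """Exact (Gillespie) simulation of a linear birth-death process: each
    particle independently gives birth at rate `birth_rate` and dies at
    rate `death_rate`; returns the particle count at time t_max."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        total = n * (birth_rate + death_rate)   # total event rate
        t += rng.expovariate(total)             # time to next event
        if t > t_max:
            break
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1                              # birth
        else:
            n -= 1                              # death
    return n

# For the linear BDP, E[N(t)] = n0 * exp((birth - death) * t)
runs = [simulate_bdp(50, 1.0, 0.5, 2.0, seed=s) for s in range(200)]
print(sum(runs) / len(runs))  # should be near 50 * e^1 ~ 136
```

For general (nonlinear) rate models, the event rates simply become state-dependent functions n -> λ_n, μ_n in the same loop, which is exactly the setting where the paper's closed-form conditional expectations pay off.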
SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uselmann, A; Mackie, T
Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated in order to quantify the parasitic effect observed in closely packed lines. Particle tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results show that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch.
Through simulation, we were able to discover and optimize design issues with the device at low cost. Funding: Morgridge Institute for Research, Madison WI; Conflict of Interest: Dr. Mackie is an investor and board member at CPAC, a company developing compact accelerator designs similar to those discussed in this work, but designs discussed are not directed by CPAC.
Motion Imagery and Robotics Application (MIRA): Standards-Based Robotics
NASA Technical Reports Server (NTRS)
Martinez, Lindolfo; Rich, Thomas; Lucord, Steven; Diegelman, Thomas; Mireles, James; Gonzalez, Pete
2012-01-01
This technology development originated from the need to assess the debris threat resulting from soil material erosion induced by landing spacecraft rocket plume impingement on extraterrestrial planetary surfaces. The impact of soil debris was observed to be highly detrimental during NASA's Apollo lunar missions and will pose a threat for any future landings on the Moon, Mars, and other exploration targets. The innovation developed under this program provides a simulation tool that combines modeling of the diverse disciplines of rocket plume impingement gas dynamics, granular soil material liberation, and soil debris particle kinetics into one unified simulation system. The Unified Flow Solver (UFS) developed by CFDRC enabled the efficient, seamless simulation of mixed continuum and rarefied rocket plume flow utilizing a novel direct numerical simulation technique of the Boltzmann gas dynamics equation. The characteristics of the soil granular material response and modeling of the erosion and liberation processes were enabled through novel first principle-based granular mechanics models developed by the University of Florida specifically for the highly irregularly shaped and cohesive lunar regolith material. These tools were integrated into a unique simulation system that accounts for all relevant physics aspects: (1) Modeling of spacecraft rocket plume impingement flow under lunar vacuum environment resulting in a mixed continuum and rarefied flow; (2) Modeling of lunar soil characteristics to capture soil-specific effects of particle size and shape composition, soil layer cohesion and granular flow physics; and (3) Accurate tracking of soil-borne debris particles beginning with aerodynamically driven motion inside the plume to purely ballistic motion in lunar far field conditions.
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool to solve parameter optimization problems. Moreover, an optimized model with an initial condition that performs well in in-sample simulation may not perform as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
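Particle swarm optimization itself is simple enough to sketch in full. The version below is a generic textbook PSO minimizing a toy one-dimensional objective, not the NGBM(1,1) parameter fit of the paper; the inertia and acceleration weights are common default choices.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=3):
    """Minimal particle swarm optimization: each particle remembers its own
    best position (pbest) and the swarm shares a global best (gbest)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pbest_f = pos[:], [f(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i], fi
    return gbest, gbest_f

# Toy 1-D objective with its minimum at x = 2, f(2) = 1
x, fx = pso_minimize(lambda x: (x - 2.0) ** 2 + 1.0, (-10.0, 10.0))
print(x, fx)
```

In the paper's setting, the objective f would be the forecasting error of the grey Bernoulli model as a function of its parameters, with the swarm searching that parameter space instead of a single coordinate.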
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov
2016-06-15
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior are shown.
Analysis of orbital perturbations acting on objects in orbits near geosynchronous earth orbit
NASA Technical Reports Server (NTRS)
Friesen, Larry J.; Jackson, Albert A., IV; Zook, Herbert A.; Kessler, Donald J.
1992-01-01
The paper presents a numerical investigation of orbital evolution for objects started in GEO or in orbits near GEO in order to study potential orbital debris problems in this region. Perturbations simulated include nonspherical terms in the earth's geopotential field, lunar and solar gravity, and solar radiation pressure. Objects simulated include large satellites, for which solar radiation pressure is insignificant, and small particles, for which solar radiation pressure is an important force. Results for large satellites are largely in agreement with previous GEO studies that used classical perturbation techniques. The orbit planes of GEO satellites placed in a stable plane orbit inclined approximately 7.3 deg to the equator experience very little precession, remaining always within 1.2 percent of their initial orientation. Solar radiation pressure generates two major effects on small particles: an orbital eccentricity oscillation anticipated from previous research, and an oscillation in orbital inclination.
Filter Media Tests Under Simulated Martian Atmospheric Conditions
NASA Technical Reports Server (NTRS)
Agui, Juan H.
2016-01-01
Human exploration of Mars will require the optimal utilization of planetary resources. One of its abundant resources is the Martian atmosphere that can be harvested through filtration and chemical processes that purify and separate it into its gaseous and elemental constituents. Effective filtration needs to be part of the suite of resource utilization technologies. A unique testing platform is being used which provides the relevant operational and instrumental capabilities to test articles under the proper simulated Martian conditions. A series of tests were conducted to assess the performance of filter media. Light sheet imaging of the particle flow provided a means of detecting and quantifying particle concentrations to determine capturing efficiencies. The media's efficiency was also evaluated by gravimetric means through a by-layer filter media configuration. These tests will help to establish techniques and methods for measuring capturing efficiency and arrestance of conventional fibrous filter media. This paper will describe initial test results on different filter media.
Simulations of dolphin kick swimming using smoothed particle hydrodynamics.
Cohen, Raymond C Z; Cleary, Paul W; Mason, Bruce R
2012-06-01
In competitive human swimming the submerged dolphin kick stroke (underwater undulatory swimming) is utilized after dives and turns. The optimal dolphin kick has a balance between minimizing drag and maximizing thrust while also minimizing the physical exertion required of the swimmer. In this study, laser scans of athletes are used to provide realistic swimmer geometries in a single anatomical pose. These are rigged and animated to closely match side-on video footage. Smoothed Particle Hydrodynamics (SPH) fluid simulations are performed to evaluate variants of this swimming stroke technique. This computational approach provides full temporal and spatial information about the flow moving around the deforming swimmer model. The effects of changes in ankle flexibility and stroke frequency are investigated through a parametric study. The results suggest that the net streamwise force on the swimmer is relatively insensitive to ankle flexibility but is strongly dependent on kick frequency. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
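At the heart of any SPH solver is the kernel-weighted sum over neighbouring particles; the fluid density at a point, for example, is estimated as a sum of particle masses weighted by a compact-support kernel. The sketch below is a generic 1-D illustration with the standard cubic spline kernel, not the authors' solver, and the particle spacing and smoothing length are arbitrary.

```python
def cubic_spline_w(r, h):
    """Standard 1-D cubic spline SPH kernel with smoothing length h
    (compact support: W = 0 for r >= 2h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q * (1.0 - 0.5 * q))
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(x, particles, mass, h):
    """SPH density estimate: kernel-weighted sum over neighbouring particles."""
    return sum(mass * cubic_spline_w(abs(x - xi), h) for xi in particles)

# Uniformly spaced 1-D particles of equal mass: away from the ends the
# density estimate should be close to mass / spacing.
dx, m, h = 0.1, 0.1, 0.15
pts = [i * dx for i in range(101)]
print(sph_density(5.0, pts, m, h))  # expect ~ m/dx = 1.0
```

Momentum and force terms are built the same way, from kernel gradients rather than kernel values, which is what lets SPH handle the large deformations of an undulating swimmer without a mesh.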
Energetic Particle Loss Estimates in W7-X
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Micheal; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team
2017-10-01
The collisionless losses of high energy H+ and D+ ions in the W7-X device are examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the distribution for all particles. These simulations are compared against slowing down simulations in which electron impact, ion impact, and pitch angle scattering are considered. Full device simulations allow tracing of particle trajectories to the first wall components. These simulations provide estimates for placement of a novel set of energetic particle detectors. Recent performance upgrades to the code allow simulations with > 1000 processors, providing high-fidelity results. Speedup and future work are discussed. DE-AC02-09CH11466.
Factors controlling the evaporation of secondary organic aerosol from α‐pinene ozonolysis
Pajunoja, Aki; Tikkanen, Olli‐Pekka; Buchholz, Angela; Faiola, Celia; Väisänen, Olli; Hao, Liqing; Kari, Eetu; Peräkylä, Otso; Garmash, Olga; Shiraiwa, Manabu; Ehn, Mikael; Lehtinen, Kari; Virtanen, Annele
2017-01-01
Abstract Secondary organic aerosols (SOA) form a major fraction of organic aerosols in the atmosphere. Knowledge of SOA properties that affect their dynamics in the atmosphere is needed for improving climate models. By combining experimental and modeling techniques, we investigated the factors controlling SOA evaporation under different humidity conditions. Our experiments support the conclusion of particle phase diffusivity limiting the evaporation under dry conditions. Viscosity of particles at dry conditions was estimated to increase several orders of magnitude during evaporation, up to 10^9 Pa s. However, at atmospherically relevant relative humidity and time scales, our results show that diffusion limitations may have a minor effect on evaporation of the studied α‐pinene SOA particles. Based on previous studies and our model simulations, we suggest that, in warm environments dominated by biogenic emissions, the major uncertainty in models describing the SOA particle evaporation is related to the volatility of SOA constituents. PMID:28503004
A method for measuring particle number emissions from vehicles driving on the road.
Shi, J P; Harrison, R M; Evans, D E; Alam, A; Barnes, C; Carter, G
2002-01-01
Earlier research has demonstrated that the conditions of dilution of engine exhaust gases profoundly influence the size distribution and total number of particles emitted. Since real world dilution conditions are variable and therefore difficult to simulate, this research has sought to develop and validate a method for measuring particle number emissions from vehicles driving past on a road. This has been achieved successfully using carbon dioxide as a tracer of exhaust gas dilution. By subsequent adjustment of data to a constant dilution factor, it is possible to compare emissions from different vehicles using different technologies and fuels based upon real world emission data. Whilst further optimisation of the technique, especially in terms of matching the instrument response times, is desirable, the measurements offer useful insights into emissions from gasoline and diesel vehicles, and the substantial proportion of particles emitted in the 3-7 nanometre size range.
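The CO2-tracer idea reduces to two ratios: the measured CO2 excess over background tells you how many times the raw exhaust was diluted before sampling, and the background-corrected particle count can then be rescaled to a common reference dilution. A generic sketch of that arithmetic, with hypothetical concentrations (not the paper's calibration or data):

```python
def dilution_factor(co2_exhaust_ppm, co2_plume_ppm, co2_background_ppm):
    """How many times the raw exhaust has been diluted by ambient air,
    using CO2 as a conserved tracer of the exhaust gas."""
    return (co2_exhaust_ppm - co2_background_ppm) / (co2_plume_ppm - co2_background_ppm)

def particles_at_reference_dilution(n_plume, n_background, df_measured, df_reference):
    """Rescale a roadside particle count (particles/cm^3) to a common
    reference dilution so different vehicles can be compared directly."""
    return (n_plume - n_background) * df_measured / df_reference

# Hypothetical numbers: ~13% CO2 in raw exhaust, 450 ppm in the sampled
# plume, 400 ppm ambient -> the plume is diluted ~2600-fold.
df = dilution_factor(130_000, 450, 400)
n = particles_at_reference_dilution(5.0e4, 1.0e4, df, 1000.0)
print(df, n)
```

The caveat in the abstract maps onto this directly: the CO2 and particle instruments must respond on matched time scales, or the two terms in the ratio refer to different moments of the passing plume.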
Correlation Characterization of Particles in Volume Based on Peak-to-Basement Ratio
Vovk, Tatiana A.; Petrov, Nikolay V.
2017-01-01
We propose a new express method of the correlation characterization of the particles suspended in the volume of an optically transparent medium. It utilizes an inline digital holography technique for obtaining two images of the adjacent layers from the investigated volume, with subsequent matching of the cross-correlation function peak-to-basement ratio calculated for these images. After preliminary calibration via numerical simulation, the proposed method allows one to quickly determine parameters of the particle distribution and evaluate their concentration. The experimental verification was carried out for two types of physical suspensions. Our method can be applied in environmental and biological research, which includes analyzing tools in flow cytometry devices, express characterization of particles and biological cells in air and water media, and various technical tasks, e.g. the study of scattering objects or rapid determination of cutting tool conditions in mechanisms. PMID:28252020
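The peak-to-basement ratio is a generic statistic: cross-correlate the two layer images and compare the correlation peak to the mean level away from it. A 1-D toy sketch of that statistic (the paper works with 2-D holographic reconstructions; the signals here are synthetic noise, and "peak-to-basement" is computed as peak over mean off-peak magnitude as one plausible definition):

```python
import random

def cross_correlation(a, b):
    """Circular cross-correlation of two equal-length signals, zero-meaned."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    a = [v - ma for v in a]
    b = [v - mb for v in b]
    return [sum(a[i] * b[(i + lag) % n] for i in range(n)) for lag in range(n)]

def peak_to_basement(corr):
    """Ratio of the correlation peak to the mean magnitude away from the peak."""
    peak = max(corr)
    rest = [abs(c) for c in corr if c != peak]
    return peak / (sum(rest) / len(rest))

rng = random.Random(7)
signal = [rng.random() for _ in range(256)]
noise = [rng.random() for _ in range(256)]
similar = peak_to_basement(cross_correlation(signal, signal))
dissimilar = peak_to_basement(cross_correlation(signal, noise))
# Correlated layers give a much sharper peak than uncorrelated ones
print(similar, dissimilar)
```

Adjacent layers containing the same particles behave like the correlated pair, so the ratio falls off as particle density and motion decorrelate the two images, which is what the calibration curve exploits.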
Computed Intranasal Spray Penetration: Comparisons Before and After Nasal Surgery
Frank, Dennis O.; Kimbell, Julia S.; Cannon, Daniel; Rhee, John S.
2012-01-01
Background Quantitative methods for comparing intranasal drug delivery efficiencies pre- and postoperatively have not been fully utilized. The objective of this study is to use computational fluid dynamics techniques to evaluate aqueous nasal spray penetration efficiencies before and after surgical correction of intranasal anatomic deformities. Methods Ten three-dimensional models of the nasal cavities were created from pre- and postoperative computed tomography scans in five subjects. Spray simulations were conducted using a particle size distribution ranging from 10–110 μm, a spray speed of 3 m/s, plume angle of 68°, and with steady state, resting inspiratory airflow present. Two different nozzle positions were compared. Statistical analysis was conducted using Student's t-test for matched pairs. Results On the obstructed side, posterior particle deposition after surgery increased by 118% and was statistically significant (p-value=0.036), while anterior particle deposition decreased by 13% and was also statistically significant (p-value=0.020). The fraction of particles that bypassed the airways either pre- or post-operatively was less than 5%. Posterior particle deposition differences between obstructed and contralateral sides of the airways were 113% and 30% for pre- and post-surgery, respectively. Results showed that nozzle positions can influence spray delivery. Conclusions Simulations predicted that surgical correction of nasal anatomic deformities can improve spray penetration to areas where medications can have greater effect. Particle deposition patterns between both sides of the airways are more evenly distributed after surgery. These findings suggest that correcting anatomic deformities may improve intranasal medication delivery. For enhanced particle penetration, patients with nasal deformities may explore different nozzle positions. PMID:22927179
NASA Astrophysics Data System (ADS)
Yoshioka, Toshie; Miyoshi, Takashi; Takaya, Yasuhiro
2005-12-01
To realize high productivity and reliability in semiconductor manufacturing, patterned-wafer inspection technology that maintains high yield has become essential. As circuit features are scaled below 100 nm, conventional imaging and light-scattering methods can no longer be applied to patterned-wafer inspection because of the diffraction limit and low S/N ratio. We therefore propose a new particle detection method using annular evanescent light illumination. In this method, a converging annular light beam is incident on a micro-hemispherical lens. When the converging angle is larger than the critical angle, annular evanescent light is generated under the bottom surface of the hemispherical lens. The evanescent light is localized near the bottom surface and decays exponentially away from it, so it selectively illuminates particles on the patterned wafer surface without illuminating the wafer surface itself. The proposed method evaluates particles on a patterned wafer surface by detecting the scattered evanescent light distribution from the particles. To analyze the fundamental characteristics of the proposed method, computer simulations were performed using the FDTD method. The simulation results show that the proposed method is effective for detecting 100 nm particles on a patterned wafer with 100 nm lines and spaces, particularly when the evanescent-light illumination is p-polarized and incident parallel to the line orientation. Finally, experimental results suggest that 220 nm particles on a patterned wafer with approximately 200 nm lines and spaces can be detected.
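The exponential decay described above is governed by the evanescent-field penetration depth at a totally internally reflecting interface. A small illustrative sketch (standard total-internal-reflection optics, not from the paper; the wavelength and angle below are arbitrary assumptions):

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e intensity penetration depth of the evanescent field for total
    internal reflection at an n1 -> n2 interface, beyond the critical angle:
    d = lambda / (4 * pi * sqrt(n1^2 sin^2(theta) - n2^2))."""
    theta = math.radians(theta_deg)
    arg = (n1 * math.sin(theta)) ** 2 - n2 ** 2
    if arg <= 0:
        raise ValueError("incidence angle is below the critical angle")
    return wavelength_nm / (4 * math.pi * math.sqrt(arg))

# Hypothetical example: glass hemispherical lens (n1 = 1.5) into air
# (n2 = 1.0), 405 nm illumination at 70 degrees incidence.
d = penetration_depth(405, 1.5, 1.0, 70)
```

With these assumed values the depth comes out at a few tens of nanometres, which is why the field illuminates surface particles but not the wafer pattern beneath.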
Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics
Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong
2015-01-01
We study the deformation and haptic feedback of soft tissue in virtual surgery based on a liver model, using the PHANTOM OMNI force feedback device developed by SensAble (USA). Although significant research effort has been dedicated to simulating the behavior of soft tissue and implementing force feedback, it remains a challenging problem. This paper introduces a meshfree method for soft tissue deformation simulation and force computation based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). First, the viscoelastic model captures the mechanical characteristics of soft tissue, which greatly promotes realism. Second, SPH is a meshless, self-adaptive technique that supplies higher precision than mesh-based methods for force-feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft tissue deformation and calculating force feedback, and that SPH based on a dynamic local interaction area is significantly more computationally efficient than standard SPH. Our algorithm has a bright prospect in the area of virtual surgery. PMID:26417380
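As a rough illustration of the SPH machinery the abstract refers to, here is a minimal sketch of the standard cubic-spline smoothing kernel and a brute-force density summation (generic SPH, not the paper's dynamic-interaction-area variant; names are illustrative):

```python
import math

def cubic_spline_W(r, h):
    """Standard 3D cubic-spline SPH smoothing kernel (Monaghan form)."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)  # 3D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(i, positions, masses, h):
    """SPH density estimate at particle i by kernel-weighted summation
    over all particles (brute force; a real code would use a cell list)."""
    xi = positions[i]
    rho = 0.0
    for xj, mj in zip(positions, masses):
        rho += mj * cubic_spline_W(math.dist(xi, xj), h)
    return rho
```

Forces (pressure, viscoelastic) follow from kernel gradients in the same summation pattern, which is what makes the method meshfree.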
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarisien, M.; Plaisir, C.; Gobet, F.
2011-02-15
We present a stand-alone system to characterize the high-energy particles emitted in the interaction of ultrahigh-intensity laser pulses with matter. Depending on the laser and target characteristics, electrons or protons are produced with energies higher than a few MeV. Selected material samples can therefore be activated via nuclear reactions. A multidetector, named NATALIE, has been developed to count the β+ activity of these irradiated samples. The coincidence technique used, designed as an integrated system, results in very low background in the data, which is required for low-activity measurements. It therefore allows good precision on the nuclear activation yields of the produced radionuclides. The system allows high counting rates and online correction of the dead time. It also provides, online, quick control of the experiment. Geant4 simulations are used at different steps of the data analysis to deduce, from the measured activities, the energy and angular distributions of the laser-induced particle beams. Two applications are presented to illustrate the characterization of electrons and protons.
Preparation and optical properties of iron-modified titanium dioxide obtained by sol-gel method
NASA Astrophysics Data System (ADS)
Hreniak, Agnieszka; Gryzło, Katarzyna; Boharewicz, Bartosz; Sikora, Andrzej; Chmielowiec, Jacek; Iwan, Agnieszka
2015-08-01
In this paper, twelve TiO2:Fe powders prepared by the sol-gel method were analyzed, taking into consideration the kind of iron compound applied. Titanium(IV) isopropoxide (TIPO) was used as the precursor, while Fe(NO3)3 and FeCl3 were tested as iron sources. Fe-doped TiO2 was obtained using two methods of synthesis, with different amounts of iron added (1, 5, or 10% w/w). The size of the obtained TiO2:Fe particles depends on the iron compound applied and was found to be in the range 80-300 nm, as confirmed by SEM. The TiO2:Fe particles were additionally investigated by the dynamic light scattering (DLS) method, and their UV-vis absorption and zeta potential were analyzed. Selected powders were further investigated by magnetic force microscopy (MFM) and X-ray diffraction techniques. The photocatalytic ability of the Fe-doped TiO2 powders was evaluated by means of a cholesteryl hemisuccinate (CHOL) degradation experiment conducted under 30 min of irradiation with simulated solar light.
Channeling technique to make nanoscale ion beams
NASA Astrophysics Data System (ADS)
Biryukov, V. M.; Bellucci, S.; Guidi, V.
2005-04-01
Particle channeling in a bent crystal lattice has led to an efficient instrument for beam steering at accelerators [Biryukov et al., Crystal Channeling and its Application at High Energy Accelerators, Springer, Berlin, 1997], demonstrated from MeV to TeV energies. In particular, crystal focusing of high-energy protons to micron size has been demonstrated at IHEP, with results in good agreement with the Lindhard (critical-angle) prediction. Channeling in crystal microstructures has been proposed as a unique source of a microbeam of high-energy particles [Bellucci et al., Phys. Rev. ST Accel. Beams 6 (2003) 033502]. Channeling in nanostructures (single-wall and multi-wall nanotubes) offers the opportunity to produce ion beams on the nanoscale. Particles channeled in a nanotube (with a typical diameter of about 1 nm) are trapped in two dimensions and can be steered (deflected, focused) with an efficiency similar to or better than that of crystal channeling. This technique has been the subject of computer simulations, with experimental efforts under way in several high-energy labs, including IHEP. We present the theoretical outlook for making channeling-based nanoscale ion beams and report our experience with crystal-focused microscale proton beams.
Viscosity of dilute suspensions of rodlike particles: A numerical simulation method
NASA Astrophysics Data System (ADS)
Yamamoto, Satoru; Matsuoka, Takaaki
1994-02-01
The recently developed particle simulation method (PSM) is extended to predict the viscosity of dilute suspensions of rodlike particles. In this method, a rodlike particle is modeled by bonded spheres. Each bond has three types of springs, for stretching, bending, and twisting deformation. The rod model can therefore deform by changing the bond distance, bond angle, and torsion angle between paired spheres, and it can represent a variety of rigidities by modifying the bond parameters related to the Young's modulus and shear modulus of the real particle. The time evolution of each constituent sphere of the rod model is followed by a molecular-dynamics-type approach. The intrinsic viscosity of a suspension of rodlike particles is derived by calculating the increased energy dissipation for each sphere of the rod model in a viscous fluid. With and without deformation of the particle, the motion of the rodlike particle was numerically simulated in a three-dimensional simple shear flow at low particle Reynolds number and without Brownian motion. The intrinsic viscosity of the suspension was investigated as a function of the orientation angle, rotation orbit, deformation, and aspect ratio of the particle. For the rigid rodlike particle, the simulated rotation orbit compared extremely well with the theoretical orbit obtained for a rigid ellipsoidal particle using Jeffery's equation. The simulated dependence of the intrinsic viscosity on the various factors was also identical to that predicted by theories for suspensions of rigid rodlike particles. For the flexible rodlike particle, the rotation orbit could also be obtained by the particle simulation method, and the intrinsic viscosity was found to decrease as flow-induced recoverable deformation of the rodlike particle occurred.
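The bonded-sphere rod model can be sketched in a few lines; the following shows only the harmonic stretching-spring force between paired spheres (the bending and twisting springs are handled analogously). This is an illustrative sketch with hypothetical parameter names, not the authors' implementation:

```python
import numpy as np

def stretch_forces(pos, k_s, r0):
    """Harmonic stretching forces along a bonded-sphere rod.
    pos: (N, 3) sphere centres; k_s: bond stiffness; r0: rest bond length.
    Each bond pulls its sphere pair back toward the rest separation."""
    f = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        r = np.linalg.norm(d)
        fb = k_s * (r - r0) * d / r  # force on sphere i, toward sphere i+1
        f[i] += fb
        f[i + 1] -= fb               # Newton's third law on the pair
    return f
```

In the full PSM these forces, plus bending and twisting contributions, drive a molecular-dynamics-type update of every constituent sphere.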
Development of optical monitor of alpha radiations based on CR-39.
Joshirao, Pranav M; Shin, Jae Won; Vyas, Chirag K; Kulkarni, Atul D; Kim, Hojoong; Kim, Taesung; Hong, Seung-Woo; Manchanda, Vijay K
2013-11-01
The Fukushima accident has highlighted the need to intensify efforts to develop sensitive detectors to monitor the release of alpha-emitting radionuclides into the environment caused by the meltdown of discharged spent fuel. Conventionally, proportional counting, scintillation counting, and alpha spectrometry are employed to assay alpha-emitting radionuclides, but these techniques are difficult to configure for online operation. Solid State Nuclear Track Detectors (SSNTDs) offer a sensitive offline alternative for measuring alpha emitters as well as fissile radionuclides at ultra-trace levels in the environment. Recently, our group reported the first attempt to use a reflectance-based fiber optic sensor (FOS) to quantify the alpha radiation emitted from (232)Th. In the present work, an effort has been made to develop an online FOS to monitor alpha radiation emitted from a (241)Am source, employing CR-39 as the detector. Here, we report the optical response of CR-39 on exposure to alpha radiation, employing techniques such as Atomic Force Microscopy (AFM) and reflectance spectroscopy. A GEANT4 simulation of alpha-particle transport in the detector has also been carried out. The simulation includes a validation test wherein the projected ranges of alpha particles in air, polystyrene, and CR-39 were calculated and found to agree with literature values. An attempt has further been made to compute the fluence as a function of the incidence angle and incidence energy of the alphas. There was an excellent correlation between the experimentally observed track density and the simulated fluence. The present work offers a novel approach to designing an online CR-39-based fiber optic sensor (CRFOS) to measure the release of nanogram quantities of (241)Am into the environment. © 2013 Elsevier Ltd. All rights reserved.
PIC Simulation of Laser Plasma Interactions with Temporal Bandwidths
NASA Astrophysics Data System (ADS)
Tsung, Frank; Weaver, J.; Lehmberg, R.
2015-11-01
We are performing particle-in-cell simulations using the code OSIRIS to study the effects of laser plasma interactions in the presence of temporal bandwidths under conditions relevant to current and future shock ignition experiments on the NIKE laser. Our simulations show that, for sufficiently large bandwidth, the saturation level and the distribution of hot electrons can be affected by the addition of temporal bandwidth (which can be accomplished in experiments using smoothing techniques such as SSD or ISI). We will show that temporal bandwidth alone plays an important role in the control of LPIs in these lasers and discuss future directions. This work is conducted under the auspices of NRL.
Vitti, Antonella; Nuzzaci, Maria; Condelli, Valentina; Piazzolla, Pasquale
2014-01-01
Edible vaccines must survive the digestive process and preserve the specific structure of the antigenic peptide to elicit an effective immune response. The stability of a protein to the digestive process can be predicted by subjecting it to in vitro assays with simulated gastric fluid (SGF) and simulated intestinal fluid (SIF). Here, we describe a protocol for producing and using chimeric Cucumber mosaic virus (CMV) displaying a Hepatitis C virus (HCV)-derived peptide (R9) in double copy as an oral vaccine. Its stability after treatment with SGF and SIF and the preservation of its antigenic properties were verified by SDS-PAGE and immuno-western blot techniques.
Rees, Terry F.
1990-01-01
Colloidal materials, dispersed phases with dimensions between 0.001 and 1 μm, are potential transport media for a variety of contaminants in surface and ground water. Characterization of these colloids, and identification of the parameters that control their movement, are necessary before transport simulations can be attempted. Two techniques that can be used to determine the particle-size distribution of colloidal materials suspended in natural waters are compared. Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, their important underlying assumptions, and their limitations is given. Results for a series of river water samples show that the colloid-size distribution means determined by the two techniques are statistically identical. This is also true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices are also determined by PCS.
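PCS converts the Brownian diffusion coefficient extracted from the Doppler-shifted light into a particle size via the Stokes-Einstein relation. A minimal sketch of that final step (the standard relation, with an assumed viscosity for water at 25 °C; the measured D below is a made-up example):

```python
import math

def hydrodynamic_diameter(D, T=298.15, eta=8.9e-4):
    """Hydrodynamic diameter (m) from a PCS-measured diffusion coefficient
    D (m^2/s) via Stokes-Einstein: d = kB*T / (3*pi*eta*D).
    Defaults assume water at 25 C (eta in Pa*s)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (3.0 * math.pi * eta * D)

# Hypothetical measurement: D = 4.3e-12 m^2/s gives a diameter near 114 nm,
# squarely in the colloidal range discussed above.
d = hydrodynamic_diameter(4.3e-12)
```

PSA instead infers size from sedimentation velocity under centrifugation, which is why the two techniques can show small systematic offsets.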
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare with observations and make scientific inferences, statistical analysis on large simulation datasets, e.g., finding halos or obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets of modern simulations, which are prohibitively large. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaled to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU boosting, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ~10^4-10^5, and reveals the trade-offs between memory, running time, and number of halos. Our experiments show that our tool can scale to datasets with up to ~10^12 particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
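The streaming idea, treating the most massive halos as heavy hitters over a stream of particle cell IDs, can be illustrated with a classic one-pass sketch such as Misra-Gries. This is a simpler stand-in for the count-sketch-style summaries such tools use, not the paper's algorithm:

```python
def misra_gries(stream, k):
    """Misra-Gries heavy-hitters sketch: one pass, O(k) memory.
    Guarantees that any item occurring more than len(stream)/k times
    survives in the returned counter dictionary."""
    counters = {}
    for cell in stream:
        if cell in counters:
            counters[cell] += 1
        elif len(counters) < k - 1:
            counters[cell] = 1
        else:
            # Decrement all counters; drop any that reach zero.
            for c in list(counters):
                counters[c] -= 1
                if counters[c] == 0:
                    del counters[c]
    return counters
```

Replacing the stream of items with a stream of grid-cell IDs visited by particles makes the surviving counters point at the densest cells, i.e., candidate halo centers.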
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near-continuum regime, where the Knudsen number (Kn), characterizing the degree of rarefaction, becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator, instead of performing the binary collisions employed by the DSMC method, integrates continuous stochastic processes for the phase space evolution in time. This allows for time steps and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general-purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
Li, Zulai; Wang, Pengfei; Shan, Quan; Jiang, Yehua; Wei, He; Tan, Jun
2018-06-11
In this work, tungsten carbide particle (WCp, spherical and irregular)-reinforced iron matrix composites were manufactured utilizing a liquid sintering technique. The mechanical properties and fracture mechanism of the WCp/iron matrix composites were investigated theoretically and experimentally. Crack schematics and fracture simulation diagrams of the WCp/iron matrix composites were summarized, indicating that micro-cracks initiated at the interface for both spherical and irregular WCp/iron matrix composites. However, the irregular WCp had a tendency to become spherical WCp. The micro-cracks then expanded into a wide macro-crack at the interface, leading to final failure of the composites. In comparison with the spherical WCp, the irregular WCp were prone to break due to stress concentration, making them susceptible to brittle cracking. The study of the fracture mechanisms of WCp/iron matrix composites may provide theoretical guidance for the design and engineering application of particle-reinforced composites.
Mass production of shaped particles through vortex ring freezing
An, Duo; Warning, Alex; Yancey, Kenneth G.; Chang, Chun-Ti; Kern, Vanessa R.; Datta, Ashim K.; Steen, Paul H.; Luo, Dan; Ma, Minglin
2016-01-01
A vortex ring is a torus-shaped fluidic vortex. During its formation, the fluid passes through a rich variety of intriguing geometrical intermediates, from spherical to toroidal. Here we show that these constantly changing intermediates can be ‘frozen' at controlled time points into particles with various unusual and unprecedented shapes. These novel vortex-ring-derived particles are mass-produced by employing a simple and inexpensive electrospraying technique, with their sizes well controlled from hundreds of microns to millimetres. Guided further by theoretical analyses and a laminar multiphase fluid flow simulation, we show that this freezing approach is applicable to a broad range of materials, from organic polysaccharides to inorganic nanoparticles. We demonstrate the unique advantages of these vortex-ring-derived particles in several applications, including cell encapsulation, three-dimensional cell culture, and cell-free protein production. Moreover, compartmentalization and ordered structures composed of these novel particles are all achieved, creating opportunities to engineer more sophisticated hierarchical materials. PMID:27488831
Single file diffusion into a semi-infinite tube.
Farrell, Spencer G; Brown, Aidan I; Rutenberg, Andrew D
2015-11-23
We investigate single file diffusion (SFD) of large particles entering a semi-infinite tube, such as luminal diffusion of proteins into microtubules or flagella. While single-file effects have no impact on the evolution of particle density, we report significant single-file effects for individually tracked tracer particle motion. Both exact and approximate ordering statistics of particles entering semi-infinite tubes agree well with our stochastic simulations. Considering initially empty semi-infinite tubes, with particles entering at one end starting from an initial time t = 0, tracked particles are initially super-diffusive after entering the system, but asymptotically diffusive at later times. For finite time intervals, the ratio of the net displacement of individual single-file particles to the average displacement of untracked particles is reduced at early times and enhanced at later times. When each particle is numbered, from the first to enter (n = 1) to the most recent (n = N), we find good scaling collapse of this distance ratio for all n. Experimental techniques that track individual particles, or local groups of particles, such as photo-activation or photobleaching of fluorescently tagged proteins, should be able to observe these single-file effects. However, biological phenomena that depend on local concentration, such as flagellar extension or luminal enzymatic activity, should not exhibit single-file effects.
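A minimal lattice sketch of the setup, hard-core particles hopping on a semi-infinite 1D lattice with injection at the open end, is easy to write down. This is illustrative only; the paper's stochastic simulations and ordering statistics are more detailed:

```python
import random

def sfd_step(positions, p_enter=0.5):
    """One sweep of single-file diffusion on a semi-infinite lattice
    (sites 0, 1, 2, ...). Each particle attempts a +/-1 hop, accepted only
    if the target site is empty, so particles can never pass each other.
    A new particle enters at site 0 with probability p_enter when free."""
    occupied = set(positions)
    for i in random.sample(range(len(positions)), len(positions)):
        x = positions[i]
        nx = x + random.choice((-1, 1))
        if nx >= 0 and nx not in occupied:
            occupied.discard(x)
            occupied.add(nx)
            positions[i] = nx
    if 0 not in occupied and random.random() < p_enter:
        positions.append(0)  # injection at the open end
    return positions
```

Tracking `positions[n]` for a fixed entry index n over many sweeps is exactly the tagged-particle statistic for which the single-file effects appear.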
NASA Astrophysics Data System (ADS)
Liu, Jie; Wang, Wilson; Ma, Fai
2011-07-01
System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
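The regularization step described above, redrawing from a continuous kernel density after resampling so that the particles regain diversity, can be sketched as follows. This is a generic regularized resampling step with a Silverman-rule bandwidth, not the authors' full RAPF:

```python
import numpy as np

def regularized_resample(particles, weights, rng):
    """Resample 1D particles by importance weight, then jitter each draw
    with a Gaussian kernel whose bandwidth follows Silverman's rule --
    the 'regularization' that counters sample impoverishment."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)     # multinomial resampling
    resampled = particles[idx]
    h = 1.06 * resampled.std() * n ** (-1 / 5)  # Silverman bandwidth (1D)
    return resampled + h * rng.standard_normal(n)
```

After this step every particle is distinct even if the resampling drew many copies of the same ancestor, which is the failure mode when process noise is small.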
Lorentz boosted frame simulation technique in Particle-in-cell methods
NASA Astrophysics Data System (ADS)
Yu, Peicheng
In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) with the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are comparable. This technique has the potential to reduce the computational needs of an LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This produces unphysical noise that dwarfs the real physical processes. We first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss methods to eliminate the NCI, including a hybrid FFT/finite difference field solver. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s).
The NCI for this algorithm is analyzed, and it is shown that it can be eliminated using the same strategies that were used for the hybrid FFT/finite difference solver. This scheme also requires a current correction and filtering, which require FFTs; however, we show that in this case the FFTs can be done locally on each parallel partition. We also describe how the use of the hybrid FFT/finite difference or hybrid higher order/second order finite difference methods permits combining the Lorentz boosted frame simulation technique with another "speed up" technique, called the quasi-3D algorithm, to gain unprecedented speed up for LWFA simulations. In the quasi-3D algorithm the fields and currents are defined on an r-z PIC grid and expanded in azimuthal harmonics. The expansion is truncated with only a few modes, so it has computational needs similar to those of a 2D r-z PIC code. We show that the NCI has similar properties in r-z as in z-x slab geometry, and that the same strategies for eliminating the NCI in Cartesian geometry can be effective for the quasi-3D algorithm, leading to the possibility of unprecedented speed up. We also describe a new code called UPIC-EMMA that is based on a fully spectral (FFT) solver. The new code includes the implementation of a moving antenna that can launch lasers in the boosted frame. We also describe how the new hybrid algorithms were implemented in OSIRIS. Examples of boosted frame LWFA simulations using both UPIC-EMMA and OSIRIS are given, including comparisons against lab frame results. We also describe how to efficiently obtain the boosted frame simulation data needed to generate the transformed lab frame data, as well as how to use a moving window in the boosted frame. The NCI is also a major issue for modeling relativistic shocks with the PIC algorithm. In relativistic shock simulations, two counter-propagating plasmas drifting at relativistic speeds collide against each other.
We show that the strategies for eliminating the NCI developed in this dissertation enable such simulations to run for much longer simulation times, which should open a path for major advances in relativistic shock research. (Abstract shortened by ProQuest.)
NASA Astrophysics Data System (ADS)
Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.
2016-03-01
Simulations of strongly coupled (i.e., high-mass-loading) fluid-particle flows in vertical channels are performed with the purpose of understanding the fundamental physics of wall-bounded multiphase turbulence. The exact Reynolds-averaged (RA) equations for high-mass-loading suspensions are presented, and the unclosed terms that are retained in the context of fully developed channel flow are evaluated in an Eulerian-Lagrangian (EL) framework for the first time. A key distinction between the RA formulation presented in the current work and previous derivations of multiphase turbulence models is the partitioning of the particle velocity fluctuations into spatially correlated and uncorrelated components, used to define the components of the particle-phase turbulent kinetic energy (TKE) and granular temperature, respectively. The adaptive spatial filtering technique developed in our previous work for homogeneous flows [J. Capecelatro, O. Desjardins, and R. O. Fox, "Numerical study of collisional particle dynamics in cluster-induced turbulence," J. Fluid Mech. 747, R2 (2014)] is shown to accurately partition the particle velocity fluctuations at all distances from the wall. Strong segregation in the components of granular energy is observed, with the largest values of particle-phase TKE associated with clusters falling near the channel wall, while maximum granular temperature is observed at the center of the channel. The anisotropy of the Reynolds stresses both near the wall and far away is found to be a crucial component for understanding the distribution of the particle-phase volume fraction. In Part II of this paper, results from the EL simulations are used to validate a multiphase Reynolds-stress turbulence model that correctly predicts the wall-normal distribution of the two-phase turbulence statistics.
Research on bulbous bow optimization based on the improved PSO algorithm
NASA Astrophysics Data System (ADS)
Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang
2017-08-01
In order to reduce the total resistance of a hull, an optimization framework for bulbous bow design was presented. The total resistance in calm water was selected as the objective function, and the overset mesh technique was used for mesh generation. The RANS method was used to calculate the total resistance of the hull. In order to improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique was introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm was proposed to set up the optimization model. After a series of optimization analyses, the optimal hull form was found. It can be concluded that the simulation-based design framework built in this paper is a promising method for bulbous bow optimization.
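As a minimal sketch of the kind of inertia-weight PSO that "improved PSO" variants typically build on (the abstract does not specify the authors' exact IPSO modifications; the linearly decreasing inertia weight and the toy objective here are illustrative assumptions, not the paper's RANS resistance evaluation):

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=200, seed=0):
    """Minimise `objective` with a basic PSO using a linearly
    decreasing inertia weight (a common baseline that IPSO
    variants refine)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    c1 = c2 = 2.0                              # cognitive/social weights
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters              # inertia: 0.9 -> 0.4
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = objective(pos[i])
            if v < pbest_val[i]:               # update personal best
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:              # and global best
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In a hull-optimization loop, `objective` would wrap a full RANS resistance computation on the ASD-deformed geometry rather than an analytic test function.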
NASA Astrophysics Data System (ADS)
Hacker, Kirsten
2014-09-01
Seed lasers are employed to improve the temporal coherence of free-electron laser (FEL) light. However, when these seed pulses are short relative to the particle bunch, the noisy, temporally incoherent radiation from the unseeded electrons can overwhelm the coherent, seeded radiation. In this paper, a technique to seed a particle bunch with an external laser is presented in which a new mechanism to improve the contrast between coherent and incoherent free electron laser radiation is employed together with a novel, simplified echo-seeding method. The concept relies on a combination of longitudinal space charge wakes and an echo-seeding technique to make a short, coherent pulse of FEL light together with noise background suppression. Several different simulation codes are used to illustrate the concept with conditions at the soft x-ray free-electron laser in Hamburg, FLASH.
Relaxometry imaging of superparamagnetic magnetite nanoparticles at ambient conditions
NASA Astrophysics Data System (ADS)
Finkler, Amit; Schmid-Lorch, Dominik; Häberle, Thomas; Reinhard, Friedemann; Zappe, Andrea; Slota, Michael; Bogani, Lapo; Wrachtrup, Jörg
We present a novel technique to image superparamagnetic iron oxide nanoparticles via their fluctuating magnetic fields. The detection is based on the nitrogen-vacancy (NV) color center in diamond, which allows optically detected magnetic resonance (ODMR) measurements on its electron spin structure. In combination with an atomic force microscope, this atomic-sized color center maps ambient magnetic fields in a wide frequency range from DC up to several GHz, while retaining a high spatial resolution in the sub-nanometer range. We demonstrate imaging of single 10 nm magnetite nanoparticles using this spin noise detection technique. By fitting simulations (Ornstein-Uhlenbeck process) to the data, we are able to infer additional information on such a particle and its dynamics, such as the attempt frequency and the anisotropy constant. This is of high interest to the proposed application of magnetite nanoparticles as an alternative MRI contrast agent or to the field of particle-aided tumor hyperthermia.
NASA Astrophysics Data System (ADS)
Naqwi, Amir A.; Durst, Franz
1993-07-01
Dual-beam laser measuring techniques are now being used, not only for velocimetry, but also for simultaneous measurements of particle size and velocity in particulate two-phase flows. However, certain details of these optical techniques, such as the effect of Gaussian beam profiles on the accuracy of the measurements, need to be further explored. To implement innovative improvements, a general analytic framework is needed in which performances of various dual-beam instruments could be quantitatively studied and compared. For this purpose, the analysis of light scattering in a generalized dual-wave system is presented in this paper. The present simulation model provides a basis for studying effects of nonplanar beam structures of incident waves, taking into account arbitrary modes of polarization. A polarizer is included in the receiving optics as well. The peculiar aspects of numerical integration of scattered light over circular, rectangular, and truncated circular apertures are also considered.
Preliminary results of a prototype C-shaped PET designed for an in-beam PET system
NASA Astrophysics Data System (ADS)
Kim, Hyun-Il; Chung, Yong Hyun; Lee, Kisung; Kim, Kyeong Min; Kim, Yongkwon; Joung, Jinhun
2016-06-01
Positron emission tomography (PET) can be utilized in particle beam therapy to verify the dose distribution of the target volume as well as the accuracy of the treatment. We present an in-beam PET scanner that can be integrated into a particle beam therapy system. The proposed PET scanner consisted of 14 detector modules arranged in a C-shape to avoid blockage of the particle beam line by the detector modules. Each detector module was composed of a 9×9 array of 4.0 mm×4.0 mm×20.0 mm LYSO crystals optically coupled to four 29-mm-diameter PMTs using the photomultiplier-quadrant-sharing (PQS) technique. In this study, a Geant4 Application for Tomographic Emission (GATE) simulation study was conducted to design a C-shaped PET scanner and then experimental evaluation of the proposed design was performed. The spatial resolution and sensitivity were measured according to NEMA NU2-2007 standards and were 6.1 mm and 5.61 cps/kBq, respectively, which is in good agreement with our simulation, with an error rate of 12.0%. Taken together, our results demonstrate the feasibility of the proposed C-shaped in-beam PET system, which we expect will be useful for measuring dose distribution in particle therapy.
Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method
NASA Astrophysics Data System (ADS)
Verhoff, Ashley Marie
Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. 
The relative errors between MPC and full DSMC results are greatly reduced as a direct result of these improvements. Next, a new parameter for detecting rotational nonequilibrium effects is proposed and shown to offer advantages over other continuum breakdown parameters, achieving further accuracy gains. Lastly, the capabilities of the MPC method are extended to accommodate multiple chemical species in rotational nonequilibrium, each of which is allowed to equilibrate independently, enabling application of the MPC method to more realistic atmospheric flows.
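As a hedged illustration of how a hybrid particle-continuum method decides where DSMC is required, the sketch below evaluates a gradient-length-local Knudsen number on a 1D field and flags cells exceeding a threshold. This is the classic style of continuum breakdown parameter, not the dissertation's new rotational-nonequilibrium parameter; names, threshold, and geometry are illustrative:

```python
def breakdown_flags(q, dx, mfp, kn_max=0.05):
    """Gradient-length Knudsen number on a 1D field q:
    Kn_GL = mfp * |dQ/dx| / Q. Cells with Kn_GL > kn_max are
    flagged for DSMC; the rest stay in the Navier-Stokes (CFD)
    domain."""
    flags = []
    n = len(q)
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        grad = abs(q[hi] - q[lo]) / ((hi - lo) * dx)   # central difference
        kn = mfp * grad / abs(q[i])
        flags.append(kn > kn_max)
    return flags
```

A real implementation would evaluate this on several fields (density, temperature, velocity magnitude) and take the maximum before assigning cells to the DSMC or CFD module.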
High-temperature LDV seed particle development
NASA Technical Reports Server (NTRS)
Frish, Michael B.; Pierce, Vicky G.
1989-01-01
The feasibility of developing a method for making monodisperse, unagglomerated spherical particles greater than 50 nm in diameter was demonstrated. Carbonaceous particles were made by pyrolyzing ethylene with a pulsed CO2 laser, thereby creating a non-equilibrium mixture of carbon, hydrogen, hydrocarbon vapors, and unpyrolyzed ethylene. Via a complex series of reactions, the carbon and hydrocarbon vapors quickly condensed into the spherical particles. By cooling and dispersing them in a supersonic expansion immediately after their creation, the hot newly-formed spheres were prevented from colliding and coalescing, thus preventing the problem of agglomeration which has plagued other investigators studying laser-stimulated particle formation. The cold particles could be left suspended in the residual gases indefinitely without agglomerating. Their uniform sizes and unagglomerated nature were visualized by collecting the particles on filters that were subsequently examined using electron microscopy. It was found that the mean particle size can be coarsely controlled by varying the initial ethylene pressure, and can be finely controlled by varying the fluence (energy per unit area) with which the laser irradiates the gas. The motivating application for this research was to manufacture particles that could be used as laser Doppler velocimetry (LDV) seeds in high-temperature, high-speed flows. Though the particles made in this program will not evaporate until heated to about 3000 K, and thus could serve as LDV seeds in some applications, they are not ideal when the hot atmosphere is also oxidizing. In that situation, ceramic materials would be preferable. Research performed elsewhere has demonstrated that selected ceramic materials can be manufactured by laser pyrolysis of appropriate supply gases. It is anticipated that, when the same gases are used in conjunction with the rapid cooling technique, unagglomerated spherical ceramic particles can be made with little difficulty. 
Such particles would also be valuable to manufacturers of ceramic or abrasive products, and this technique may find its greatest commercial potential in those areas.
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.; Smart, A. E.
1979-01-01
A laser transit anemometer measured a two-dimensional vector velocity, using the transit time of scattering particles between two focused and parallel laser beams. The objectives were: (1) the determination of the concentration levels and light scattering efficiencies of naturally occurring, submicron particles in the NASA/Ames unitary wind tunnel and (2) the evaluation based on these measured data of a laser transit anemometer with digital correlation processing for nonintrusive velocity measurement in this facility. The evaluation criteria were the speeds at which point velocity measurements could be realized with this technique (as determined from computer simulations) for given accuracy requirements.
HTS cryogenic current comparator for non-invasive sensing of charged-particle beams
NASA Astrophysics Data System (ADS)
Hao, L.; Gallop, J. C.; Macfarlane, J. C.; Carr, C.
2002-03-01
The principle of the superconducting cryogenic direct-current comparator (CCC) is applied to the non-invasive sensing of charged-particle beams (ions, electrons). With the use of HTS components it is feasible to envisage applications, for example, in precision mass spectrometry, in real-time monitoring of ion-beam implantation currents and for the determination of the Faraday fundamental constant. We have developed a novel current concentrating technique using HTS thick-film material, to increase the sensitivity of the CCC. Recent simulations and experimental measurements of the flux and current concentration ratios, frequency response and linearity of a prototype HTS-CCC operating at 77 K are described.
NASA Astrophysics Data System (ADS)
Liu, Kai; Balachandar, S.
2017-11-01
We perform a series of Euler-Lagrange direct numerical simulations (DNS) for multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions, induced by the perturbations from neighboring particles, may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlapping of solid spherical particles. The simulation results indicate that the PIEP model captures a richer, more complex motion of the dispersed phase (droplets and particles). Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.
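The "simple collision model ... to avoid unphysical overlapping" could look like the following hard-sphere sketch for equal-mass, perfectly elastic spheres (the abstract does not give the authors' actual implementation, so this is an illustrative stand-in):

```python
def resolve_collisions(x, v, diameter):
    """One hard-sphere collision pass. For each overlapping pair of
    equal-mass particles: if they are approaching, exchange the
    velocity components along the line of centres (perfectly
    elastic), then push the spheres apart to exact contact.
    x, v: lists of [x, y, z] position/velocity; mutated in place."""
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            d = [x[j][k] - x[i][k] for k in range(3)]
            dist = sum(c * c for c in d) ** 0.5
            if dist == 0.0 or dist >= diameter:
                continue                      # no overlap
            nvec = [c / dist for c in d]      # unit normal i -> j
            # relative approach speed along the line of centres
            vrel = sum((v[i][k] - v[j][k]) * nvec[k] for k in range(3))
            if vrel > 0.0:                    # approaching: elastic swap
                for k in range(3):
                    v[i][k] -= vrel * nvec[k]
                    v[j][k] += vrel * nvec[k]
            # remove residual overlap symmetrically
            push = 0.5 * (diameter - dist)
            for k in range(3):
                x[i][k] -= push * nvec[k]
                x[j][k] += push * nvec[k]
```

A production DNS code would restrict the O(n²) pair loop with a cell list and use the actual restitution model, but the impulse and de-overlap steps are the essential ingredients.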
NASA Astrophysics Data System (ADS)
Perera, M. Tharanga D.
Microstructure is key to understanding rheological behaviors of flowing particulate suspensions. During the past decade, Stokesian Dynamics simulations have been the dominant method of determining suspension microstructure. Structure results obtained numerically reveal that an anisotropic structure is formed under high Peclet (Pe) number conditions. Researchers have used various experimental techniques such as small angle neutron scattering (SANS) and light scattering methods to validate microstructure. This work outlines an experimental technique based on confocal microscopy to study the microstructure of a colloidal suspension in an index-matched fluid flowing in a microchannel. High resolution scans determining individual particle locations in suspensions of 30-50 vol % yield quantitative results of the local microstructure in the form of the pair distribution function, g(r). From these experimentally determined g(r), the effects of shear rate, quantified by the Peclet number as a ratio of shear and Brownian stress, on the suspension viscosity and normal stress follow those seen in macroscopic rheological measurements and simulations. It is generally believed that shear thickening behavior of colloidal suspensions is driven by the formation of hydroclusters. From measurements of particle locations, hydroclusters are identified. The number of hydroclusters grows exponentially with increasing Pe, and the onset of shear thickening is driven by the increase in formation of clusters having 5-8 particles. At higher Pe, we notice the emergence of clusters of 12 or more particles. The internal structure of these hydroclusters has been investigated, and there is some evidence that particles internal to hydroclusters preferentially align along the 45° and 135° axes. Beyond observations of bulk suspension behavior, the influence of boundaries on suspension microstructure is also investigated. 
Experiments were performed for suspensions flowing over smooth walls, made of glass coverslips, and over rough walls having a high density coating of particles. These results show that there is more order in the structure near smooth boundaries, while near rough boundaries the structure is similar to that found in the bulk. The relative viscosity and normal stress differences also indicate that boundaries have an effect as far as 6 particle diameters away from the boundary. Finally, we investigate the evolution of the microstructure in a model porous medium and find that such boundary effects come into play in real process flows, which the confocal microscopy technique also allows us to measure directly. We have investigated how the microstructure evolves upstream and downstream in a porous medium. We observe more structure in a high volume fraction suspension and anisotropic behavior in regions where shear from the walls of the posts dominates. In other cases, a mixed flow behavior is observed due to collisions between pore surfaces and other particles, resulting in a deviation from flow streamlines.
A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations
NASA Astrophysics Data System (ADS)
Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica
2012-02-01
We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted or rejected in a Monte Carlo fashion, as opposed to moving particles following a stochastic equation of motion, nonphysical movements (e.g., violation of a hard-core assumption) are not possible (these moves have zero acceptance). Further, the absence of a stochastic "noise" term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the time step is independent of the magnitude of the interaction forces, much longer time steps can be employed than in Brownian dynamics. We discuss the applications of this scheme for dynamic self-assembly of photo-switchable nanoparticles and dynamical problems in polymeric systems.
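A minimal sketch of one such off-lattice trial move, with Glauber acceptance and zero acceptance for hard-core violations; all parameters and helper names are illustrative, not the authors' code:

```python
import math
import random

def kmc_step(positions, energy, step, beta, hard_core, rng):
    """One off-lattice KMC displacement trial. A trial move that
    violates the hard-core constraint is rejected outright (zero
    acceptance); otherwise it is accepted with the Glauber
    probability 1 / (1 + exp(beta * dE)). Returns True iff the
    move was accepted. `positions` is mutated in place."""
    i = rng.randrange(len(positions))
    old = positions[i]
    new = tuple(c + rng.uniform(-step, step) for c in old)
    # hard-core check against all other particles
    for j, p in enumerate(positions):
        if j != i and sum((a - b) ** 2 for a, b in zip(new, p)) < hard_core ** 2:
            return False                      # zero acceptance
    e_old = energy(positions)
    positions[i] = new
    d_e = energy(positions) - e_old
    if rng.random() < 1.0 / (1.0 + math.exp(beta * d_e)):
        return True                           # accepted
    positions[i] = old                        # rejected: restore
    return False
```

Note that, as the abstract argues, the acceptance test makes the hard-core constraint an exact invariant: no configuration with overlapping particles can ever be produced.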
Transient dynamics of vulcanian explosions and column collapse.
Clarke, A B; Voight, B; Neri, A; Macedonio, G
2002-02-21
Several analytical and numerical eruption models have provided insight into volcanic eruption behaviour, but most address plinian-type eruptions where vent conditions are quasi-steady. Only a few studies have explored the physics of short-duration vulcanian explosions with unsteady vent conditions and blast events. Here we present a technique that links unsteady vent flux of vulcanian explosions to the resulting dispersal of volcanic ejecta, using a numerical, axisymmetric model with multiple particle sizes. We use observational data from well documented explosions in 1997 at the Soufrière Hills volcano in Montserrat, West Indies, to constrain pre-eruptive subsurface initial conditions and to compare with our simulation results. The resulting simulations duplicate many features of the observed explosions, showing transitional behaviour where mass is divided between a buoyant plume and hazardous radial pyroclastic currents fed by a collapsing fountain. We find that leakage of volcanic gas from the conduit through surrounding rocks over a short period (of the order of 10 hours) or retarded exsolution can dictate the style of explosion. Our simulations also reveal the internal plume dynamics and particle-size segregation mechanisms that may occur in such eruptions.
NASA Astrophysics Data System (ADS)
Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt
2015-05-01
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
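For readers unfamiliar with AdResS, the resolution-switching function and pairwise force interpolation can be sketched as follows; the cos² crossover is the form commonly used in the AdResS literature, and the spherical geometry parameters here are illustrative assumptions:

```python
import math

def adress_weight(r, r_at, d_hy):
    """AdResS resolution function w(r): 1 inside the atomistic
    region (r < r_at), 0 in the coarse-grained reservoir, and a
    smooth cos^2 crossover across the hybrid layer of width d_hy."""
    if r < r_at:
        return 1.0
    if r > r_at + d_hy:
        return 0.0
    return math.cos(0.5 * math.pi * (r - r_at) / d_hy) ** 2

def adress_force(f_at, f_cg, w_i, w_j):
    """Pairwise force interpolation between resolutions:
    F_ij = w_i*w_j * F_atomistic + (1 - w_i*w_j) * F_coarse."""
    lam = w_i * w_j
    return [lam * a + (1.0 - lam) * c for a, c in zip(f_at, f_cg)]
```

Solvent molecules crossing the hybrid layer thus change resolution on the fly, which is what allows the atomistic hydration shell around the protein to exchange particles with the coarse-grained reservoir.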
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior, similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
NASA Technical Reports Server (NTRS)
Clem, Michelle M.; Woike, Mark R.; Abdul-Aziz, Ali
2014-01-01
The Aeronautical Sciences Project under NASA's Fundamental Aeronautics Program is interested in the development of novel measurement technologies, such as optical surface measurements for the in situ health monitoring of critical constituents of the internal flow path. In situ health monitoring has the potential to detect flaws, i.e., cracks in key components, such as engine turbine disks, before the flaws lead to catastrophic failure. The present study aims to further validate and develop an optical strain measurement technique to measure the radial growth and strain field of an already cracked disk, mimicking the geometry of a sub-scale turbine engine disk, under loaded conditions in the NASA Glenn Research Center's High Precision Rotordynamics Laboratory. The technique offers potential fault detection by imaging an applied high-contrast random speckle pattern under unloaded and loaded conditions with a CCD camera. Spinning the cracked disk at high speeds (loaded conditions) induces an external load, resulting in a radial growth of the disk of approximately 50.0 µm in the flawed region and hence a localized strain field. When imaging the cracked disk under static conditions, the disk will be undistorted; however, during rotation the cracked region will grow radially, thus causing the applied particle pattern to be 'shifted'. The resulting particle displacements between the two images are measured using the two-dimensional cross-correlation algorithms implemented in standard Particle Image Velocimetry (PIV) software to track the disk growth, which facilitates calculation of the localized strain field. A random particle distribution is adhered onto the surface of the cracked disk and two bench top experiments are carried out to evaluate the technique's ability to measure the induced particle displacements. 
The disk is shifted manually using a translation stage equipped with a fine micrometer, and a hotplate is used to induce thermal growth of the disk, causing the particles to become shifted. For both experiments, reference and test images are acquired before and after the induced shifts, respectively, and then processed using PIV software. The controlled manual translation of the disk resulted in detection of the particle displacements accurate to 1.75% of full scale, and the thermal expansion experiment resulted in successful detection of the disk's thermal growth as compared with the calculated thermal expansion results. After validation of the technique through the induced shift experiments, the technique is implemented in the Rotordynamics Lab for preliminary assessment in a simulated engine environment. Findings and plans for future work to improve upon the results are discussed in the paper.
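The two-dimensional cross-correlation step at the heart of this technique can be sketched with an FFT-based correlation. This is a generic illustration of how standard PIV software recovers a displacement peak between a reference and a test image, not the specific software used in the study; function name and image size are illustrative:

```python
import numpy as np

def piv_shift(ref, test):
    """Estimate the integer pixel displacement between two speckle
    images via FFT cross-correlation. Returns (dy, dx) such that
    shifting `ref` by that amount best matches `test`."""
    f = np.fft.fft2(ref)
    g = np.fft.fft2(test)
    corr = np.fft.ifft2(f.conj() * g).real      # circular cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    shifts = []
    for p, n in zip(peak, corr.shape):
        shifts.append(p - n if p > n // 2 else p)
    return tuple(shifts)
```

Real PIV software refines the peak location to sub-pixel accuracy (e.g., with a Gaussian fit around the maximum) and evaluates the correlation per interrogation window, which is what turns the displacement field into a localized strain field.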
Measurements of Deposition, Lung Surface Area and Lung Fluid for Simulation of Inhaled Compounds
Fröhlich, Eleonore; Mercuri, Annalisa; Wu, Shengqian; Salar-Behzadi, Sharareh
2016-01-01
Modern strategies in drug development employ in silico techniques in the design of compounds as well as estimations of pharmacokinetics, pharmacodynamics and toxicity parameters. The quality of the results depends on software algorithm, data library and input data. Compared to simulations of absorption, distribution, metabolism, excretion, and toxicity of oral drug compounds, relatively few studies report predictions of pharmacokinetics and pharmacodynamics of inhaled substances. For calculation of the drug concentration at the absorption site, the pulmonary epithelium, physiological parameters such as lung surface and distribution volume (lung lining fluid) have to be known. These parameters can only be determined by invasive techniques and by postmortem studies. Very different values have been reported in the literature. This review addresses the state of software programs for simulation of orally inhaled substances and focuses on problems in the determination of particle deposition, lung surface and of lung lining fluid. The different surface areas for deposition and for drug absorption are difficult to include directly into the simulations. As drug levels are influenced by multiple parameters the role of single parameters in the simulations cannot be identified easily. PMID:27445817
Comparison of heavy-ion- and electron-beam upset data for GaAs SRAMs. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flesner, L.D.; Zuleeg, R.; Kolasinski, W.A.
1992-07-16
We report the results of experiments designed to evaluate the extent to which focused electron-beam pulses simulate energetic ion upset phenomena in GaAs memory circuits fabricated by the McDonnell Douglas Astronautics Company. The results of two experimental methods were compared: irradiation by heavy-ion particle beams, and upset mapping using focused electron pulses. Linear energy transfer (LET) thresholds and upset cross sections are derived from the data for both methods. A comparison of results shows good agreement, indicating that for these circuits electron-beam pulse mapping is a viable simulation technique.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Our optimized hybrid parallel implementation of GTC, using MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems and scales efficiently to tens of thousands of cores.
Direct Simulation of Reentry Flows with Ionization
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1989-01-01
The Direct Simulation Monte Carlo (DSMC) method is applied in this paper to the study of rarefied, hypersonic, reentry flows. The assumptions and simplifications involved with the treatment of ionization, free electrons and the electric field are investigated. A new method is presented for the calculation of the electric field and handling of charged particles with DSMC. In addition, a two-step model for electron impact ionization is implemented. The flow field representing a 10 km/sec shock at an altitude of 65 km is calculated. The effects of the new modeling techniques on the calculation results are presented and discussed.
NASA Astrophysics Data System (ADS)
Ramos-Méndez, José; Schuemann, Jan; Incerti, Sebastien; Paganetti, Harald; Schulte, Reinhard; Faddegon, Bruce
2017-08-01
Flagged uniform particle splitting was implemented with two methods to improve the computational efficiency of Monte Carlo track structure simulations with TOPAS-nBio by enhancing the production of secondary electrons in ionization events. In method 1 the Geant4 kernel was modified. In method 2 Geant4 was not modified. In both methods a unique flag number assigned to each new split electron was inherited by its progeny, permitting reclassification of the split events as if produced by independent histories. Computational efficiency and accuracy were evaluated for simulations of 0.5-20 MeV protons and 1-20 MeV u⁻¹ carbon ions for three endpoints: (1) mean of the ionization cluster size distribution, (2) mean number of DNA single-strand breaks (SSBs) and double-strand breaks (DSBs) classified with DBSCAN, and (3) mean number of SSBs and DSBs classified with a geometry-based algorithm. For endpoint (1), simulation efficiency was 3 times lower when splitting electrons generated by direct ionization events of primary particles than when splitting electrons generated by the first ionization events of secondary electrons. The latter technique was selected for further investigation. The following results are for method 2, with relative efficiencies about 4.5 times lower for method 1. For endpoint (1), relative efficiency at 128 split electrons approached maximum, increasing with energy from 47.2 ± 0.2 to 66.9 ± 0.2 for protons, decreasing with energy from 51.3 ± 0.4 to 41.7 ± 0.2 for carbon. For endpoint (2), relative efficiency increased with energy, from 20.7 ± 0.1 to 50.2 ± 0.3 for protons, 15.6 ± 0.1 to 20.2 ± 0.1 for carbon. For endpoint (3), relative efficiency increased with energy, from 31.0 ± 0.2 to 58.2 ± 0.4 for protons, 23.9 ± 0.1 to 26.2 ± 0.2 for carbon. Simulation results with and without splitting agreed within 1% (2 standard deviations) for endpoints (1) and (2), within 2% (1 standard deviation) for endpoint (3). 
In conclusion, standard particle splitting variance reduction techniques can be successfully implemented in Monte Carlo track structure codes.
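The flag-inheritance bookkeeping described above can be sketched in a few lines: each split copy of a secondary electron carries a reduced statistical weight and a unique flag that all of its progeny inherit, so split events can later be regrouped per flag as if they came from independent histories. The record layout and helper names below are hypothetical, not TOPAS-nBio's or Geant4's API.

```python
def split_secondaries(event_weight, n_split):
    """Replace one secondary electron by n_split copies, each with
    weight event_weight / n_split and a unique flag. The flag is what
    later allows reclassifying split events as independent histories."""
    return [
        {"weight": event_weight / n_split, "flag": i}
        for i in range(n_split)
    ]

def inherit(parent, **changes):
    """Progeny keep the parent's flag (and weight, unless changed)."""
    child = dict(parent)
    child.update(changes)
    return child
```

Summing the copies' weights recovers the original event weight, so the estimator stays unbiased while the number of sampled tracks grows.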
A collision scheme for hybrid fluid-particle simulation of plasmas
NASA Astrophysics Data System (ADS)
Nguyen, Christine; Lim, Chul-Hyun; Verboncoeur, John
2006-10-01
Desorption phenomena at the wall of a tokamak can lead to the introduction of impurities at the edge of a thermonuclear plasma. In particular, the use of carbon as a constituent of the tokamak wall, as planned for ITER, requires the study of carbon and hydrocarbon transport in the plasma, including understanding of collisional interaction with the plasma. These collisions can result in new hydrocarbons, hydrogen, secondary electrons and so on. Computational modeling is a primary tool for studying these phenomena. XOOPIC [1] and OOPD1 are widely used computer modeling tools for the simulation of plasmas. Both are particle-type codes. Particle simulation gives more kinetic information than fluid simulation, but more computation time is required. In order to reduce this disadvantage, hybrid simulation has been developed and applied to the modeling of collisions. Present particle simulation tools such as XOOPIC and OOPD1 employ a Monte Carlo model for the collisions between particle species and a neutral background gas defined by its temperature and pressure. In fluid-particle hybrid plasma models, collisions include combinations of particle and fluid interactions categorized by projectile-target pairing: particle-particle, particle-fluid, and fluid-fluid. For verification of this hybrid collision scheme, we compare simulation results to analytic solutions for classical plasma models. [1] Verboncoeur et al., Comput. Phys. Comm. 87, 199 (1995).
PENTACLE: Parallelized particle-particle particle-tree code for planet formation
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori
2017-10-01
We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond it. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ∼ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥10⁶ yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
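The particle-particle particle-tree idea is a force split at a cut-off radius: pairwise gravity from neighbours inside the radius is integrated accurately (here, by a fourth-order Hermite scheme), while everything beyond it is handed to a Barnes-Hut tree. The split itself can be sketched as below, with both parts summed directly for clarity and G = 1; the function name and hard split (no smooth changeover kernel) are illustrative simplifications, not PENTACLE's implementation.

```python
import math

def split_accelerations(pos, mass, r_cut):
    """Partition the direct-sum gravitational acceleration on each
    particle into a 'near' part (pairs within r_cut, to be integrated
    with the Hermite scheme) and a 'far' part (pairs beyond r_cut,
    to be approximated by the tree). near + far is independent of r_cut."""
    n = len(pos)
    near = [[0.0, 0.0, 0.0] for _ in range(n)]
    far = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r = math.sqrt(sum(c * c for c in d))
            a = [mass[j] * c / r ** 3 for c in d]  # G = 1
            bucket = near if r < r_cut else far
            for k in range(3):
                bucket[i][k] += a[k]
    return near, far
```

Because the split is exact, the total acceleration is the same for any r_cut; only the accuracy of the far-field approximation and the cost of the near-field integration shift, which is why the error is governed by the ratio Δt/R̃_cut.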
Analysis, simulation and visualization of 1D tapping via reduced dynamical models
NASA Astrophysics Data System (ADS)
Blackmore, Denis; Rosato, Anthony; Tricoche, Xavier; Urban, Kevin; Zou, Luo
2014-04-01
A low-dimensional center-of-mass dynamical model is devised as a simplified means of approximately predicting some important aspects of the motion of a vertical column comprised of a large number of particles subjected to gravity and periodic vertical tapping. This model is investigated first as a continuous dynamical system using analytical, simulation and visualization techniques. Then, by employing an approach analogous to that used to approximate the dynamics of a bouncing ball on an oscillating flat plate, it is modeled as a discrete dynamical system and analyzed to determine bifurcations and transitions to chaotic motion along with other properties. The predictions of the analysis are then compared, primarily qualitatively, with visualization and simulation results of the reduced continuous model, and ultimately with simulations of the complete system dynamics.
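The discrete-system approach mentioned above is analogous to the classic ball-on-a-vibrating-plate problem, whose high-bounce (static-wall) approximation updates the impact phase and velocity once per tap. The sketch below iterates a map of that style; the parameter names, scalings, and the specific update rule are illustrative of the genre, not the authors' reduced model.

```python
import math

def bounce_map(phi, v, e=0.9, k=0.3, n_steps=1000):
    """Iterate a high-bounce approximation of the tapped-column /
    bouncing-ball map:
        phi_{n+1} = phi_n + v_n           (plate phase at next impact)
        v_{n+1}   = e * v_n + k * cos(phi_{n+1})   (restitution + kick)
    e is the restitution coefficient and k a dimensionless tap strength;
    varying (e, k) and scanning for period doubling is how bifurcations
    and transitions to chaos are located in such maps."""
    orbit = []
    for _ in range(n_steps):
        phi = (phi + v) % (2.0 * math.pi)
        v = e * v + k * math.cos(phi)
        orbit.append((phi, v))
    return orbit
```

The map is fully deterministic, so the same initial phase and velocity always reproduce the same orbit; sensitivity appears only as exponential divergence of nearby initial conditions in the chaotic regime.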
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
Sun, Xiao-gang; Tang, Hong; Dai, Jing-min
2008-12-01
The problem of determining the particle size range in the visible-infrared region was studied using the independent model algorithm in the total scattering technique. By the analysis and comparison of the accuracy of the inversion results for different R-R distributions, the measurement range of particle size was determined. Meanwhile, the corrected extinction coefficient was used instead of the original extinction coefficient, which could determine the measurement range of particle size with higher accuracy. Simulation experiments illustrate that the particle size distribution can be retrieved very well in the range from 0.05 to 18 μm at a relative refractive index m = 1.235 in the visible-infrared spectral region, and the measurement range of particle size will vary with the wavelength range and relative refractive index. It is feasible to use the constrained least squares inversion method in the independent model to overcome the influence of the measurement error, and the inversion results all remain satisfactory when 1% stochastic noise is added to the light extinction values.
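A constrained least-squares inversion of spectral extinction data of the kind described can be sketched with Tikhonov smoothing plus clipping to non-negative values. The kernel matrix K (mapping the discretized size distribution to extinction at each wavelength), the second-difference penalty, and the regularization weight below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def invert_size_distribution(K, tau, lam=1e-3):
    """Retrieve a particle-size distribution f from extinction data
    tau ≈ K @ f by regularized least squares: a second-difference
    penalty damps the noise amplification typical of this ill-posed
    inversion, and negative values are clipped as a crude
    non-negativity constraint."""
    n = K.shape[1]
    # second-difference operator penalising rough distributions
    D = np.diff(np.eye(n), n=2, axis=0)
    A = np.vstack([K, lam * D])
    b = np.concatenate([tau, np.zeros(D.shape[0])])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(f, 0.0, None)
```

In practice lam is tuned to the noise level: too small and the 1% stochastic noise mentioned in the abstract corrupts the retrieval; too large and genuine structure in the distribution is smoothed away.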
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited to ecological and water quality modelling, where the definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
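The idea of simulating diffusion deterministically by regulating particle separation can be illustrated in one dimension: stretch the particle cloud about its mean so that its variance grows by 2DΔt per step, as Fickian spreading requires. This is a heavily simplified sketch of the concept only; NEST itself works on pairwise neighbourhood separations, not a global rescaling.

```python
def diffuse_by_separation(x, D, dt):
    """One deterministic diffusion step for 1D particle positions x:
    particle separations are stretched about the cloud's mean so the
    sample variance increases by exactly 2*D*dt. No random numbers are
    used, so the result is reproducible, and particle count (i.e. mass)
    and the mean position are conserved."""
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / n
    scale = ((var + 2.0 * D * dt) / var) ** 0.5
    return [mean + scale * (xi - mean) for xi in x]
```

Because each particle keeps its identity through the step, attributes attached to it (such as the cell status used in algal bloom modelling) travel with it, which is the property the abstract highlights.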
Particle Number Dependence of the N-body Simulations of Moon Formation
NASA Astrophysics Data System (ADS)
Sasaki, Takanori; Hosono, Natsuki
2018-04-01
The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited to between 10⁴ and 10⁵. We develop an N-body simulation code on multiple PEZY-SC processors and deploy the Framework for Developing Particle Simulators to deal with a large number of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10⁷, in which 1 particle corresponds to a 10 km-sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10⁵) and high-resolution simulations (N ≥ 10⁶). Owing to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.
A contact algorithm for shell problems via Delaunay-based meshing of the contact domain
NASA Astrophysics Data System (ADS)
Kamran, K.; Rossi, R.; Oñate, E.
2013-07-01
The simulation of contact between shells, with all of its different facets, still represents an open challenge in Computational Mechanics. Despite the effort spent on the development of techniques for the simulation of general contact problems, an all-seasons algorithm applicable to complex shell contact problems is yet to be developed. This work focuses on the solution of contact between thin shells by using a technique derived from the particle finite element method together with a rotation-free shell triangle. The key concept is to define a discretization of the contact domain (CD) by constructing a finite element mesh of four-noded tetrahedra that describes the potential contact volume. The problem is completed by using an assumed-strain approach to define an elastic contact strain over the CD.
Spatial redistribution of nano-particles using electrokinetic micro-focuser
NASA Astrophysics Data System (ADS)
Garcia, Daniel E.; Silva, Aleidy; Ho, Chih-Ming
2007-09-01
Current microfabrication technologies rely on top-down, photolithographic techniques that are ultimately limited by the wavelength of light. While systems for nanofabrication do exist, they frequently suffer from high costs and slow processing times, creating a need for a new manufacturing paradigm. The combination of top-down and bottom-up fabrication approaches in device construction creates a new paradigm in micro- and nano-manufacturing. The pre-requisite for the realization of this manufacturing paradigm is the ability to manipulate molecules in a deterministic and controlled manner. The use of AC electrokinetic forces, such as dielectrophoresis (DEP) and AC electroosmosis, is a promising technology for manipulating nano-sized particles in a parallel fashion. A three-electrode micro-focusing system was designed to exploit these forces in order to control the spatial distribution of nano-particles in different frequency ranges. Thus far, we have demonstrated the ability to concentrate 40 nm and 300 nm diameter particles using a 50 μm diameter focusing system. AC electroosmotic motion of the nano-particles was observed at low frequencies (in the range of 30 Hz - 1 kHz). By using different frequencies and changing the ground location, we have manipulated the nano-particles into circular band structures of different widths, and focused the nanoparticles into circular spots with different diameters. Currently, we are in the process of optimizing the operating parameters (e.g. frequency and AC voltages) by using the technique of particle image velocimetry (PIV). In the future, design of different electrode geometries and numerical simulation of the electric field distribution will be carried out to manipulate the nano-particles into a variety of geometries.
EMPHASIS/Nevada UTDEM user guide. Version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Seidel, David Bruce; Pasik, Michael Francis
The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes. The geometries and the meshes thereof are limited only by the patience of the user in meshing and by the available computing resources for the solution. UTDEM solves Maxwell's equations using finite-element method (FEM) techniques on tetrahedral elements using vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the capability to simulate systems in which the effects of free charge are important and need to be treated in a self-consistent manner. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) being accelerated by the electromagnetic forces upon the particle (Lorentz force). The motion of these particles results in a current, which is a source for the fields in Maxwell's equations.
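Integrating macroparticle motion under the Lorentz force q(E + v × B) is most commonly done in PIC codes with the Boris scheme: a half electric kick, an exact magnetic rotation, and a second half kick. Whether UTDEM PIC uses precisely this update is an assumption of this sketch; the algorithm shown is the standard one.

```python
def boris_push(v, E, B, q_over_m, dt):
    """One Boris step for a (macro)particle velocity v under fields E, B.
    The magnetic step is a pure rotation, so in a static B field with
    E = 0 the particle's speed (and hence kinetic energy) is conserved
    exactly, which is the scheme's main selling point."""
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    h = 0.5 * q_over_m * dt
    v_minus = add(v, scale(E, h))                 # first half electric kick
    t = scale(B, h)                               # rotation vector
    t2 = sum(x * x for x in t)
    v_prime = add(v_minus, cross(v_minus, t))
    s = scale(t, 2.0 / (1.0 + t2))
    v_plus = add(v_minus, cross(v_prime, s))      # exact magnetic rotation
    return add(v_plus, scale(E, h))               # second half electric kick
```

The macroparticle aspect enters only through q_over_m and the current deposition: one macroparticle carries the charge and mass of many physical particles, but since both scale together, the trajectory is unchanged.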
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
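The pathology and its fix can be illustrated with a toy weight-window response: a particle arriving far above the window's upper bound is split into many low-weight copies, and when the arriving weight is extreme the copy count explodes into a 'long history'. Capping the split count mimics the effect of dynamically de-optimizing the window. The cap logic below is illustrative only, not MCNP's actual algorithm, and rouletting below the lower bound is omitted.

```python
import math

def apply_weight_window(weight, w_low, w_high, max_split=None):
    """Split a particle arriving above the weight window (w_low, w_high)
    into n copies of equal weight so each copy lands inside the window.
    Total statistical weight is conserved, so the estimator is unbiased.
    max_split bounds n, trading variance-reduction quality for a bounded
    history length (the 'de-optimisation' idea)."""
    if weight > w_high:
        n = int(math.ceil(weight / w_high))
        if max_split is not None:
            n = min(n, max_split)
        return [weight / n] * n
    return [weight]  # inside (or below) the window: left unchanged here
```

On a parallel cluster the cap matters because one rank stuck in a million-way split chain leaves every other rank idle; bounding n bounds the worst-case history length per rank.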
NASA Astrophysics Data System (ADS)
Darwiche, Mahmoud Khalil M.
The research presented herein is a contribution to the understanding of the numerical modeling of fully nonlinear, transient water waves. The first part of the work involves the development of a time-domain model for the numerical generation of fully nonlinear, transient waves by a piston type wavemaker in a three-dimensional, finite, rectangular tank. A time-domain boundary-integral model is developed for simulating the evolving fluid field. A robust nonsingular, adaptive integration technique for the assembly of the boundary-integral coefficient matrix is developed and tested. A parametric finite-difference technique for calculating the fluid-particle kinematics is also developed and tested. A novel compatibility and continuity condition is implemented to minimize the effect of the singularities that are inherent at the intersections of the various Dirichlet and/or Neumann subsurfaces. Results are presented which demonstrate the accuracy and convergence of the numerical model. The second portion of the work is a study of the interaction of the numerically-generated, fully nonlinear, transient waves with a bottom-mounted, surface-piercing, vertical, circular cylinder. The numerical model developed in the first part of this dissertation is extended to include the presence of the cylinder at the centerline of the basin. The diffraction of the numerically generated waves by the cylinder is simulated, and the particle kinematics of the diffracted flow field are calculated and reported. Again, numerical results showing the accuracy and convergence of the extended model are presented.
Electron tomography and 3D molecular simulations of platinum nanocrystals
NASA Astrophysics Data System (ADS)
Florea, Ileana; Demortière, Arnaud; Petit, Christophe; Bulou, Hervé; Hirlimann, Charles; Ersen, Ovidiu
2012-07-01
This work reports on the morphology of individual platinum nanocrystals with sizes of about 5 nm. By using the electron tomography technique that gives 3D spatial selectivity, access to quantitative information in real space was obtained. The morphology of individual nanoparticles was characterized using HAADF-STEM tomography and was shown to be close to a truncated octahedron. Using molecular dynamics simulations, this geometrical shape was found to be the one minimizing the nanocrystal energy. Starting from the tomographic reconstruction, 3D crystallographic representations of the studied Pt nanocrystals were obtained at the nanometer scale, allowing the quantification of the relative amount of the crystallographic facets present on the particle surface. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr30990d
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. The collision weight factor permits a large time step interval: about a million times longer than the conventional DSMC time step when the particle size is 1 μm, so the computation time is reduced by roughly the same factor. We simulate the graphite particle motion considering thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and the shape of the particle are 1 μm and a sphere, respectively. The particle-particle collision is ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.
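The collision-weight-factor idea can be illustrated by aggregating the momentum transfer of W molecule-particle collisions into a single event: with W equal to the number of molecules considered and the mean relative velocity used for the aggregated kick, the result matches the one-by-one total, while the time step can grow by roughly a factor of W. The 1D treatment and the specular-reflection factor of 2 below are illustrative assumptions, not the paper's collision model.

```python
def aggregated_momentum_transfer(m_mol, v_rel_list, W):
    """Compare conventional per-collision momentum transfer to a heavy
    particle against a single aggregated event weighted by W.
    m_mol: molecular mass; v_rel_list: 1D relative speeds of the W
    molecules hitting the particle; specular reflection assumed, so each
    molecule transfers 2 * m_mol * v_rel."""
    # one-by-one transfer (conventional DSMC, small time step)
    total = sum(2.0 * m_mol * v for v in v_rel_list)
    # aggregated transfer: one event, mean velocity, weight factor W
    v_mean = sum(v_rel_list) / len(v_rel_list)
    aggregated = 2.0 * m_mol * v_mean * W
    return total, aggregated
```

Since the two totals agree, the particle's trajectory can be advanced once per aggregated event instead of once per molecular collision, which is where the roughly million-fold saving for a 1 μm particle comes from.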
NASA Technical Reports Server (NTRS)
Wolf, David A.; Schwarz, Ray P.
1992-01-01
Measurements were taken of the path of a simulated typical tissue segment or 'particle' within a rotating fluid as a function of gravitational strength, fluid rotation rate, particle sedimentation rate, and particle initial position. Parameters were examined within the useful range for tissue culture in the NASA rotating wall culture vessels. The particle moves along a nearly circular path through the fluid (as observed from the rotating reference frame of the fluid) at the same speed as its linear terminal sedimentation speed for the external gravitational field. This gravitationally induced motion causes an increasing deviation of the particle from its original position within the fluid for a decreased rotational rate, for a more rapidly sedimenting particle, and for an increased gravitational strength. Under low gravity conditions (less than 0.1 G), the particle's motion through the fluid and its deviation from its original position become negligible. Under unit gravity conditions, large distortions (greater than 0.25 inch) occur even for particles of slow sedimentation rate (less than 1.0 cm/sec). The particle's motion is nearly independent of the particle's initial position. Comparison with mathematically predicted particle paths shows that a significant error in the mathematically predicted path occurs for large particle deviations. This results from a geometric approximation and numerically accumulating error in the mathematical technique.
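The kinematic picture above, sedimentation at terminal speed along a gravity vector that rotates when viewed from the fluid's co-rotating frame, can be integrated directly: the drift traces a near-circular path of radius v_sed/ω, so the deviation grows with sedimentation speed and shrinks with rotation rate, exactly the trends reported. The forward-Euler sketch below is illustrative, not the authors' mathematical model.

```python
import math

def particle_path(omega, v_sed, t_end, dt=1e-3):
    """Integrate the in-fluid drift of a sedimenting particle as seen
    from the frame rotating with the vessel at angular rate omega.
    Gravity is fixed in the lab, so in this frame the sedimentation
    velocity vector rotates at -omega; the displacement from the
    particle's original fluid element stays bounded by ~2*v_sed/omega."""
    x = y = 0.0
    t = 0.0
    path = [(x, y)]
    while t < t_end:
        # gravity direction as seen in the rotating frame
        gx, gy = math.sin(-omega * t), -math.cos(-omega * t)
        x += v_sed * gx * dt
        y += v_sed * gy * dt
        t += dt
        path.append((x, y))
    return path
```

For omega = 1 rad/s and v_sed = 0.1 (arbitrary units) the maximum excursion over one rotation is about 2 * v_sed / omega = 0.2, illustrating why faster rotation or slower sedimentation (e.g. reduced gravity) keeps the particle near its original fluid position.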
Shock Interaction with Random Spherical Particle Beds
NASA Astrophysics Data System (ADS)
Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth
2016-11-01
In this talk we present results on fully resolved simulations of shock interaction with a randomly distributed bed of particles. Multiple simulations were carried out by varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand (1) the effect of the shockwave and volume fraction on the forces experienced by the particles, (2) the effect of particles on the shock wave, and (3) fluid-mediated particle-particle interactions. The peak drag force on particles at different volume fractions shows a downward trend as the depth of the bed increases. This can be attributed to the dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuation in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles, resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. The average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow field contour plots to support our observations. U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
Pushing down the low-mass halo concentration frontier with the Lomonosov cosmological simulations
NASA Astrophysics Data System (ADS)
Pilipenko, Sergey V.; Sánchez-Conde, Miguel A.; Prada, Francisco; Yepes, Gustavo
2017-12-01
We introduce the Lomonosov suite of high-resolution N-body cosmological simulations covering a full box of size 32 h⁻¹ Mpc with low-mass-resolution particles (2 × 10⁷ h⁻¹ M⊙) and three zoom-in simulations of overdense, underdense and mean-density regions at much higher particle resolution (4 × 10⁴ h⁻¹ M⊙). The main purpose of this simulation suite is to extend the concentration-mass relation of dark matter haloes down to masses below those typically available in large cosmological simulations. The three different density regions available at higher resolution provide a better understanding of the effect of the local environment on halo concentration, known to be potentially important for small simulation boxes and small halo masses. Yet, we find the correction to be small in comparison with the scatter of halo concentrations. We conclude that zoom simulations, despite their limited representativeness of the volume of the Universe, can be effectively used for the measurement of halo concentrations, at least at the halo masses probed by our simulations. In any case, after a precise characterization of this effect, we develop a robust technique to extrapolate the concentration values found in zoom simulations to larger volumes with greater accuracy. Altogether, Lomonosov provides a measure of the concentration-mass relation in the halo mass range 10⁷-10¹⁰ h⁻¹ M⊙ with superb halo statistics. This work represents a first important step to measure halo concentrations at intermediate, yet vastly unexplored, halo mass scales, down to the smallest ones. All Lomonosov data and files are public for the community's use.
Hybrid x-space: a new approach for MPI reconstruction.
Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R
2016-06-07
Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field-free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), representing a combination of the previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time by using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
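The HXS ingredient of estimating the FFP velocity from measured positions rather than from an ideal analytic trajectory can be sketched with central finite differences; the 1D treatment and sampling details below are illustrative assumptions, not the authors' processing chain.

```python
def ffp_velocity(positions, dt):
    """Estimate field-free-point velocity at each time sample from a
    list of measured FFP positions (1D, uniformly sampled every dt):
    central differences in the interior, one-sided at the ends. With a
    measured rather than assumed trajectory, irregular FFP motion (as
    in open-geometry scanners) is captured automatically."""
    n = len(positions)
    v = []
    for i in range(n):
        if i == 0:
            v.append((positions[1] - positions[0]) / dt)
        elif i == n - 1:
            v.append((positions[-1] - positions[-2]) / dt)
        else:
            v.append((positions[i + 1] - positions[i - 1]) / (2.0 * dt))
    return v
```

In the XS formalism this velocity divides (and gates) the received signal to grid it onto the FFP trajectory, which is why an accurate, measurement-driven estimate matters when the trajectory is irregular.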