Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1983-01-01
Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic-level understanding of various physical processes. The success of these simulation methods, however, depends on the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two-body potentials were employed to analyze energy- and structure-related properties of the system. Many-body interactions are required for a proper representation of the total energy of many systems. Many-body interactions for simulations based on discrete particles are discussed.
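The two-body summation the abstract describes can be sketched as follows. This is a minimal illustration using the generic Lennard-Jones pair potential as a stand-in; the SiO2-specific function the paper develops is not given in the abstract, and the parameter values here are arbitrary.

```python
import math

def lj_pair(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones two-body potential at separation r (illustrative
    stand-in for a semiempirical pair potential)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(positions, epsilon=1.0, sigma=1.0):
    """Total energy as a sum of the pair potential over all distinct
    particle pairs -- the structure shared by two-body simulation
    methods (many-body terms would add triplet sums and beyond)."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            e += lj_pair(r, epsilon, sigma)
    return e
```

For two particles at the potential minimum r = 2^(1/6) σ, the pair energy is -ε, which is a convenient sanity check on any implementation of this form.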
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
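A weak scaling analysis of the kind mentioned above compares runtime as core count and total problem size grow together, with the work per core held fixed; ideal efficiency is 1.0. A minimal sketch (the timing values in the usage example are hypothetical, not taken from the benchmarks):

```python
def weak_scaling_efficiency(times_by_cores):
    """Weak-scaling efficiency per core count: runtime on 1 core
    divided by runtime on N cores, with problem size per core held
    fixed. Ideal scaling gives 1.0 at every N."""
    t1 = times_by_cores[1]
    return {n: t1 / t for n, t in sorted(times_by_cores.items())}
```

For example, hypothetical runtimes of 100 s on 1 core and 140 s on 1024 cores give an efficiency of about 0.71 at 1024 cores.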
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Owing to absorption, the light scattering properties of absorbing particles differ from those of non-absorbing ones. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex-shaped particles has been reported. In this paper, the surface integral equation (SIE) method with the multilevel fast multipole algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangular patches to model the whole surface of the particle, hence the computational resource needs increase much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dust particles with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as the discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse-grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or using a particle stress model (MP-PIC). These major model differences lead to a wide range of result accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collisions by TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with a solids stress leads to a large loss in accuracy with only a small increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in an incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed, which rivals that of MP-PIC while maintaining much better accuracy.
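The collision-force treatment that distinguishes DEM and CGPM from the other methods is typically a soft-sphere contact law. A minimal sketch of the standard linear spring-dashpot normal force (the stiffness and damping values are arbitrary placeholders, not parameters from the paper):

```python
def dem_normal_force(overlap, rel_vel_n, k=1000.0, eta=5.0):
    """Linear spring-dashpot normal contact force used in soft-sphere
    DEM. overlap > 0 means the particles interpenetrate; rel_vel_n is
    the normal component of relative velocity (positive when the
    particles approach each other)."""
    if overlap <= 0.0:
        return 0.0  # particles are not in contact: no force
    # elastic repulsion proportional to overlap, plus viscous damping
    return k * overlap + eta * rel_vel_n
```

Hard-sphere methods (TDHS, CGHS) skip this force evaluation entirely and instead apply momentum-conserving velocity jumps at the instant of collision, which is one source of their speed advantage.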
Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio
2012-02-16
The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al.
On the computational aspects of comminution in discrete element method
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Wriggers, Peter
2018-04-01
In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a maximum tensile stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by using an iterative scheme which, depending on geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed, which tends to alleviate the locally unstable motion of particles and increases the computational efficiency.
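A hedged sketch of the two ingredients named above: the maximum-tensile-stress crushing criterion, and a mass-recovery step. The radius rescaling shown here is only one simple way to restore fragment mass; the paper's iterative, void-dependent scheme is not specified in the abstract.

```python
def should_crush(max_tensile_stress, strength):
    """Maximum tensile stress-based breakage criterion: flag a particle
    for crushing once the peak tensile stress induced by its contacts
    reaches the particle strength."""
    return max_tensile_stress >= strength

def conserve_mass(parent_radius, fragment_radii):
    """Scale fragment radii by a common factor so the total fragment
    volume equals the parent volume (volume ~ r^3 for spheres), one
    simple way to restore mass conservation after replacing a crushed
    particle by smaller spheres."""
    parent_vol = parent_radius ** 3
    frag_vol = sum(r ** 3 for r in fragment_radii)
    s = (parent_vol / frag_vol) ** (1.0 / 3.0)
    return [s * r for r in fragment_radii]
```

Note that shrinking fragments to conserve mass opens geometric voids, which is presumably why the paper's scheme iterates on the void geometry rather than rescaling once.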
Discrete particle noise in a nonlinearly saturated plasma
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Lee, W. W.
2006-04-01
Understanding discrete particle noise in an equilibrium plasma has been an important topic since the early days of particle-in-cell (PIC) simulation [1]. In this paper, we investigate particle noise in a nonlinearly saturated system, examining the usefulness of the fluctuation-dissipation theorem (FDT) in a regime where drift instabilities are nonlinearly saturated. We obtain excellent agreement between the simulation results and our theoretical predictions of the noise properties. It is found that discrete particle noise always enhances the particle and thermal transport in the plasma, in agreement with the second law of thermodynamics. [1] C.K. Birdsall and A.B. Langdon, Plasma Physics via Computer Simulation, McGraw-Hill, New York (1985).
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution (DTC) method, is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy. It is also demonstrated that all three addressed methods feature higher accuracy than quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
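The discrete temporal convolution idea, superposing analytical step-function responses for an arbitrary flux history, can be sketched as follows. The `step_response` kernel here is a placeholder for the analytical solution of the step boundary value problem, which the abstract does not give; this sketch only shows the superposition structure.

```python
def dtc_response(flux_history, step_response, dt):
    """Discrete temporal convolution: treat the flux history as a
    staircase of step changes and superpose the analytical response to
    each step. flux_history[k] is the flux held over time step k;
    step_response(t) is the analytical step-function solution at
    elapsed time t since the step was applied."""
    n = len(flux_history)
    out = 0.0
    prev = 0.0
    for k in range(n):
        dF = flux_history[k] - prev  # step change applied at t = k*dt
        prev = flux_history[k]
        out += dF * step_response((n - k) * dt)  # response after elapsed time
    return out
```

A quick consistency check: with a kernel that equilibrates instantly (step_response ≡ 1), the superposition telescopes and the output equals the final flux value, as expected.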
Ji, S.; Hanes, D.M.; Shen, H.H.
2009-01-01
In this study, we report a direct comparison between a physical test and a computer simulation of rapidly sheared granular materials. An annular shear cell experiment was conducted. All parameters were kept the same between the physical and the computational systems to the extent possible. Artificially softened particles were used in the simulation to reduce the computational time to a manageable level. A sensitivity study on the particle stiffness ensured that this artificial modification was acceptable. In the experiment, a range of normal stress was applied to a given amount of particles sheared in an annular trough at a range of controlled shear speeds. Two types of particles, glass and Delrin, were used in the experiment. Qualitatively, the required torque to shear the materials at different rotational speeds compared well between simulation and experiment for both the glass and the Delrin particles. However, the quantitative discrepancies between the measured and simulated shear stresses were nearly a factor of two. Boundary conditions, particle size distribution, and particle damping and friction, including a sliding and rolling contact force model, were examined to determine their effects on the computational results. It was found that, of the above, the rolling friction between particles had the most significant effect on the macro stress level. This study shows that discrete element simulation is a viable method for engineering design for granular material systems. Particle-level information is needed to properly conduct these simulations. However, not all particle-level information is equally important in the studied regime. Rolling friction, which is not commonly considered in many discrete element models, appears to play an important role. © 2009 Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.
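The basic particle swarm optimization algorithm discussed above can be sketched as follows. This is a minimal continuous-variable version with standard inertia and acceleration coefficients; it omits the paper's algorithmic improvements and constraint handling, and all parameter values are conventional defaults rather than the paper's settings.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimize f over box bounds
    [(lo, hi), ...]. Each particle tracks its personal best; the swarm
    shares a global best that pulls all particles toward it."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive (personal best) + social (global best)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = x[i][:], fi
    return gbest, gbest_f
```

Because the update uses only function values, never gradients, the same loop applies unchanged to discontinuous objectives, which is the setting where the abstract sees the method's true potential (discrete variables additionally require rounding or encoding the positions).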
A minimally-resolved immersed boundary model for reaction-diffusion problems
NASA Astrophysics Data System (ADS)
Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar
2013-12-01
We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2016-03-01
In industry, particle-laden fluids, such as particle-functionalized inks, are constructed by adding fine-scale particles to a liquid solution, in order to achieve desired overall properties in both liquid and (cured) solid states. However, oftentimes undesirable particulate agglomerations arise due to some form of mutual-attraction stemming from near-field forces, stray electrostatic charges, process ionization and mechanical adhesion. For proper operation of industrial processes involving particle-laden fluids, it is important to carefully breakup and disperse these agglomerations. One approach is to target high-frequency acoustical pressure-pulses to breakup such agglomerations. The objective of this paper is to develop a computational model and corresponding solution algorithm to enable rapid simulation of the effect of acoustical pulses on an agglomeration composed of a collection of discrete particles. Because of the complex agglomeration microstructure, containing gaps and interfaces, this type of system is extremely difficult to mesh and simulate using continuum-based methods, such as the finite difference time domain or the finite element method. Accordingly, a computationally-amenable discrete element/discrete ray model is developed which captures the primary physical events in this process, such as the reflection and absorption of acoustical energy, and the induced forces on the particulate microstructure. The approach utilizes a staggered, iterative solution scheme to calculate the power transfer from the acoustical pulse to the particles and the subsequent changes (breakup) of the pulse due to the particles. Three-dimensional examples are provided to illustrate the approach.
Coupled multipolar interactions in small-particle metallic clusters.
Pustovit, Vitaly N; Sotelo, Juan A; Niklasson, Gunnar A
2002-03-01
We propose a new formalism for computing the optical properties of small clusters of particles. It is a generalization of the coupled dipole-dipole particle-interaction model and allows one in principle to take into account all multipolar interactions in the long-wavelength limit. The method is illustrated by computations of the optical properties of N = 6 particle clusters for different multipolar approximations. We examine the effect of separation between particles and compare the optical spectra with the discrete-dipole approximation and the generalized Mie theory.
BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs
NASA Astrophysics Data System (ADS)
Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes
2017-06-01
Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required for an industrial-scale simulation consisting of tens of millions of particles can be months on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require an accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open-source GPU-based DEM code, BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
Ramos-Infante, Samuel Jesús; Ten-Esteve, Amadeo; Alberich-Bayarri, Angel; Pérez, María Angeles
2018-01-01
This paper proposes a discrete particle model based on the random-walk theory for simulating cement infiltration within open-cell structures to prevent osteoporotic proximal femur fractures. Model parameters consider the cement viscosity (high and low) and the desired direction of injection (vertical and diagonal). In vitro and in silico characterizations of augmented open-cell structures validated the computational model and quantified the improved mechanical properties (Young's modulus) of the augmented specimens. The cement injection pattern was successfully predicted in all the simulated cases. All the augmented specimens exhibited enhanced mechanical properties computationally and experimentally (maximum improvements of 237.95 ± 12.91% and 246.85 ± 35.57%, respectively). The open-cell structures with high porosity fraction showed a considerable increase in mechanical properties. Cement augmentation in low porosity fraction specimens resulted in a lesser increase in mechanical properties. The results suggest that the proposed discrete particle model is adequate for use as a femoroplasty planning framework.
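A hedged sketch of a biased lattice random walk of the kind underlying the infiltration model above: with some probability the walker steps in the preferred injection direction, otherwise it steps randomly. The direction vector and bias probability here are illustrative placeholders; the paper's viscosity- and direction-dependent parameters are not specified in the abstract.

```python
import random

def random_walk_infiltration(steps, bias=(0.0, -1.0), p_bias=0.6, seed=0):
    """Biased lattice random walk: with probability p_bias the walker
    takes the preferred injection step `bias`; otherwise it takes a
    uniformly random lattice step. A toy analogue of directed cement
    infiltration through a porous open-cell structure."""
    rng = random.Random(seed)
    x, y = 0, 0
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        if rng.random() < p_bias:
            dx, dy = bias
        else:
            dx, dy = rng.choice(dirs)
        x, y = x + int(dx), y + int(dy)
    return x, y
```

In this picture, a higher p_bias plays the role of a lower-viscosity cement that follows the injection direction more faithfully, while a lower p_bias spreads the walk more isotropically.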
Numerical Experiments on Advective Transport in Large Three-Dimensional Discrete Fracture Networks
NASA Astrophysics Data System (ADS)
Makedonska, N.; Painter, S. L.; Karra, S.; Gable, C. W.
2013-12-01
Modeling of flow and solute transport in discrete fracture networks is an important approach for understanding the migration of contaminants in impermeable hard rocks such as granite, where fractures provide dominant flow and transport pathways. The discrete fracture network (DFN) model attempts to mimic discrete pathways for fluid flow through a fractured low-permeability rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. An integrated DFN meshing [1], flow, and particle tracking [2] simulation capability that enables accurate flow and particle tracking simulation on large DFNs has recently been developed. The new capability has been used in numerical experiments on advective transport in large DFNs with tens of thousands of fractures and millions of computational cells. The modeling procedure starts from the fracture network generation using a stochastic model derived from site data. A high-quality computational mesh is then generated [1]. Flow is then solved using the highly parallel PFLOTRAN [3] code. PFLOTRAN uses the finite volume approach, which is locally mass conserving and thus eliminates mass balance problems during particle tracking. The flow solver provides the scalar fluxes on each control volume face. From the obtained fluxes the Darcy velocity is reconstructed for each node in the network [4]. Velocities can then be continuously interpolated to any point in the domain of interest, thus enabling random walk particle tracking. In order to describe the flow field on fracture intersections, the control volume cells on intersections are split into four planar polygons, where each polygon corresponds to a piece of a fracture near the intersection line.
Thus, computational nodes lying on fracture intersections have four associated velocities, one on each side of the intersection in each fracture plane [2]. This information is used to route particles arriving at the fracture intersection to the appropriate downstream fracture segment. Verified for small DFNs, the new simulation capability allows accurate particle tracking on more realistic representations of fractured rock sites. In the current work we focus on travel time statistics and spatial dispersion and show numerical results in DFNs of different sizes, fracture densities, and transmissivity distributions. [1] Hyman J.D., Gable C.W., Painter S.L., Automated meshing of stochastically generated discrete fracture networks, Abstract H33G-1403, 2011 AGU, San Francisco, CA, 5-9 Dec. [2] N. Makedonska, S. L. Painter, T.-L. Hsieh, Q.M. Bui, and C. W. Gable., Development and verification of a new particle tracking capability for modeling radionuclide transport in discrete fracture networks, Abstract, 2013 IHLRWM, Albuquerque, NM, Apr. 28 - May 3. [3] Lichtner, P.C., Hammond, G.E., Bisht, G., Karra, S., Mills, R.T., and Kumar, J. (2013) PFLOTRAN User's Manual: A Massively Parallel Reactive Flow Code. [4] Painter S.L., Gable C.W., Kelkar S., Pathline tracing on fully unstructured control-volume grids, Computational Geosciences, 16 (4), 2012, 1125-1134.
A discrete element method-based approach to predict the breakage of coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Sun, Xin; Xu, Wei
2017-08-05
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.
NASA Astrophysics Data System (ADS)
Derakhshani, S. M.; Schott, D. L.; Lodewijks, G.
2013-06-01
Dust emissions can have significant effects on human health, the environment and industrial equipment. Understanding the dust generation process helps in selecting a suitable dust-prevention approach and in evaluating the environmental impact of dust emission. To describe these processes, numerical methods such as Computational Fluid Dynamics (CFD) are widely used; however, particle-based methods like the Discrete Element Method (DEM) now allow researchers to model the interaction between particles and fluid flow. In this study, air flow over a stockpile, dust emission, erosion and surface deformation of granular material in the form of a stockpile are studied by using DEM and CFD as a coupled method. Two- and three-dimensional simulations are respectively developed for the CFD and DEM methods to minimize CPU time. The standard κ-ɛ turbulence model is used in a fully developed turbulent flow. The continuous gas phase and the discrete particle phase are linked to each other through gas-particle void fractions and momentum transfer. In addition to stockpile deformation, dust dispersion is studied, and finally the accuracy of the stockpile deformation results obtained by the CFD-DEM modelling is validated by agreement with existing experimental data.
Particle-in-cell simulations of Hall plasma thrusters
NASA Astrophysics Data System (ADS)
Miranda, Rodrigo; Ferreira, Jose Leonardo; Martins, Alexandre
2016-07-01
Hall plasma thrusters can be modelled using particle-in-cell (PIC) simulations. In these simulations, the plasma is described by a set of equations which represent a coupled system of charged particles and electromagnetic fields. The fields are computed using a spatial grid (i.e., a discretization in space), whereas the particles can move continuously in space. Briefly, the particle and field dynamics are computed as follows. First, forces due to electric and magnetic fields are employed to calculate the velocities and positions of the particles. Next, the velocities and positions of the particles are used to compute the charge and current densities at discrete positions in space. Finally, these densities are used to solve the electromagnetic field equations on the grid, which are interpolated at the positions of the particles to obtain the acting forces, restarting the cycle. We will present numerical simulations using software for PIC simulations to study the turbulence, waves and instabilities that arise in Hall plasma thrusters. We have successfully reproduced a numerical simulation of a SPT-100 Hall thruster using a two-dimensional (2D) model. In addition, we are developing a 2D model of a cylindrical Hall thruster. The results of these simulations will contribute to improving the performance of plasma thrusters to be used in CubeSat satellites currently in development at the Plasma Laboratory at University of Brasília.
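The grid-particle interpolation steps of the PIC cycle described above can be sketched in isolation. This is a minimal 1D cloud-in-cell illustration of the deposit (particles to grid) and gather (grid to particles) steps only, not the SPT-100 model; the field solve between them is omitted, and the grid size and domain length are arbitrary.

```python
def deposit(positions, charges, n_cells, L):
    """Cloud-in-cell charge deposition on a periodic 1D grid of length
    L: each particle's charge is shared linearly between the two
    nearest grid points, producing a charge density rho."""
    rho = [0.0] * n_cells
    dx = L / n_cells
    for xp, q in zip(positions, charges):
        s = (xp % L) / dx       # position in units of cells
        j = int(s)
        frac = s - j            # fractional distance to the next node
        rho[j % n_cells] += q * (1.0 - frac) / dx
        rho[(j + 1) % n_cells] += q * frac / dx
    return rho

def gather(field, xp, L):
    """Interpolate a grid field (e.g., the electric field) back to a
    particle position with the same linear weights, so force gathering
    is consistent with charge deposition."""
    n = len(field)
    dx = L / n
    s = (xp % L) / dx
    j = int(s)
    frac = s - j
    return field[j % n] * (1.0 - frac) + field[(j + 1) % n] * frac
```

Using the same linear weights for deposit and gather conserves total charge on the grid and avoids spurious self-forces, which is why the two steps are normally implemented as mirror images of each other.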
NASA Astrophysics Data System (ADS)
Son, Kwon Joong
2018-02-01
By hindering particle agglomeration and re-dispersion processes, gravitational sedimentation of suspended particles in magnetorheological (MR) fluids causes inferior performance and controllability of MR fluids in response to a user-specified magnetic field. Thus, suspension stability is one of the principal factors to be considered in synthesizing MR fluids. However, only a few computational studies have been reported so far on the sedimentation characteristics of suspended particles under gravity. In this paper, the settling dynamics of paramagnetic particles suspended in MR fluids was investigated via discrete element method (DEM) simulations. This work focuses particularly on developing accurate fluid-particle and particle-particle interaction models which can account for the influence of stabilizing surfactants on MR fluid sedimentation. The effect of the stabilizing surfactants on interparticle interactions was incorporated into the derivation of a reliable contact-impact model for the DEM computation. Also, the influence of the stabilizing additives on fluid-particle interactions was considered by incorporating Stokes drag with shape and wall correction factors into the DEM formulation. The results of simulations performed for model validation purposes showed good agreement with published sedimentation measurement data in terms of the initial sedimentation velocity and the final sedimentation ratio.
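The Stokes-drag settling velocity underlying the sedimentation behaviour can be sketched as follows, with a single generic correction factor standing in for the shape and wall corrections mentioned above (the paper's specific correction factors are not given in the abstract, and all numbers in the usage note are illustrative).

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81, wall_factor=1.0):
    """Terminal settling velocity of a small sphere under Stokes drag:
    v = (rho_p - rho_f) * g * d^2 / (18 * mu * wall_factor).
    d: particle diameter [m]; rho_p, rho_f: particle and fluid
    densities [kg/m^3]; mu: dynamic viscosity [Pa s]. wall_factor > 1
    is a hypothetical hindrance correction (1.0 for unbounded fluid)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu * wall_factor)
```

For instance, a hypothetical 1 µm iron particle (7800 kg/m³) in a 0.1 Pa s carrier fluid (1000 kg/m³) settles at tens of nanometres per second, and doubling the hindrance factor halves that velocity; stabilizing additives that raise the effective drag act in the same direction.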
Schroedinger's Wave Structure of Matter (WSM)
NASA Astrophysics Data System (ADS)
Wolff, Milo; Haselhurst, Geoff
2009-10-01
The puzzle of the electron is due to the belief that it is a discrete particle. Einstein deduced that this structure was impossible, since Nature does not allow the discrete particle. Clifford (1876) rejected discrete matter and suggested structures in `space'. Schroedinger (1937) also eliminated discrete particles, writing: What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). He rejected wave-particle duality. Schroedinger's concept was developed by Milo Wolff and Geoff Haselhurst (SpaceAndMotion.com) using the Scalar Wave Equation to find spherical wave solutions in a 3D quantum space. This WSM, the origin of all the Natural Laws, contains all the electron's properties including the Schroedinger Equation. The origin of Newton's Law F=ma is no longer a puzzle; it originates from Mach's principle of inertia (1883), which depends on the space medium and the WSM. Carver Mead (1999) at Caltech used the WSM to design Intel micro-chips, correcting errors of Maxwell's magnetic equations. Applications of the WSM also describe matter at molecular dimensions: alloys, catalysts, biology and medicine, molecular computers and memories. See ``Schroedinger's Universe'' - at Amazon.com
Schroedinger's Wave Structure of Matter (WSM)
NASA Astrophysics Data System (ADS)
Wolff, Milo
2009-05-01
The puzzle of the electron is due to the belief that it is a discrete particle. Einstein deduced that this structure was impossible, since Nature does not allow the discrete particle. Clifford (1876) rejected discrete matter and suggested structures in `space'. Schroedinger (1937) also eliminated discrete particles, writing: What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). He rejected wave-particle duality. Schroedinger's concept was developed by Milo Wolff and Geoff Haselhurst (http://www.SpaceAndMotion.com) using the Scalar Wave Equation to find spherical wave solutions in a 3D quantum space. This WSM is the origin of all the Natural Laws; thus it contains all the electron's properties including the Schroedinger Equation. The origin of Newton's Law F=ma is no longer a puzzle; it is shown to originate from Mach's principle of inertia (1883), which depends on the space medium. Carver Mead (1999) applied the WSM to design Intel micro-chips, correcting errors of Maxwell's magnetic equations. Applications of the WSM describe matter at molecular dimensions: alloys, catalysts, the mechanisms of biology and medicine, molecular computers and memories. See http://www.amazon.com/Schro at Amazon.com.
NASA Astrophysics Data System (ADS)
Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine
2017-06-01
The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable the solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment for simulating industrial scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial scale problems on a single portable computer within a day for 30 seconds of process time.
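Discharge rates from DEM silo simulations are commonly checked against the empirical Beverloo correlation; a sketch follows. The constants C and k are fitted values (k near 1.4 is typical for spheres), assumed here for illustration and not taken from this study.

```python
import math

def beverloo_discharge(c, rho_b, d_outlet, d_particle, k=1.4):
    """Beverloo correlation for the mass discharge rate (kg/s) from a
    flat-bottomed silo: W = C * rho_b * sqrt(g) * (D - k d)^(5/2).
    C and k are empirical fitting constants (illustrative values)."""
    effective = d_outlet - k * d_particle  # outlet reduced by the "empty annulus"
    return c * rho_b * math.sqrt(9.81) * effective ** 2.5
```

A DEM run whose steady-state discharge deviates strongly from this scaling usually signals a calibration problem in the contact parameters.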
2017-01-01
We report a computational fluid dynamics–discrete element method (CFD-DEM) simulation study on the interplay between mass transfer and a heterogeneous catalyzed chemical reaction in cocurrent gas-particle flows as encountered in risers. Slip velocity, axial gas dispersion, gas bypassing, and particle mixing phenomena have been evaluated under riser flow conditions to study the complex system behavior in detail. The most important factors are found to be directly related to particle cluster formation. Low air-to-solids flux ratios lead to more heterogeneous systems, where the cluster formation is more pronounced and mass transfer more influenced. Falling clusters can be partially circumvented by the gas phase, which therefore does not fully interact with the cluster particles, leading to poor gas–solid contact efficiencies. Cluster gas–solid contact efficiencies are quantified at several gas superficial velocities, reaction rates, and dilution factors in order to gain more insight regarding the influence of clustering phenomena on the performance of riser reactors. PMID:28553011
Carlos Varas, Álvaro E; Peters, E A J F; Kuipers, J A M
2017-05-17
We report a computational fluid dynamics-discrete element method (CFD-DEM) simulation study on the interplay between mass transfer and a heterogeneous catalyzed chemical reaction in cocurrent gas-particle flows as encountered in risers. Slip velocity, axial gas dispersion, gas bypassing, and particle mixing phenomena have been evaluated under riser flow conditions to study the complex system behavior in detail. The most important factors are found to be directly related to particle cluster formation. Low air-to-solids flux ratios lead to more heterogeneous systems, where the cluster formation is more pronounced and mass transfer more influenced. Falling clusters can be partially circumvented by the gas phase, which therefore does not fully interact with the cluster particles, leading to poor gas-solid contact efficiencies. Cluster gas-solid contact efficiencies are quantified at several gas superficial velocities, reaction rates, and dilution factors in order to gain more insight regarding the influence of clustering phenomena on the performance of riser reactors.
Solutions of burnt-bridge models for molecular motor transport.
Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B; Artyomov, Maxim N
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called "bridges"), is investigated theoretically by analyzing discrete-state stochastic "burnt-bridge" models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed ("burned") with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
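A Monte Carlo simulation of the kind used to support the analytical predictions can be sketched as follows. This discrete-time variant assumes bridges on every bridge_spacing-th link that burn only when crossed in the forward direction; the papers also treat other burning dynamics and random bridge distributions.

```python
import random

def burnt_bridge_walk(n_steps, bridge_spacing, p_burn, seed=1):
    """Discrete-time burnt-bridge random walk on an infinite lattice.
    A burnt bridge becomes a reflecting barrier: the particle can no
    longer step back across it, producing directed motion."""
    rng = random.Random(seed)
    x = 0
    wall = None  # position of the most recent burnt bridge
    for _ in range(n_steps):
        step = rng.choice((-1, 1))
        new_x = x + step
        if wall is not None and new_x < wall:
            continue  # backward motion blocked by the burnt bridge
        # a forward crossing onto a bridge site may burn the link behind
        if step == 1 and new_x % bridge_spacing == 0:
            if rng.random() < p_burn:
                wall = new_x
        x = new_x
    return x
```

For p_burn = 1 the walk ratchets forward at every bridge, which is the exactly solvable case mapped to a single-particle hopping model in the abstract.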
Exact Solutions of Burnt-Bridge Models for Molecular Motor Transport
NASA Astrophysics Data System (ADS)
Morozov, Alexander; Pronina, Ekaterina; Kolomeisky, Anatoly; Artyomov, Maxim
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called ``bridges''), is investigated theoretically by analyzing discrete-state stochastic ``burnt-bridge'' models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (``burned'') with a probability p, creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a new theoretical method is developed, and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics, periodic and random distributions of bridges, and different burning dynamics are analyzed and compared. Theoretical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
Solutions of burnt-bridge models for molecular motor transport
NASA Astrophysics Data System (ADS)
Morozov, Alexander Yu.; Pronina, Ekaterina; Kolomeisky, Anatoly B.; Artyomov, Maxim N.
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called “bridges”), is investigated theoretically by analyzing discrete-state stochastic “burnt-bridge” models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (“burned”) with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil
NASA Technical Reports Server (NTRS)
Lane, John E.; Metzger, Philip T.; Wilkinson, R. Allen
2010-01-01
As part of ongoing efforts to develop models of lunar soil mechanics, this report reviews two topics that are important to discrete element method (DEM) modeling the behavior of soils (such as lunar soils): (1) methods of modeling particle shapes and (2) analytical representations of particle size distribution. The choice of particle shape complexity is driven primarily by opposing tradeoffs with total number of particles, computer memory, and total simulation computer processing time. The choice is also dependent on available DEM software capabilities. For example, PFC2D/PFC3D and EDEM support clustering of spheres; MIMES incorporates superquadric particle shapes; and BLOKS3D provides polyhedra shapes. Most commercial and custom DEM software supports some type of complex particle shape beyond the standard sphere. Convex polyhedra, clusters of spheres and single parametric particle shapes such as the ellipsoid, polyellipsoid, and superquadric, are all motivated by the desire to introduce asymmetry into the particle shape, as well as edges and corners, in order to better simulate actual granular particle shapes and behavior. An empirical particle size distribution (PSD) formula is shown to fit desert sand data from Bagnold. Particle size data of JSC-1a obtained from a fine particle analyzer at the NASA Kennedy Space Center is also fitted to a similar empirical PSD function.
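An analytical PSD representation of the kind discussed above can be illustrated with a log-normal cumulative distribution, a common choice for granular materials. This is an assumed illustrative form, not the empirical formula of the report.

```python
import math

def lognormal_cdf(d, d50, sigma_g):
    """Cumulative mass fraction finer than diameter d for a log-normal
    particle size distribution, parameterized by the median diameter
    d50 and the geometric standard deviation sigma_g (> 1).
    An illustrative PSD model, not the report's fitted formula."""
    z = math.log(d / d50) / (math.sqrt(2.0) * math.log(sigma_g))
    return 0.5 * (1.0 + math.erf(z))
```

Fitting d50 and sigma_g to sieve or particle-analyzer data then gives a compact PSD input for generating DEM particle ensembles.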
Exploring a Multiphysics Resolution Approach for Additive Manufacturing
NASA Astrophysics Data System (ADS)
Estupinan Donoso, Alvaro Antonio; Peters, Bernhard
2018-06-01
Metal additive manufacturing (AM) is a fast-evolving technology aiming to efficiently produce complex parts while saving resources. Worldwide, active research is being performed to solve the existing challenges of this growing technique. Constant computational advances have enabled multiscale and multiphysics numerical tools that complement the traditional physical experimentation. In this contribution, an advanced discrete-continuous concept is proposed to address the physical phenomena involved during laser powder bed fusion. The concept treats the powder as discrete via the extended discrete element method, which predicts the thermodynamic state and phase change for each particle. The surrounding fluid is solved with multiphase computational fluid dynamics techniques to determine momentum, heat, gas and liquid transfer. Thus, results track the positions and thermochemical history of individual particles in conjunction with the temperature and composition of the prevailing fluid phases. It is believed that this methodology can complement experimental research through analysis of the comprehensive results that can be extracted from it, enabling AM process optimization for part qualification.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xingyu; Samulyak, Roman
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
Distribution of breakage events in random packings of rodlike particles.
Grof, Zdeněk; Štěpánek, František
2013-07-01
Uniaxial compaction and breakage of rodlike particle packing has been studied using a discrete element method simulation. A scaling relationship between the applied stress, the number of breakage events, and the number-mean particle length has been derived and compared with computational experiments. Based on results for a wide range of intrinsic particle strengths and initial particle lengths, it seems that a single universal relation can be used to describe the incidence of breakage events during compaction of rodlike particle layers.
The pdf approach to turbulent polydispersed two-phase flows
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Peirano, Eric
2001-10-01
The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Gathering the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat accurately the physically relevant phenomena, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists in simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.
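The one-particle Lagrangian pdf description discussed above corresponds in practice to integrating stochastic differential equations for particle properties. A minimal sketch, assuming an Ornstein-Uhlenbeck model for the fluid velocity seen by a particle (the constants t_l and sigma are illustrative, not values from the paper):

```python
import numpy as np

def langevin_velocity(n_particles, n_steps, dt, t_l, sigma, seed=0):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck sketch of
    a Langevin model: du = -(u / t_l) dt + sigma * sqrt(2 / t_l) dW.
    t_l is an integral time scale and sigma the stationary standard
    deviation (assumed parameters, for illustration only)."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_particles)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_particles)  # Wiener increments
        u += -u / t_l * dt + sigma * np.sqrt(2.0 / t_l) * dw
    return u
```

Averaging over the particle ensemble then recovers the one-point statistics that a moment closure would model directly; that is the sense in which the pdf approach "contains" the usual turbulence models.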
Gardiner, Bruce S.; Wong, Kelvin K. L.; Joldes, Grand R.; Rich, Addison J.; Tan, Chin Wee; Burgess, Antony W.; Smith, David W.
2015-01-01
This paper presents a framework for modelling biological tissues based on discrete particles. Cell components (e.g. cell membranes, cell cytoskeleton, cell nucleus) and extracellular matrix (e.g. collagen) are represented using collections of particles. Simple particle to particle interaction laws are used to simulate and control complex physical interaction types (e.g. cell-cell adhesion via cadherins, integrin basement membrane attachment, cytoskeletal mechanical properties). Particles may be given the capacity to change their properties and behaviours in response to changes in the cellular microenvironment (e.g., in response to cell-cell signalling or mechanical loadings). Each particle is in effect an ‘agent’, meaning that the agent can sense local environmental information and respond according to pre-determined or stochastic events. The behaviour of the proposed framework is exemplified through several biological problems of ongoing interest. These examples illustrate how the modelling framework allows enormous flexibility for representing the mechanical behaviour of different tissues, and we argue this is a more intuitive approach than perhaps offered by traditional continuum methods. Because of this flexibility, we believe the discrete modelling framework provides an avenue for biologists and bioengineers to explore the behaviour of tissue systems in a computational laboratory. PMID:26452000
Gardiner, Bruce S; Wong, Kelvin K L; Joldes, Grand R; Rich, Addison J; Tan, Chin Wee; Burgess, Antony W; Smith, David W
2015-10-01
This paper presents a framework for modelling biological tissues based on discrete particles. Cell components (e.g. cell membranes, cell cytoskeleton, cell nucleus) and extracellular matrix (e.g. collagen) are represented using collections of particles. Simple particle to particle interaction laws are used to simulate and control complex physical interaction types (e.g. cell-cell adhesion via cadherins, integrin basement membrane attachment, cytoskeletal mechanical properties). Particles may be given the capacity to change their properties and behaviours in response to changes in the cellular microenvironment (e.g., in response to cell-cell signalling or mechanical loadings). Each particle is in effect an 'agent', meaning that the agent can sense local environmental information and respond according to pre-determined or stochastic events. The behaviour of the proposed framework is exemplified through several biological problems of ongoing interest. These examples illustrate how the modelling framework allows enormous flexibility for representing the mechanical behaviour of different tissues, and we argue this is a more intuitive approach than perhaps offered by traditional continuum methods. Because of this flexibility, we believe the discrete modelling framework provides an avenue for biologists and bioengineers to explore the behaviour of tissue systems in a computational laboratory.
Multiscale Simulations of Reactive Transport
NASA Astrophysics Data System (ADS)
Tartakovsky, D. M.; Bakarji, J.
2014-12-01
Discrete, particle-based simulations offer distinct advantages when modeling solute transport and chemical reactions. For example, Brownian motion is often used to model diffusion in complex pore networks, and Gillespie-type algorithms allow one to handle multicomponent chemical reactions with uncertain reaction pathways. Yet such models can be computationally more intensive than their continuum-scale counterparts, e.g., advection-dispersion-reaction equations. Combining the discrete and continuum models has a potential to resolve the quantity of interest with a required degree of physicochemical granularity at acceptable computational cost. We present computational examples of such "hybrid models" and discuss the challenges associated with coupling these two levels of description.
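A Gillespie-type algorithm of the kind mentioned above can be sketched for the simplest case, first-order decay A -> B; multicomponent networks generalize this by summing propensities over reaction channels.

```python
import math
import random

def gillespie_decay(n_a, rate, t_max, seed=2):
    """Minimal Gillespie stochastic simulation of first-order decay
    A -> B. Waiting times between reaction events are exponential
    with parameter equal to the total propensity (rate * population)."""
    rng = random.Random(seed)
    t, a = 0.0, n_a
    while a > 0:
        propensity = rate * a
        tau = -math.log(1.0 - rng.random()) / propensity  # next-event time
        if t + tau > t_max:
            break
        t += tau
        a -= 1  # one A molecule reacts
    return a
```

In a hybrid scheme, such a discrete solver handles the reactive subdomain while an advection-dispersion-reaction continuum model covers the rest, with coupling at the interface.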
An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.
1997-06-01
A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to an Eulerian grid and then mapping the computed stress tensors back to particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of types, sizes, and densities of particles, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.
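The particle-to-grid mapping at the heart of the MP-PIC approach can be sketched in 1D with linear (cloud-in-cell) weighting; this is an illustrative sketch, not the interpolation used in the paper.

```python
import numpy as np

def map_to_grid(x, weights, grid_n, length):
    """Cloud-in-cell mapping of particle properties (e.g. volume) onto
    a periodic 1D Eulerian grid: each particle splits its weight
    linearly between the two nearest cell centers."""
    dx = length / grid_n
    s = x / dx - 0.5            # position in cell-center coordinates
    i0 = np.floor(s).astype(int)
    frac = s - i0               # fractional distance to the next center
    field = np.zeros(grid_n)
    np.add.at(field, i0 % grid_n, weights * (1.0 - frac))
    np.add.at(field, (i0 + 1) % grid_n, weights * frac)
    return field
```

Because the two weights sum to one for every particle, the mapping conserves the total mapped quantity exactly; the reverse grid-to-particle interpolation uses the same weights, which is what lets stress tensors computed on the grid be mapped back to particle positions consistently.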
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.
The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; ...
2015-09-16
The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
An integral equation formulation for rigid bodies in Stokes flow in three dimensions
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan
2017-03-01
We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O(n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2013-07-25
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
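SPH discretizations such as the one above rest on kernel-weighted sums over particles. A minimal 1D summation-density sketch with a Gaussian kernel follows (production codes, including the projection scheme described here, typically use compact-support spline kernels and neighbor lists rather than all-pairs sums):

```python
import numpy as np

def sph_density(x, mass, h):
    """SPH summation density in 1D: rho_i = sum_j m_j W(x_i - x_j, h),
    using a normalized Gaussian kernel of smoothing length h.
    Illustrative sketch only; an O(N^2) all-pairs evaluation."""
    r = x[:, None] - x[None, :]
    w = np.exp(-((r / h) ** 2)) / (h * np.sqrt(np.pi))  # 1D Gaussian kernel
    return w.sum(axis=1) * mass
```

The density deviation that drives the pressure Poisson equation is the difference between such a kernel estimate (or its velocity-based prediction) and the rest density.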
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2014-03-01
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01 percent can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
Microscopic analysis of Hopper flow with ellipsoidal particles
NASA Astrophysics Data System (ADS)
Liu, Sida; Zhou, Zongyan; Zou, Ruiping; Pinson, David; Yu, Aibing
2013-06-01
Hoppers are widely used in process industries. With such widespread application, difficulties in achieving desired operational behaviors have led to extensive experimental and mathematical studies in the past decades. Particularly, the discrete element method has become one of the most important simulation tools for design and analysis. So far, most studies are on spherical particles for computational convenience. In this work, ellipsoidal particles are used as they can represent a large variation of particle shapes. Hopper flow with ellipsoidal particles is presented highlighting the effect of particle shape on the microscopic properties.
Coupled discrete element and finite volume solution of two classical soil mechanics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Feng; Drumm, Eric; Guiochon, Georges A
One dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM, and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
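The Ergun relationship cited above has a standard closed form for the pressure gradient through a packed bed. A minimal sketch follows; the variable names and the example parameters are ours:

```python
def ergun_pressure_gradient(u, eps, d_p, rho_f, mu_f):
    # Ergun (1952) packed-bed pressure gradient [Pa/m]:
    # a viscous (Blake-Kozeny) term plus an inertial (Burke-Plummer) term.
    # u: superficial fluid velocity [m/s], eps: void fraction,
    # d_p: particle diameter [m].
    viscous = 150.0 * mu_f * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho_f * (1.0 - eps) * u * abs(u) / (eps ** 3 * d_p)
    return viscous + inertial

# Water (rho = 1000 kg/m^3, mu = 1e-3 Pa s) through a bed of 1 mm
# spheres at 38% voidage.
dp_dx = ergun_pressure_gradient(u=0.01, eps=0.38, d_p=1e-3,
                                rho_f=1000.0, mu_f=1e-3)
```

In a coupled FVM-DEM code this gradient is typically recast as a drag force distributed over the particles in each fluid cell.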
Coherent Backscattering in the Cross-Polarized Channel
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Mackowski, Daniel W.
2011-01-01
We analyze the asymptotic behavior of the cross-polarized enhancement factor in the framework of the standard low-packing-density theory of coherent backscattering by discrete random media composed of spherically symmetric particles. It is shown that if the particles are strongly absorbing or if the smallest optical dimension of the particulate medium (i.e., the optical thickness of a plane-parallel slab or the optical diameter of a spherically symmetric volume) approaches zero, then the cross-polarized enhancement factor tends to its upper-limit value 2. This theoretical prediction is illustrated using direct computer solutions of the Maxwell equations for spherical volumes of discrete random medium.
Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2011-01-01
The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.
Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.
Reich, W; Scheuermann, G
2012-12-01
Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.
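The core idea of evolving particle distributions through a transition matrix can be illustrated with a toy example; the matrix and the three-cell layout below are invented for illustration, not taken from the paper:

```python
import numpy as np

def evolve_distribution(P, p0, steps):
    # Push a particle distribution p0 through a row-stochastic
    # transition matrix P for a number of discrete time steps.
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p

# Toy 3-cell flow: mass drifts from cell 0 toward an absorbing cell 2,
# so the long-time distribution concentrates there.
P = np.array([[0.1, 0.9, 0.0],
              [0.0, 0.2, 0.8],
              [0.0, 0.0, 1.0]])
p = evolve_distribution(P, [1.0, 0.0, 0.0], steps=50)
```

Regions whose distributions diverge under repeated application of such a matrix are, in this picture, the separated regions of the flow.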
RIP-REMOTE INTERACTIVE PARTICLE-TRACER
NASA Technical Reports Server (NTRS)
Rogers, S. E.
1994-01-01
Remote Interactive Particle-tracing (RIP) is a distributed-graphics program which computes particle traces for computational fluid dynamics (CFD) solution data sets. A particle trace is a line which shows the path a massless particle in a fluid will take; it is a visual image of where the fluid is going. The program is able to compute and display particle traces at a speed of about one trace per second because it runs on two machines concurrently. The data used by the program is contained in two files. The solution file contains data on density, momentum and energy quantities of a flow field at discrete points in three-dimensional space, while the grid file contains the physical coordinates of each of the discrete points. RIP requires two computers. A local graphics workstation interfaces with the user for program control and graphics manipulation, and a remote machine interfaces with the solution data set and performs time-intensive computations. The program utilizes two machines in a distributed mode for two reasons. First, the data to be used by the program is usually generated on the supercomputer. RIP avoids having to convert and transfer the data, eliminating any memory limitations of the local machine. Second, as computing the particle traces can be computationally expensive, RIP utilizes the power of the supercomputer for this task. Although the remote site code was developed on a CRAY, it is possible to port this to any supercomputer class machine with a UNIX-like operating system. Integration of a velocity field from a starting physical location produces the particle trace. The remote machine computes the particle traces using the particle-tracing subroutines from PLOT3D/AMES, a CFD post-processing graphics program available from COSMIC (ARC-12779). These routines use a second-order predictor-corrector method to integrate the velocity field. Then the remote program sends graphics tokens to the local machine via a remote-graphics library. 
The local machine interprets the graphics tokens and draws the particle traces. The program is menu driven. RIP is implemented on the Silicon Graphics IRIS 3000 (local workstation) with the IRIX operating system and on the CRAY-2 (remote station) with the UNICOS 1.0 or 2.0 operating system. The IRIS 4D can be used in place of the IRIS 3000. The program is written in C (67%) and FORTRAN 77 (43%) and has an IRIS memory requirement of 4 MB. The remote and local stations must use the same user ID. PLOT3D/AMES unformatted data sets are required for the remote machine. The program was developed in 1988.
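The second-order predictor-corrector integration used for the traces is, in spirit, a Heun step. A minimal sketch for a steady velocity field follows; the exact PLOT3D scheme details are not given in the text, so this is an assumed form:

```python
import numpy as np

def trace(velocity, x0, dt, n_steps):
    # Integrate a particle trace with a second-order
    # predictor-corrector (Heun) step: an Euler predictor followed by
    # a trapezoidal corrector.
    path = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = path[-1]
        v1 = velocity(x)            # slope at the current point
        x_pred = x + dt * v1        # predictor (Euler) step
        v2 = velocity(x_pred)       # slope at the predicted point
        path.append(x + 0.5 * dt * (v1 + v2))  # corrector
    return np.array(path)

# Circular test flow: traces should stay close to the unit circle.
circ = lambda x: np.array([-x[1], x[0]])
p = trace(circ, [1.0, 0.0], dt=0.01, n_steps=100)
```

For the real program the `velocity` callback would interpolate the discrete solution file at the particle's grid location rather than evaluate an analytic field.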
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
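The idea of treating execution-time imbalance as a residual to be relaxed iteratively can be sketched in one dimension. This is a plain relaxation sweep of our own devising, without the multigrid smoother or 2D orthogonal decomposition described in the abstract:

```python
def rebalance(boundaries, times, relax=0.5):
    # One relaxation sweep of an iterative load balancer: move each
    # interior sub-domain boundary into the slower (more expensive)
    # side so the execution-time residual t_left - t_right shrinks.
    # boundaries: sorted positions b_0 < b_1 < ... < b_n;
    # times: measured execution time of each of the n sub-domains.
    new = list(boundaries)
    for i in range(1, len(boundaries) - 1):
        width_left = boundaries[i] - boundaries[i - 1]
        t_left, t_right = times[i - 1], times[i]
        # Positive shift moves the boundary right, shrinking the
        # right-hand sub-domain when it is the slower one.
        shift = relax * width_left * (t_right - t_left) / (t_left + t_right)
        new[i] = boundaries[i] + shift
    return new

# Right-hand process is 3x slower: the shared boundary moves right,
# shrinking the expensive sub-domain.
b = rebalance([0.0, 0.5, 1.0], [1.0, 3.0])
```

Repeating such sweeps while re-measuring per-process times is what makes the adjustment cheap enough to apply frequently during a run.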
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.
2011-01-01
Direct computer simulations of electromagnetic scattering by discrete random media have become an active area of research. In this progress review, we summarize and analyze our main results obtained by means of numerically exact computer solutions of the macroscopic Maxwell equations. We consider finite scattering volumes with size parameters in the range, composed of varying numbers of randomly distributed particles with different refractive indices. The main objective of our analysis is to examine whether all backscattering effects predicted by the low-density theory of coherent backscattering (CB) also take place in the case of densely packed media. Based on our extensive numerical data we arrive at the following conclusions: (i) all backscattering effects predicted by the asymptotic theory of CB can also take place in the case of densely packed media; (ii) in the case of very large particle packing density, scattering characteristics of discrete random media can exhibit behavior not predicted by the low-density theories of CB and radiative transfer; (iii) increasing the absorptivity of the constituent particles can either enhance or suppress typical manifestations of CB depending on the particle packing density and the real part of the refractive index. Our numerical data strongly suggest that spectacular backscattering effects identified in laboratory experiments and observed for a class of high-albedo Solar System objects are caused by CB.
Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio
2012-07-01
Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. 
On the other hand, when the dielectric boundary is discretized with curved surface elements, the two methods are essentially equivalent; i.e., they have comparable accuracies for the same number of elements. We find that ions in water--charges embedded in a high-dielectric medium--are harder to compute accurately than charges in a low-dielectric medium.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
NASA Astrophysics Data System (ADS)
Fishkova, T. Ya.
2017-06-01
Using computer simulation, I have determined the parameters of a simple multichannel charged-particle analyzer that I have proposed, which has the form of a cylindrical capacitor with a discrete outer cylinder and closed ends, over a wide range of simultaneously recorded energies (E_max/E_min = 100). Introducing an additional cylindrical electrode of small dimensions near the front end of the system improves the resolution by more than an order of magnitude in the low-energy region. At the same time, the energy resolution of the analyzer over the entire energy range is ρ = (4-6) × 10^-3.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: (i) the proper formulation of particle methods at the molecular and continuous level for the discretization of the governing equations, (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation, (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.
Computational plasticity algorithm for particle dynamics simulations
NASA Astrophysics Data System (ADS)
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
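The explicit special case mentioned above, a conventional soft-sphere DEM step, can be sketched with a linear spring-dashpot normal contact. The stiffness and damping parameters below are illustrative, and the implicit (contact dynamics) branch is not shown:

```python
import numpy as np

def dem_step(x, v, m, r, k, c, dt):
    # One explicit soft-sphere DEM time step with a linear
    # spring-dashpot normal contact, using semi-implicit Euler:
    # update velocities from contact forces, then positions.
    f = np.zeros_like(x)
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            d = x[j] - x[i]
            dist = np.linalg.norm(d)
            overlap = r[i] + r[j] - dist
            if overlap > 0.0:
                normal = d / dist
                vn = np.dot(v[j] - v[i], normal)  # < 0 while closing
                fmag = k * overlap - c * vn       # spring + dashpot
                f[i] -= fmag * normal
                f[j] += fmag * normal
    v = v + dt * f / m[:, None]
    x = x + dt * v
    return x, v

# Head-on collision of two overlapping spheres: the contact force
# decelerates both particles.
x = np.array([[0.0, 0.0], [0.15, 0.0]])
v = np.array([[1.0, 0.0], [-1.0, 0.0]])
m = np.array([1.0, 1.0])
r = np.array([0.1, 0.1])
x2, v2 = dem_step(x, v, m, r, k=1e4, c=5.0, dt=1e-4)
```

In the plasticity framing, this explicit update is the limiting case of the same return-mapping scheme that, with implicit time stepping, yields contact dynamics.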
Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz
NASA Astrophysics Data System (ADS)
Vanicat, Matthieu
2018-04-01
We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.
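A two-sub-step discrete-time parallel update can be illustrated with a simple stochastic hop rule. This sketch uses a generic hop probability on alternating bonds, not the integrable R-matrix weights constructed in the paper:

```python
import random

def floquet_step(config, p, rng):
    # Two-sub-step discrete-time update of a periodic exclusion chain:
    # sub-step 0 updates even bonds (0-1, 2-3, ...), sub-step 1 the
    # odd bonds. A particle hops right across a bond with probability
    # p if the target site is empty. Requires an even number of sites
    # so each sub-step's bonds are disjoint.
    n = len(config)
    for parity in (0, 1):
        for i in range(parity, n, 2):
            j = (i + 1) % n
            if config[i] == 1 and config[j] == 0 and rng.random() < p:
                config[i], config[j] = 0, 1
    return config

rng = random.Random(42)
config = [1, 0, 1, 0, 0, 1]
for _ in range(100):
    config = floquet_step(config, p=0.5, rng=rng)
```

Because each sub-step acts on disjoint bonds, all updates within a sub-step commute, which is what makes the dynamics a genuinely parallel update.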
An Electron is the God Particle
NASA Astrophysics Data System (ADS)
Wolff, Milo
2001-04-01
The philosophers and physicists Clifford, Mach, Einstein, Weyl, Dirac and Schroedinger believed that only a wave structure of particles could satisfy experiment and fulfill reality. A quantum Wave Structure of Matter is described here. It predicts the natural laws more accurately and completely than classic laws. Einstein reasoned that the universe depends on particles which are "spherically, spatially extended in space" and "Hence a discrete material particle has no place as a fundamental concept in a field theory." Thus the discrete point particle was wrong. He deduced the true electron is primal because its force range is infinite. Now, it is found the electron's wave structure contains the laws of Nature that rule the universe. The electron plays the role of creator - the God particle. Electron structure is a pair of spherical outward/inward quantum waves, convergent to a center in 3D space. This wave pair creates a h/4pi quantum spin when the in-wave spherically rotates to become the out-wave. Both waves form a spinor satisfying the Dirac Equation. Thus, the universe is binary like a computer. Reference: http://members.tripod.com/mwolff
Discrete Particle Model for Porous Media Flow using OpenFOAM at Intel Xeon Phi Coprocessors
NASA Astrophysics Data System (ADS)
Shang, Zhi; Nandakumar, Krishnaswamy; Liu, Honggao; Tyagi, Mayank; Lupo, James A.; Thompson, Karten
2015-11-01
The discrete particle model (DPM) in OpenFOAM was used to study the turbulent solid particle suspension flows through the porous media of a natural dual-permeability rock. The 2D and 3D pore geometries of the porous media were generated by sphere packing with a radius ratio of 3. The porosity is about 38%, the same as the natural dual-permeability rock. In the 2D case, the mesh reaches 5 million cells with 1 million solid particles, and in the 3D case, above 10 million cells with 5 million solid particles. The solid particle sizes follow a Gaussian distribution from 20 μm to 180 μm with a mean of 100 μm. Through the numerical simulations, not only was the HPC performance studied using Intel Xeon Phi coprocessors, but the flow behavior of large-scale solid suspension flows in porous media was also studied. The authors would like to thank the support by IPCC@LSU-Intel Parallel Computing Center (LSU # Y1SY1-1) and the HPC resources at Louisiana State University (http://www.hpc.lsu.edu).
Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation
Daniel Buscombe,; Rubin, David M.
2012-01-01
In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
Development of soft-sphere contact models for thermal heat conduction in granular flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A. B.; Pannala, S.; Ma, Z.
2016-06-08
Conductive heat transfer to flowing particles occurs when two particles (or a particle and wall) come into contact. The direct conduction between the two bodies depends on the collision dynamics, namely the size of the contact area and the duration of contact. For soft-sphere discrete-particle simulations, it is computationally expensive to resolve the true collision time because doing so would require a restrictively small numerical time step. To improve the computational speed, it is common to increase the 'softness' of the material to artificially increase the collision time, but doing so affects the heat transfer. In this work, two physically-based correction terms are derived to compensate for the increased contact area and time stemming from artificial particle softening. By including both correction terms, the impact that artificial softening has on the conductive heat transfer is removed, thus enabling simulations at greatly reduced computational times without sacrificing physical accuracy.
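The scalings behind such softening corrections follow from Hertz contact theory, which gives collision time t_c ~ E^(-2/5) and maximum contact radius a_max ~ E^(-1/5). The sketch below applies those standard exponents; the combined heat-transfer factor assumes conduction Q ~ a_max * t_c, which is an illustrative assumption and may differ from the paper's exact correction terms:

```python
def hertz_softening_factors(f):
    # f = E_soft / E_real (f < 1 when the material is artificially
    # softened). Softening inflates the contact time by f^(-2/5) and
    # the contact radius by f^(-1/5); the returned multipliers rescale
    # the softened values back toward their physical magnitudes.
    time_factor = f ** 0.4      # rescales the artificially long contact time
    radius_factor = f ** 0.2    # rescales the artificially large contact radius
    # Combined heat-transfer correction under the assumption Q ~ a * t_c.
    return time_factor, radius_factor, time_factor * radius_factor

# Modulus softened by five orders of magnitude, a common DEM expedient.
t_f, a_f, q_f = hertz_softening_factors(f=1e-5)
```

The weak one-fifth-power exponents are what make aggressive softening tolerable: even a large reduction in stiffness produces only a modest, and correctable, distortion of the contact mechanics.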
Kojic, Milos; Filipovic, Nenad; Tsuda, Akira
2012-01-01
A multiscale procedure to couple a mesoscale discrete particle model and a macroscale continuum model of incompressible fluid flow is proposed in this study. We call this procedure the mesoscopic bridging scale (MBS) method since it is developed on the basis of the bridging scale method for coupling molecular dynamics and finite element models [G.J. Wagner, W.K. Liu, Coupling of atomistic and continuum simulations using a bridging scale decomposition, J. Comput. Phys. 190 (2003) 249–274]. We derive the governing equations of the MBS method and show that the differential equations of motion of the mesoscale discrete particle model and finite element (FE) model are only coupled through the force terms. Based on this coupling, we express the finite element equations, which rely on the Navier–Stokes and continuity equations, in a way that the internal nodal FE forces are evaluated using viscous stresses from the mesoscale model. The dissipative particle dynamics (DPD) method for the discrete particle mesoscale model is employed. The entire fluid domain is divided into a local domain and a global domain. Fluid flow in the local domain is modeled with both the DPD and FE methods, while fluid flow in the global domain is modeled by the FE method only. The MBS method is suitable for modeling complex (colloidal) fluid flows, where continuum methods are sufficiently accurate only in the large fluid domain, while small, local regions of particular interest require detailed modeling by mesoscopic discrete particles. Solved examples, simple Poiseuille and driven cavity flows, illustrate the applicability of the proposed MBS method. PMID:23814322
RT DDA: A hybrid method for predicting the scattering properties by densely packed media
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D.
2017-12-01
The most accurate approaches to predicting the scattering properties of particulate media are based on exact solutions of the Maxwell equations (MEs), such as the T-matrix and discrete dipole methods. Applying these techniques to optically thick targets is a challenging problem due to the large-scale computations involved, so they are usually substituted by phenomenological radiative transfer (RT) methods. On the other hand, the RT technique is of questionable validity in media with large particle packing densities. In recent works, we used numerically exact ME solvers to examine the effects of particle concentration on the polarized reflection properties of plane-parallel random media. The simulations were performed for plane-parallel layers of wavelength-sized spherical particles, and the results were compared with RT predictions. We have shown that the RTE results monotonically converge to the exact solution as the particle volume fraction becomes smaller, and one can observe a nearly perfect fit for packing densities of 2%-5%. This study describes a hybrid technique composed of exact and numerical scalar RT methods. The exact methodology in this work is the plane-parallel discrete dipole approximation, whereas the numerical method is based on the adding and doubling method. This approach not only decreases the computational time owing to the RT method but also includes the interference and multiple-scattering effects, so it may be applicable to large particle density conditions.
Comprehensive Thematic T-Matrix Reference Database: A 2014-2015 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
2015-01-01
The T-matrix method is one of the most versatile and efficient direct computer solvers of the macroscopic Maxwell equations and is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper is the seventh update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists a number of earlier publications overlooked previously.
Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin
2012-04-23
Light scattering by light absorbing carbon (LAC) aggregates encapsulated into sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, visible, and IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. Neither model matches the reference results sufficiently well. Remarkably, more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.
2017-01-01
In this paper, we discuss some aspects of numerical modeling of electromagnetic scattering by discrete random medium by using numerically exact solutions of the macroscopic Maxwell equations. Typical examples of such media are clouds of interstellar dust, clouds of interplanetary dust in the Solar system, dusty atmospheres of comets, particulate planetary rings, clouds in planetary atmospheres, aerosol particles with numerous inclusions and so on. Our study is based on the results of extensive computations of different characteristics of electromagnetic scattering obtained by using the superposition T-matrix method which represents a direct computer solver of the macroscopic Maxwell equations for an arbitrary multisphere configuration. As a result, in particular, we clarify the range of applicability of the low-density theories of radiative transfer and coherent backscattering as well as of widely used effective-medium approximations.
SPAMCART: a code for smoothed particle Monte Carlo radiative transfer
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2016-10-01
We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
Remote creation of hybrid entanglement between particle-like and wave-like optical qubits
NASA Astrophysics Data System (ADS)
Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien
2014-07-01
The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2016-01-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences.
Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.
Mishchenko, Michael I; Dlugach, Janna M; Yurkin, Maxim A; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R Lee; Travis, Larry D; Yang, Ping; Zakharova, Nadezhda T
2016-05-16
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences.
Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2018-01-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. 
Starting from the microscopic Maxwell–Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies. PMID:29657355
Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations
Venkatesh Babu, Kumar Kulkarni, Sanjay...
2016-06-12
...buried in soil viz., (1) coupled discrete element & particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The ...DEM_PGM and identify the limitations/strengths compared to the ALE method. The Discrete Element Method (DEM) can model individual particles directly, and ...
An updated Lagrangian particle hydrodynamics (ULPH) for Newtonian fluids
NASA Astrophysics Data System (ADS)
Tu, Qingsong; Li, Shaofan
2017-11-01
In this work, we have developed an updated Lagrangian particle hydrodynamics (ULPH) for Newtonian fluids. Unlike smoothed particle hydrodynamics, the non-local particle hydrodynamics formulation proposed here is consistent and convergent. Unlike state-based peridynamics, the discrete particle dynamics proposed here has no internal material bond between particles and is not formulated with respect to an initial or fixed referential configuration. Specifically, we have shown that (1) the non-local updated Lagrangian particle hydrodynamics formulation converges to the conventional local fluid mechanics formulation; (2) the non-local updated Lagrangian particle hydrodynamics can capture arbitrary flow discontinuities without any changes in the formulation; and (3) the proposed non-local particle hydrodynamics is computationally efficient and robust.
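The updated-Lagrangian idea can be sketched minimally (my illustration, not the authors' formulation): field quantities are reconstructed from neighbors in the current configuration only, so no reference configuration or material bond list is ever stored. Here a 1D kernel density sum plays that role.

```python
import math

def gaussian_kernel(r, h):
    # normalized 1D Gaussian smoothing kernel with smoothing length h
    return math.exp(-(r / h) ** 2) / (h * math.sqrt(math.pi))

def ulph_density(positions, mass, h):
    # Density from a kernel sum over CURRENT particle positions; after
    # every advection step the neighbor set is simply rebuilt, so no
    # initial (referential) configuration enters the formulation.
    return [sum(mass * gaussian_kernel(abs(xi - xj), h) for xj in positions)
            for xi in positions]
```

On a uniform particle distribution the sum reproduces the exact density in the interior, which is the consistency property the abstract emphasizes.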
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena.
Watson, Erkai; Steinhauser, Martin O
2017-04-02
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
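The energy-conserving character of particle-based contact dynamics can be sketched with a toy DEM collision (a linear repulsive contact spring stands in for the paper's interaction potentials; all parameter values are illustrative): two equal spheres collide head-on and, for an elastic contact, exchange velocities.

```python
def dem_head_on_collision(v1=1.0, v2=-1.0, k=1.0e4, radius=0.5, m=1.0,
                          dt=2.0e-5, steps=100000):
    # Two equal spheres approach head-on on a line; the contact force is
    # a linear repulsive spring on the overlap. Symplectic (semi-implicit)
    # Euler integration keeps the elastic collision energy-conserving.
    x1, x2 = -1.0, 1.0
    for _ in range(steps):
        overlap = 2.0 * radius - (x2 - x1)
        f = k * overlap if overlap > 0.0 else 0.0
        v1 -= f / m * dt          # contact pushes sphere 1 back
        v2 += f / m * dt          # ...and sphere 2 forward
        x1 += v1 * dt
        x2 += v2 * dt
    return v1, v2
```

Equal masses in an elastic head-on collision exchange velocities exactly; the integration reproduces this up to a small discretization error, mirroring the "very stable, energy-conserving" behavior reported in the abstract.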
NASA Astrophysics Data System (ADS)
Leigh, Nathan W. C.; Wegsman, Shalma
2018-05-01
We present a formalism for constructing schematic diagrams to depict chaotic three-body interactions in Newtonian gravity. This is done by decomposing each interaction into a series of discrete transformations in energy- and angular momentum-space. Each time a transformation is applied, the system changes state as the particles re-distribute their energy and angular momenta. These diagrams have the virtue of containing all of the quantitative information needed to fully characterize most bound or unbound interactions through time and space, including the total duration of the interaction, the initial and final stable states in addition to every intervening temporary meta-stable state. As shown via an illustrative example for the bound case, prolonged excursions of one of the particles, which by far dominates the computational cost of the simulations, are reduced to a single discrete transformation in energy- and angular momentum-space, thereby potentially mitigating any computational expense. We further generalize our formalism to sequences of (unbound) three-body interactions, as occur in dense stellar environments during binary hardening. Finally, we provide a method for dynamically evolving entire populations of binaries via three-body scattering interactions, using a purely analytic formalism. In principle, the techniques presented here are adaptable to other three-body problems that conserve energy and angular momentum.
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena
Watson, Erkai; Steinhauser, Martin O.
2017-01-01
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength. PMID:28772739
Comprehensive T-Matrix Reference Database: A 2007-2009 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2010-01-01
The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.
Computational study of heat transfer in gas fluidization
NASA Astrophysics Data System (ADS)
Hou, Q. F.; Zhou, Z. Y.; Yu, A. B.
2013-06-01
Heat transfer in gas fluidization is investigated at the particle scale by means of a combined discrete element method and computational fluid dynamics approach. To develop an understanding of heat transfer under various conditions, the effects of a few important material properties such as particle size, the Hamaker constant and particle thermal conductivity are examined through controlled numerical experiments. It is found that convective heat transfer is dominant, and radiative heat transfer becomes important when the temperature is high. Conductive heat transfer also plays a role depending on the flow regime and material properties. The heat transfer between a fluidized bed and an immersed surface is enhanced by an increase of particle thermal conductivity, while it is little affected by Young's modulus. The findings should be useful for better understanding and predicting heat transfer in gas fluidization.
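A minimal sketch of the particle-scale convective term (not the paper's full CFD-DEM model): a single lumped-capacitance particle heated by the surrounding gas with the stagnant-gas Nusselt number Nu = 2. All material values below are illustrative assumptions.

```python
import math

def particle_temperature(t, T0=300.0, Tgas=600.0, d=1e-3, rho_p=2500.0,
                         cp=840.0, k_gas=0.026, nusselt=2.0, dt=1e-3):
    # Lumped-capacitance convective heating of one particle:
    #   m*cp*dT/dt = h*A*(Tgas - T),  h = Nu*k_gas/d  (Nu = 2 limit).
    h = nusselt * k_gas / d               # heat transfer coefficient
    area = math.pi * d ** 2               # sphere surface area
    mass = rho_p * math.pi * d ** 3 / 6   # sphere mass
    T = T0
    for _ in range(int(t / dt)):
        T += h * area * (Tgas - T) / (mass * cp) * dt
    return T
```

The particle temperature relaxes exponentially toward the gas temperature with time constant rho_p*d*cp/(6*h); in the full CFD-DEM model this per-particle balance is coupled to the locally resolved gas field and to conductive and radiative terms.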
Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan
2017-03-01
In recent years, symbiosis, as a rich source of potential engineering applications and computational models, has attracted increasing attention in the adaptive complex systems and evolutionary computing domains. Inspired by different forms of symbiotic coevolution in nature, this paper proposes a series of multi-swarm particle swarm optimizers called PS2Os, which extend the single-population particle swarm optimization (PSO) algorithm to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS2O are initiated to mimic the mutualism, commensalism, predation, and competition mechanisms, respectively. In experiments on five benchmark problems, the proposed algorithms are shown to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of symbiotic species in each PS2O version are also studied to demonstrate how the heterogeneity of the different symbiotic interrelationships affects the algorithm's performance. PS2O is then used for solving the radio frequency identification (RFID) network planning (RNP) problem with a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks, in terms of optimization accuracy and computational robustness.
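The multi-swarm idea can be sketched with a mutualism-flavored two-swarm PSO (a simplified illustration; the coefficients and topology below are my assumptions, not the published PS2O settings): each particle is attracted to its personal best, its own swarm's best, and the best of the partner swarm.

```python
import random

def sphere(x):
    # benchmark objective: sum of squares, minimum 0 at the origin
    return sum(v * v for v in x)

def ps2o_two_swarms(dim=5, swarm_size=20, iters=300, seed=3):
    rng = random.Random(seed)
    swarms = [[{'x': [rng.uniform(-5.0, 5.0) for _ in range(dim)],
                'v': [0.0] * dim, 'pb': None, 'pf': float('inf')}
               for _ in range(swarm_size)] for _ in range(2)]
    gbest, gf = [None, None], [float('inf'), float('inf')]
    for _ in range(iters):
        for s, swarm in enumerate(swarms):          # evaluate
            for p in swarm:
                f = sphere(p['x'])
                if f < p['pf']:
                    p['pf'], p['pb'] = f, p['x'][:]
                if f < gf[s]:
                    gf[s], gbest[s] = f, p['x'][:]
        for s, swarm in enumerate(swarms):          # update
            sym = gbest[1 - s]                      # partner swarm's best
            for p in swarm:
                for d in range(dim):
                    p['v'][d] = (0.6 * p['v'][d]
                                 + 1.3 * rng.random() * (p['pb'][d] - p['x'][d])
                                 + 1.3 * rng.random() * (gbest[s][d] - p['x'][d])
                                 + 0.6 * rng.random() * (sym[d] - p['x'][d]))
                    p['x'][d] += p['v'][d]
    return min(gf)
```

The extra "symbiont" attraction term is the only change relative to a standard global-best PSO update; the other symbiotic variants (commensalism, predation, competition) would alter how, and in which direction, the inter-swarm term acts.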
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
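The core of a discrete ordinates solver is the sweep: for each ordinate, cells are visited in the particle travel direction so each cell's incoming flux is already known. A minimal 1D diamond-difference sweep (an illustrative reduction of the 3D Sweep3D kernel, not its actual code) looks like this:

```python
import math

def sweep_1d(psi_in, sigma_t, q, dx, mu, n_cells):
    # One transport sweep in the +mu direction. Diamond-difference
    # closure of  mu*(psi_out - psi_in)/dx + sigma_t*psi_avg = q  with
    # psi_avg = (psi_in + psi_out)/2, solved cell by cell downstream.
    psi = psi_in
    cell_avg = []
    a = mu / dx
    for i in range(n_cells):
        psi_out = ((a - 0.5 * sigma_t) * psi + q[i]) / (a + 0.5 * sigma_t)
        cell_avg.append(0.5 * (psi + psi_out))
        psi = psi_out
    return psi, cell_avg
```

In the full 3D code this loop runs over diagonal wavefronts for every ordinate and energy group, with the source q updated by source iteration; the wavefront dependency structure is precisely what makes an efficient GPU mapping non-trivial.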
A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caflisch, R; Wang, C; Dimarco, G
2007-10-09
If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
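The two-component bookkeeping can be sketched as follows (an illustrative sketch: the cutoff and transfer probability below are arbitrary placeholders, whereas the paper derives the thermalization/dethermalization probabilities from the collision physics). The thermal component is summarized by a weight and temperature; kinetic particles are explicit velocities.

```python
import math
import random

def sample_maxwellian(temperature, rng):
    # Draw one 3D velocity from the thermal (Maxwellian) component:
    # three independent Gaussians with variance ~ temperature.
    s = math.sqrt(temperature)
    return [rng.gauss(0.0, s) for _ in range(3)]

def thermalize_step(kinetic, thermal_weight, temperature,
                    v_cut=1.5, p_therm=0.5, rng=None):
    # Transfer slow kinetic particles into the Maxwellian component
    # with probability p_therm; total particle content is conserved.
    rng = rng or random.Random(7)
    kept = []
    for v in kinetic:
        speed = math.sqrt(sum(c * c for c in v))
        if speed < v_cut * math.sqrt(temperature) and rng.random() < p_therm:
            thermal_weight += 1.0   # absorbed into the Maxwellian
        else:
            kept.append(v)
    return kept, thermal_weight
```

A thermal-kinetic collision would then pair a sample_maxwellian() draw with a particle from the kept list and apply the Takizuka & Abe or Nanbu scattering kernel to the pair.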
Comprehensive Thematic T-Matrix Reference Database: A 2015-2017 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
2017-01-01
The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.
Comprehensive thematic T-matrix reference database: A 2015-2017 update
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
2017-11-01
The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.
Computational study of the effect of gradient magnetic field in navigation of spherical particles
NASA Astrophysics Data System (ADS)
Karvelas, E. G.; Lampropoulos, N. K.; Papadimitriou, D. I.; Karakasidis, T. E.; Sarris, I. E.
2017-11-01
The use of spherical magnetic nanoparticles that are coated with drugs and can be navigated in arteries to attack tumors is proposed as an alternative to chemotherapy. Navigation of the particles is achieved via magnetic field gradients such as those produced in an MRI device. In the present work, a computational study for the evaluation of the magnitude of the gradient magnetic field required for particle navigation in Y bifurcations is presented. For this purpose, the presented method solves for the fluid flow and includes all the important forces that act on the particles in their discrete motion. The method is based on an iterative algorithm that adjusts the gradient magnetic field to minimize the particles' deviation from a desired trajectory. Using this method, the appropriate range of the gradient magnetic field for optimal navigation of nanoparticle aggregates is found.
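The adjust-to-minimize-deviation loop can be sketched with a linear toy model (my illustration: the real method runs a CFD step with all particle forces in place of the one-line "simulation" here): the gradient G is corrected each iteration in proportion to the trajectory deviation.

```python
def tune_gradient(target=0.0, drift=1.0, mobility=0.5, eta=0.8, iters=50):
    # Iteratively adjust the magnetic field gradient G so the particle's
    # final lateral position matches the desired trajectory (target).
    # Toy response model: y_final = drift + mobility * G, where "drift"
    # stands for the hydrodynamic forces and "mobility" for the magnetic
    # force response (both illustrative).
    G = 0.0
    for _ in range(iters):
        y_final = drift + mobility * G     # stand-in for the CFD step
        deviation = y_final - target
        G -= eta * deviation               # gradient-descent style update
    return G, drift + mobility * G
```

For eta * mobility < 2 the deviation contracts geometrically, so the loop converges to the gradient that cancels the drift; the paper's algorithm does the analogous correction against a fully resolved flow field.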
Comprehensive T-matrix Reference Database: A 2009-2011 Update
NASA Technical Reports Server (NTRS)
Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.
2012-01-01
The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5×10^-6 m to 70×10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar flow interactions of buoyant, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
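The physics underlying all three sizing tests is the buoyancy-drag balance of a settling sphere (Stokes' law). A one-line sketch, with illustrative quartz-in-water property values, shows the d^2 scaling that separates the fine and coarse ends of the simulated size range:

```python
def stokes_settling_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    # Terminal velocity of a small sphere from the buoyancy-drag balance
    # (Stokes' law): v = (rho_p - rho_f) * g * d^2 / (18 * mu).
    # Property values are illustrative (quartz grains in water).
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)
```

Because v scales with d^2, the 2.5 to 70 micron diameter range spans nearly three orders of magnitude in settling velocity, which is why the DEM simulation of these tests is so time-consuming.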
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
An efficient and reliable predictive method for fluidized bed simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-29
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but the computational efficiency of higher order methods is found to be superior due to faster convergence to the exact result as a function of the charge support.
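The operator-adjusted-source idea can be demonstrated in 1D (a simplified sketch under my own assumptions: a Dirichlet problem with sin(pi*x) standing in for the analytically known Green's-function potential): applying the solver's own discrete Laplacian to the exact potential yields an adjusted source, and the finite-difference solve with that source then reproduces the exact nodal potential to machine precision.

```python
import math

def adjusted_source(phi_exact, dx):
    # Apply the SAME second-order discrete Laplacian the solver uses to
    # the analytically known potential at the grid nodes.
    return [(phi_exact[i - 1] - 2.0 * phi_exact[i] + phi_exact[i + 1]) / dx ** 2
            for i in range(1, len(phi_exact) - 1)]

def poisson_solve_1d(rhs, dx, left, right):
    # Thomas algorithm for the tridiagonal system of phi'' = rhs with
    # Dirichlet boundary values (left, right).
    n = len(rhs)
    a = c = 1.0 / dx ** 2
    b = -2.0 / dx ** 2
    d = rhs[:]
    d[0] -= a * left
    d[-1] -= c * right
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return [left] + x + [right]
```

Because solver and source builder share one discrete operator, the discretization errors cancel exactly; this is the balancing of the exact Green's function against the discretized Laplacian that the abstract describes.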
Boundary integral equation analysis for suspension of spheres in Stokes flow
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Veerapaneni, Shravan
2018-06-01
We show that the standard boundary integral operators, defined on the unit sphere, for the Stokes equations diagonalize on a specific set of vector spherical harmonics, and we provide formulas for their spectra. We also derive analytical expressions for evaluating the operators away from the boundary. When two particles are located close to each other, we use a truncated series expansion to compute the hydrodynamic interaction. On the other hand, we use the standard spectrally accurate quadrature scheme to evaluate smooth integrals in the far field, and accelerate the resulting discrete sums using the fast multipole method (FMM). We employ this discretization scheme to analyze several boundary integral formulations of interest, including those arising in porous media flow, active matter and magneto-hydrodynamics of rigid particles. We provide numerical results verifying the accuracy and scaling of their evaluation.
NASA Astrophysics Data System (ADS)
Dhamale, G. D.; Tak, A. K.; Mathe, V. L.; Ghorui, S.
2018-06-01
Synthesis of yttria (Y2O3) nanoparticles in an atmospheric pressure radiofrequency inductively coupled thermal plasma (RF-ICTP) reactor has been investigated using the discrete-sectional (DS) model of particle nucleation and growth with argon as the plasma gas. Thermal and fluid dynamic information necessary for the investigation has been extracted through a rigorous computational fluid dynamic (CFD) study of the system with coupled electromagnetic equations under the extended field approach. The theoretical framework has been benchmarked against published data first, and then applied to investigate the nucleation and growth process of yttrium oxide nanoparticles in the plasma reactor using the DS model. While a variety of nucleation and growth mechanisms are suggested in the literature, the study finds that the theory of homogeneous nucleation fits well with the features observed experimentally. Significant influences of the feed rate and quench rate on the distribution of particle sizes are observed. The theoretically obtained size distribution of the particles agrees well with that observed in experiment. The different thermo-fluid dynamic environments with varied quench rates, encountered by the propagating vapor front inside the reactor under different operating conditions, are found to be primarily responsible for variations in the width of the size distribution.
Shao, Xiongjun; Lynd, Lee; Wyman, Charles; Bakker, André
2009-01-01
The model of South et al. [South et al. (1995) Enzyme Microb Technol 17(9): 797-803] for simultaneous saccharification and fermentation of cellulosic biomass is extended and modified to accommodate intermittent feeding of substrate and enzyme and cascade reactor configurations, and to be more computationally efficient. A dynamic enzyme adsorption model is found to be much more computationally efficient than the equilibrium model used previously, thus increasing the feasibility of incorporating the kinetic model in a computational fluid dynamic framework in the future. For continuous or discretely fed reactors, it is necessary to use particle conversion rather than reactor conversion in conversion-dependent hydrolysis rate laws. Whereas reactor conversion changes due to both reaction and the exit of particles from the reactor, particle conversion changes due to reaction only. Using the modified models, it is predicted that cellulose conversion increases with decreasing feeding frequency (feedings per residence time, f). A computationally efficient strategy for modeling cascade reactors involving a modified rate constant is shown to give results equivalent to an exhaustive approach that considers the distribution of particles in each successive fermenter.
CFD-DEM Onset of Motion Analysis for Application to Bed Scour Risk Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitek, M. A.; Lottes, S. A.
This CFD study with DEM was done as a part of the Federal Highway Administration’s (FHWA’s) effort to improve scour design procedures. The Computational Fluid Dynamics-Discrete Element Method (CFD-DEM) model, available in CD-Adapco’s StarCCM+ software, was used to simulate multiphase systems, mainly those which combine fluids and solids. In this method the motion of discrete solids is accounted for by DEM, which applies Newton's laws of motion to every particle. The flow of the fluid is determined by the locally averaged Navier–Stokes equations, which can be solved using the traditional CFD approach. The interactions between the fluid phase and the solids phase are modeled by use of Newton's third law. The inter-particle contact forces are included in the equations of motion. A soft-particle formulation is used, which allows particles to overlap. In this study DEM was used to model separate sediment grains and spherical particles lying on the bed with the aim of analyzing their movement under the given flow conditions. The critical shear stress causing incipient movement of the sediment was established and compared to the available experimental data. An example of scour around a cylindrical pier is considered. Various depths of the scoured bed and flow conditions were taken into account to gain a better understanding of the erosion forces existing around bridge foundations. The decay of these forces with increasing scour depth was quantified with a ‘decay function’, which shows that particles become increasingly less likely to be set in motion by flow forces as a scour hole increases in depth. Computational and experimental examples of the scoured bed around a cylindrical pier are presented.
Winding solutions for the two-particle system in 2 + 1 gravity
NASA Astrophysics Data System (ADS)
Welling, M.
1998-03-01
We use a computer to follow the evolution of two gravitating particles in a (2 + 1)-dimensional closed universe. In a closed universe there is enough energy to produce a Gott pair, i.e. a pair of particles with a tachyonic centre of mass, from regular initial data. We study such a pair and find that the particles can wind around each other with ever increasing momentum. As was shown by 't Hooft, the universe must crunch before any closed timelike curve can be traversed. We study the two-particle system and quantize it, long before this crunch happens, in the high-momentum limit. We find that both the relevant configuration variable and its conjugate momentum become discretized.
Homology groups for particles on one-connected graphs
NASA Astrophysics Data System (ADS)
Maciążek, Tomasz; Sawicki, Adam
2017-06-01
We present a mathematical framework for describing the topology of configuration spaces for particles on one-connected graphs. In particular, we compute the homology groups over integers for different classes of one-connected graphs. Our approach is based on some fundamental combinatorial properties of the configuration spaces, Mayer-Vietoris sequences for different parts of configuration spaces, and some limited use of discrete Morse theory. As one of the results, we derive the closed-form formulae for ranks of the homology groups for indistinguishable particles on tree graphs. We also give a detailed discussion of the second homology group of the configuration space of both distinguishable and indistinguishable particles. Our motivation is the search for new kinds of quantum statistics.
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
Extension of a coarse grained particle method to simulate heat transfer in fluidized beds
Lu, Liqiang; Morris, Aaron; Li, Tingwen; ...
2017-04-18
The heat transfer in a gas-solids fluidized bed is simulated with the computational fluid dynamic-discrete element method (CFD-DEM) and the coarse grained particle method (CGPM). In CGPM, fewer numerical particles and their collisions are tracked by lumping several real particles into a computational parcel. Here, the assumption is that the real particles inside a coarse grained particle (CGP) are of the same species and share identical physical properties, including density, diameter and temperature. The parcel-fluid convection term in CGPM is calculated using the same method as in DEM. For all other heat transfer mechanisms, we derive in this study mathematical expressions that relate the new heat transfer terms for CGPM to those traditionally derived in DEM. This newly derived CGPM model is verified and validated by comparing the results with CFD-DEM simulation results and experimental data. The numerical results compare well with experimental data for both hydrodynamics and temperature profiles. Finally, the proposed CGPM model can be used for fast and accurate simulations of heat transfer in large scale gas-solids fluidized beds.
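The parcel idea above can be illustrated with a toy sketch. This is not the paper's derived model: the linear weight scaling of the conduction term and all names (`dem_conduction`, `cgpm_conduction`, the lumped conductance `k`) are illustrative assumptions; the paper derives the actual CGPM heat transfer closures.

```python
def dem_conduction(k, t_i, t_j):
    """Per-contact conductive heat rate (W) between two real particles,
    with a lumped conductance k (W/K): the DEM-level term."""
    return k * (t_j - t_i)

def cgpm_conduction(k, t_i, t_j, w):
    """Toy CGPM closure: one parcel-parcel contact stands in for w real
    particle contacts, so the DEM rate is scaled by the parcel weight w.
    (The linear scaling here is an illustrative assumption, not the
    paper's derived expression.)"""
    return w * dem_conduction(k, t_i, t_j)

rate_dem = dem_conduction(2.0, 300.0, 350.0)        # one real contact
rate_cgpm = cgpm_conduction(2.0, 300.0, 350.0, 10)  # parcel of 10 particles
```

Whatever the exact closure, the design requirement is the one stated in the abstract: the parcel-level terms must reproduce the aggregate heat exchange of the real particles they replace.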
Numerical investigation of fluid-particle interactions for embolic stroke
NASA Astrophysics Data System (ADS)
Mukherjee, Debanjan; Padilla, Jose; Shadden, Shawn C.
2016-04-01
Roughly one-third of all strokes are caused by an embolus traveling to a cerebral artery and blocking blood flow in the brain. The objective of this study is to gain a detailed understanding of the dynamics of embolic particles within arteries. A patient computed tomography image is used to construct a three-dimensional model of the carotid bifurcation. An idealized carotid bifurcation model with the same vessel diameters was also constructed for comparison. Blood flow velocities and embolic particle trajectories are resolved using a coupled Euler-Lagrange approach. Blood is modeled as a Newtonian fluid, discretized using the finite volume method, with physiologically appropriate inflow and outflow boundary conditions. The embolus trajectory is modeled using Lagrangian particle equations accounting for embolus interaction with blood as well as the vessel wall. Both one- and two-way fluid-particle coupling are considered, the latter being implemented using momentum sources augmented to the discretized flow equations. It was observed that for small-to-moderate particle sizes (relative to vessel diameters), the estimated particle distribution ratios with and without the inclusion of two-way fluid-particle momentum exchange were similar. The maximum observed differences in distribution ratio with and without the coupling were higher for the idealized bifurcation model. Additionally, the distribution was found to match the volumetric flow distribution reasonably well for the idealized model, while a notable deviation from the volumetric flow split was observed in the anatomical model. It was also observed from an analysis of particle path lines that particle interaction with helical flow, characteristic of anatomical vasculature models, could play a prominent role in the transport of embolic particles. The results therefore indicate that flow helicity could be an important hemodynamic indicator for the analysis of embolus particle transport.
Additionally, in the presence of helical flow and vessel curvature, the inclusion of two-way momentum exchange was found to have only a secondary effect on the transport of small-to-moderate embolus particles, so one-way coupling can serve as a reasonable approximation, yielding substantial savings in computational resources.
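In its simplest one-way-coupled form, the Lagrangian transport model described in this record reduces to a linear drag law, dv/dt = (u_f(x) - v)/tau_p, with particle response time tau_p. A minimal explicit-Euler sketch follows; the full embolus model also includes wall interaction and, for two-way coupling, momentum sources in the flow equations, all omitted here.

```python
def advect_particle(x, v, fluid_velocity, tau_p, dt, steps):
    """Explicit-Euler integration of dv/dt = (u_f(x) - v) / tau_p, the
    simplest one-way-coupled drag law; tau_p is the particle response
    time. Wall interaction and two-way momentum sources are omitted."""
    for _ in range(steps):
        u = fluid_velocity(x)
        v = tuple(vi + dt * (ui - vi) / tau_p for vi, ui in zip(v, u))
        x = tuple(xi + dt * vi for xi, vi in zip(x, v))
    return x, v

# In a uniform flow the particle velocity relaxes to the fluid velocity.
x, v = advect_particle((0.0, 0.0), (0.0, 0.0),
                       lambda pos: (1.0, 0.0), tau_p=0.1, dt=0.01, steps=500)
```

The relaxation happens on the timescale tau_p, which is why small particles (small tau_p) tend to follow the volumetric flow split while larger ones deviate from it.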
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations.
A multilevel-skin neighbor list algorithm for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei
2018-01-01
Searching for interaction pairs and organizing the interaction computations are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps: a thicker skin reduces the frequency of list updating, but this saving is offset by the additional distance checks required for the extra candidate particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which can reduce unnecessary computation on inter-particle distances. The performance advantages over traditional methods are then analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized for various discrete simulations using neighbor lists.
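The trade-off described above (a thicker skin means fewer rebuilds but more stored candidates to distance-check) can be seen in a minimal single-level Verlet-list sketch. The paper's contribution is a multilevel refinement of this idea; the O(N^2) search below is for clarity only, and all names are illustrative.

```python
import numpy as np

def build_neighbor_list(pos, cutoff, skin):
    """Verlet neighbor list with enlarged radius cutoff + skin. The list
    stays valid until some particle moves farther than skin / 2, so a
    thicker skin means fewer rebuilds but more candidate pairs stored."""
    r_list = cutoff + skin
    pairs = []
    for i in range(len(pos) - 1):
        # O(N^2) all-pairs search for clarity; real codes use cell lists.
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < r_list)[0]:
            pairs.append((i, i + 1 + int(j)))
    return pairs

def interacting_pairs(pos, pairs, cutoff):
    """Distance-check the stored candidates down to true interactions."""
    return [(i, j) for i, j in pairs
            if np.linalg.norm(pos[j] - pos[i]) < cutoff]

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(200, 3))
pairs = build_neighbor_list(pos, cutoff=1.5, skin=0.5)
close = interacting_pairs(pos, pairs, cutoff=1.5)
```

Every force evaluation pays for the distance checks on the full candidate list, which is exactly the per-step cost a multilevel skin is designed to cut.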
The Universe according to Schroedinger and Milo
NASA Astrophysics Data System (ADS)
Wolff, Milo
2009-10-01
The puzzle of the electron arises from the belief that it is a discrete particle. Schroedinger (1937) eliminated discrete particles, writing: What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). Thus he rejected wave-particle duality. Schroedinger's concept was developed by Milo Wolff using a scalar wave equation in 3D quantum space to find wave solutions. The resulting Wave Structure of Matter (WSM) contains all the electron's properties, including the Schroedinger equation. Further, Newton's law F=ma is no longer a puzzle; it originates from Mach's principle of inertia (1883), which depends on the space medium and the WSM. These are the origin of all the natural laws. Carver Mead (1999) at Caltech used the WSM to design Intel micro-chips and to correct errors in Maxwell's equations. Applications of the WSM describe matter at molecular dimensions: industrial alloys, catalysts, biology and medicine, molecular computers and memories. See the book ``Schroedinger's Universe'' at Amazon.com. The number of pioneers of the WSM is growing rapidly. Some are: SpaceAndMotion.com, QuantumMatter.com, treeincarnation.com/audio/milowolff.htm, daugerresearch.com/orbitals/index.shtml, and glafreniere.com/matter.html.
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2017-07-01
A key part of emerging advanced additive manufacturing methods is the deposition of specialized particulate mixtures of materials on substrates. For example, in many cases these materials are polydisperse powder mixtures whereby one set of particles is chosen with the objective to electrically, thermally or mechanically functionalize the overall mixture material and another set of finer-scale particles serves as an interstitial filler/binder. Often, achieving controllable, precise, deposition is difficult or impossible using mechanical means alone. It is for this reason that electromagnetically-driven methods are being pursued in industry, whereby the particles are ionized and an electromagnetic field is used to guide them into place. The goal of this work is to develop a model and simulation framework to investigate the behavior of a deposition as a function of an applied electric field. The approach develops a modular discrete-element type method for the simulation of the particle dynamics, which provides researchers with a framework to construct computational tools for this growing industry.
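A minimal sketch of the kind of discrete-element update described above: a single charged particle guided by an applied electric field against viscous drag. All parameter values are illustrative assumptions, and a real DEM deposition model adds particle-particle contacts, gravity, and a coupled field solver.

```python
def dem_step(x, v, q, m, e_field, drag, dt):
    """One explicit step of a minimal discrete-element-style update:
    F = q*E - c*v (electrostatic guidance plus viscous damping).
    Particle-particle contacts, the heart of full DEM, are omitted."""
    f = tuple(q * e - drag * vi for e, vi in zip(e_field, v))
    v = tuple(vi + dt * fi / m for vi, fi in zip(v, f))
    x = tuple(xi + dt * vi for xi, vi in zip(x, v))
    return x, v

# Illustrative parameters: a small charged particle steered along x.
x, v = (0.0, 0.0), (0.0, 0.0)
for _ in range(1000):
    x, v = dem_step(x, v, q=1e-6, m=1e-9, e_field=(10.0, 0.0),
                    drag=1e-6, dt=1e-6)
```

With these numbers the particle relaxes toward the terminal velocity qE/c = 10 m/s on the timescale m/c; after one such timescale (the 1000 steps above) it has reached roughly 63% of it. Tuning E against the drag is precisely the "guiding into place" lever the abstract describes.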
A patient-specific CFD-based study of embolic particle transport for stroke
NASA Astrophysics Data System (ADS)
Mukherjee, Debanjan; Shadden, Shawn C.
2014-11-01
Roughly 1/3 of all strokes are caused by an embolus traveling to a cerebral artery and blocking blood flow in the brain. A detailed understanding of the dynamics of embolic particles within arteries is the basis for this study. Blood flow velocities and emboli trajectories are resolved using a coupled Euler-Lagrange approach. A computer model of the major arteries is extracted from patient image data. Blood is modeled as a Newtonian fluid, discretized using the finite volume method, with physiologically appropriate inflow and outflow boundary conditions. The embolus trajectory is modeled using Lagrangian particle equations accounting for embolus interaction with blood as well as the vessel wall. Both one- and two-way fluid-particle coupling are considered, the latter being implemented using momentum sources augmented to the discretized flow equations. The study determines individual embolus paths up to the arteries supplying the brain, compares the size-dependent distribution of emboli amongst vessels superior to the aortic arch, and examines the role of fully coupled blood-embolus interaction in modifying both trajectory and distribution relative to one-way coupling. Specifically, for intermediate particle sizes, the model developed will better characterize the risks for embolic stroke. American Heart Association (AHA) Grant: Embolic Stroke: Anatomic and Physiologic Insights from Image-Based CFD.
NASA Astrophysics Data System (ADS)
Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.; Marone, Chris; Carmeliet, Jan
2017-05-01
The presence of fault gouge has considerable influence on the slip properties of tectonic faults and the physics of earthquake rupture. The presence of fluids within faults also plays a significant role in faulting and earthquake processes. In this paper, we present 3-D discrete element simulations of dry and fluid-saturated granular fault gouge and analyze the effect of fluids on stick-slip behavior. Fluid flow is modeled using computational fluid dynamics based on the Navier-Stokes equations for an incompressible fluid, modified to take into account the presence of particles. Analysis of a long train of slip events shows that (1) the drop in shear stress, (2) the compaction of the granular layer, and (3) the kinetic energy release during slip all increase in magnitude in the presence of an incompressible fluid, compared to dry conditions. We also observe that, on average, the recurrence interval between slip events is longer for fluid-saturated granular fault gouge than for the dry case. This observation is consistent with the occurrence of larger events in the presence of fluid. It is found that the increase in kinetic energy during slip events for saturated conditions can be attributed to the increased fluid flow during slip. Our observations emphasize the important role that fluid flow and fluid-particle interactions play in tectonic fault zones and show in particular how discrete element method (DEM) models can help in understanding the hydromechanical processes that dictate fault slip.
Self-organizing magnetic beads for biomedical applications
NASA Astrophysics Data System (ADS)
Gusenbauer, Markus; Kovacs, Alexander; Reichel, Franz; Exl, Lukas; Bance, Simon; Özelt, Harald; Schrefl, Thomas
2012-03-01
In the field of biomedicine, magnetic beads are used for drug delivery and for hyperthermia treatment. Here we propose to use self-organized bead structures to isolate circulating tumor cells using lab-on-chip technologies. Typically, blood flows past microposts functionalized with antibodies for circulating tumor cells. Creating these microposts from interacting magnetic beads makes it possible to tune the geometry in size, position and shape. We developed a simulation tool that combines micromagnetics and discrete particle dynamics in order to design micropost arrays made of interacting beads. The simulation takes into account the viscous drag of the blood flow, magnetostatic interactions between the magnetic beads and gradient forces from external aligned magnets. We developed a particle-particle particle-mesh method for efficient computation of the magnetic force and torque acting on the particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce the number of time loops as far as possible. Here three coagulation rules are highlighted and it is found that constructing an appropriate coagulation rule provides a route to attain a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance–rejection processes by a single loop over all particles, and meanwhile the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is thereby reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the multiple cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell).
These accelerating approaches to PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach can attain a very favorable improvement in cost without sacrificing computational accuracy.
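The majorant-based acceptance-rejection step can be sketched in miniature. This sketch drops the differential weighting and the Markov jump bookkeeping that are the paper's actual contributions; the function name and the constant-kernel example are illustrative.

```python
import random

def accept_reject_event(volumes, kernel, k_max, rng):
    """Draw one candidate coagulation pair uniformly and accept it with
    probability K(v_i, v_j) / K_max, where K_max majorizes the kernel
    over all pairs. Returns the pair on acceptance, None on rejection.
    This avoids the double loop over all pairs: only the accepted pair's
    kernel is ever evaluated exactly."""
    i, j = rng.sample(range(len(volumes)), 2)
    if rng.random() < kernel(volumes[i], volumes[j]) / k_max:
        return i, j
    return None

# Constant kernel equal to its majorant: every candidate is accepted.
rng = random.Random(1)
vols = [1.0] * 10
hit = accept_reject_event(vols, lambda a, b: 1.0, k_max=1.0, rng=rng)
```

A looser majorant K_max only raises the rejection rate; it never biases the statistics of the accepted events, which is what makes the single-loop estimate safe.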
NASA Technical Reports Server (NTRS)
Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.
2012-01-01
Three terms, ''Waterman's T-matrix method'', ''extended boundary condition method (EBCM)'', and ''null field method'', have been used interchangeably in the literature to indicate a method based on surface integral equations to calculate the T-matrix. Unlike these methods, the invariant imbedding method (IIM) calculates the T-matrix by the use of a volume integral equation. In addition, the standard separation of variables method (SOV) can be applied to compute the T-matrix of a sphere centered at the origin of the coordinate system and having a maximal radius such that the sphere remains inscribed within a nonspherical particle. This study explores the feasibility of a numerical combination of the IIM and the SOV, hereafter referred to as the IIM+SOV method, for computing the single-scattering properties of nonspherical dielectric particles, which are, in general, inhomogeneous. The IIM+SOV method is shown to be capable of solving light-scattering problems for large nonspherical particles where the standard EBCM fails to converge. The IIM+SOV method is flexible and applicable to inhomogeneous particles and aggregated nonspherical particles (overlapped circumscribed spheres) representing a challenge to the standard superposition T-matrix method. The IIM+SOV computational program, developed in this study, is validated against EBCM-simulated spheroid and cylinder cases with excellent numerical agreement (up to four decimal places). In addition, solutions for cylinders with large aspect ratios, inhomogeneous particles, and two-particle systems are compared with results from discrete dipole approximation (DDA) computations, and comparisons with the improved geometric-optics method (IGOM) are found to be quite encouraging.
Quasi-three-dimensional particle imaging with digital holography.
Kemppinen, Osku; Heinson, Yuli; Berg, Matthew
2017-05-01
In this work, approximate three-dimensional structures of microparticles are generated with digital holography using an automated focus method. This is done by stacking a collection of silhouette-like images of a particle reconstructed from a single in-line hologram. The method enables estimation of the particle size in the longitudinal and transverse dimensions. Using the discrete dipole approximation, the method is tested computationally by simulating holograms for a variety of particles and attempting to reconstruct the known three-dimensional structure. It is found that poor longitudinal resolution strongly perturbs the reconstructed structure, yet the method does provide an approximate sense for the structure's longitudinal dimension. The method is then applied to laboratory measurements of holograms of single microparticles and their scattering patterns.
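The stacking idea can be sketched as follows, with binary silhouettes standing in for the reconstructed focus slices. In practice each slice comes from numerically refocusing the hologram at a different depth, and the autofocus and thresholding steps (omitted here) are where the real difficulty, and the poor longitudinal resolution noted above, enter.

```python
def stack_silhouettes(slices):
    """Stack per-depth binary silhouettes (1 = particle pixel) into a
    rough quasi-3-D occupancy volume, one boolean layer per focus depth."""
    return [[[bool(px) for px in row] for row in sl] for sl in slices]

def longitudinal_extent(volume):
    """Number of depth slices in which the particle appears at all: a
    crude estimate of longitudinal size, in units of the slice spacing."""
    return sum(1 for sl in volume if any(any(row) for row in sl))

# Three reconstruction depths; the particle shows up in two of them.
slices = [
    [[0, 0], [0, 0]],
    [[1, 1], [0, 1]],
    [[0, 1], [0, 0]],
]
volume = stack_silhouettes(slices)
```

Transverse size comes from pixel counts within a slice; the longitudinal estimate above is the one the paper finds to be strongly perturbed by the hologram's depth-of-field.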
Coherent Backscattering by Particulate Planetary Media of Nonspherical Particles
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Penttila, Antti; Wilkman, Olli; Videen, Gorden
2014-11-01
The so-called radiative-transfer coherent-backscattering method (RT-CB) has been put forward as a practical Monte Carlo method to compute multiple scattering in discrete random media mimicking planetary regoliths (K. Muinonen, Waves in Random Media 14, p. 365, 2004). In RT-CB, the interaction between the discrete scatterers takes place in the far-field approximation and the wave propagation faces exponential extinction. There is a significant constraint in the RT-CB method: it has to be assumed that the form of the scattering matrix is that of the spherical particle. We aim to extend the RT-CB method to nonspherical single particles showing significant depolarization characteristics. First, ensemble-averaged single-scattering albedos and phase matrices of nonspherical particles are matched using a phenomenological radiative-transfer model within a microscopic volume element. Second, the phenomenologial single-particle model is incorporated into the Monte Carlo RT-CB method. In the ray tracing, the electromagnetic phases within the microscopic volume elements are omitted as having negligible lengths, whereas the phases are duly accounted for in the paths between two or more microscopic volume elements. We assess the computational feasibility of the extended RT-CB method and show preliminary results for particulate media mimicking planetary regoliths. The present work can be utilized in the interpretation of astronomical observations of asteroids and other planetary objects. In particular, the work sheds light on the depolarization characteristics of planetary regoliths at small phase angles near opposition. The research has been partially funded by the ERC Advanced Grant No 320773 entitled “Scattering and Absorption of Electromagnetic Waves in Particulate Media” (SAEMPL), by the Academy of Finland (contract 257966), NASA Outer Planets Research Program (contract NNX10AP93G), and NASA Lunar Advanced Science and Exploration Research Program (contract NNX11AB25G).
NASA Astrophysics Data System (ADS)
Alpers, Andreas; Gritzmann, Peter
2018-03-01
We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time, the current positions of the points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g., in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.
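A much-simplified version of the tracking subproblem, assuming full 2-D positions are available rather than only a few x-ray projections (so the tomographic aspect, the paper's actual subject, is absent), is the assignment problem of matching points across frames. A brute-force minimum-cost matching sketch for small point sets:

```python
import itertools
import math

def track_frames(prev, curr):
    """Match particles between two frames by minimising the total squared
    displacement: brute-force assignment, fine for small point sets."""
    best, best_cost = None, math.inf
    for perm in itertools.permutations(range(len(curr))):
        cost = sum((prev[i][0] - curr[j][0]) ** 2 +
                   (prev[i][1] - curr[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

# Three particles drift slightly between frames; order is shuffled.
prev = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
curr = [(0.2, 5.1), (0.1, 0.1), (5.2, -0.1)]
match = track_frames(prev, curr)  # match[i] = index of prev[i] in curr
```

For larger point sets the same matching is solved in polynomial time with the Hungarian algorithm or as a min-cost flow, the constrained variants of which are exactly what the paper analyzes.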
DynamO: a free O(N) general event-driven molecular dynamics simulator.
Bannerman, M N; Sargant, R; Lue, L
2011-11-30
Molecular dynamics algorithms for systems of particles interacting through discrete or "hard" potentials are fundamentally different from the methods for continuous or "soft" potential systems. Although many software packages have been developed for continuous potential systems, software for discrete potential systems based on event-driven algorithms is relatively scarce and specialized. We present DynamO, a general event-driven simulation package, which displays the optimal O(N) asymptotic scaling of the computational cost with the number of particles N, rather than the O(N log N) scaling found in most standard algorithms. DynamO provides reference implementations of the best available event-driven algorithms. These techniques allow the rapid simulation of both complex and large (>10^6 particles) systems for long times. The performance of the program is benchmarked for elastic hard-sphere systems, homogeneous cooling and sheared inelastic hard spheres, and equilibrium Lennard-Jones fluids. This software and its documentation are distributed under the GNU General Public License and can be freely downloaded from http://marcusbannerman.co.uk/dynamo. Copyright © 2011 Wiley Periodicals, Inc.
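The elementary operation in an event-driven scheme of this kind is predicting the next collision time analytically rather than integrating equations of motion. A minimal sketch for a single pair of hard spheres in free flight (illustrative only, not DynamO's implementation):

```python
import math

def collision_time(r, v, sigma):
    """Earliest time at which two hard spheres collide, given their center
    separation vector r, relative velocity v, and contact distance sigma;
    returns None if they never touch."""
    b = sum(ri * vi for ri, vi in zip(r, v))      # r . v
    if b >= 0.0:                                  # already moving apart
        return None
    v2 = sum(vi * vi for vi in v)
    r2 = sum(ri * ri for ri in r)
    disc = b * b - v2 * (r2 - sigma * sigma)
    if disc < 0.0:                                # glancing miss
        return None
    return (-b - math.sqrt(disc)) / v2

# head-on approach: centers 3 units apart, closing speed 1, contact at 1
t = collision_time((3.0, 0.0, 0.0), (-1.0, 0.0, 0.0), 1.0)  # -> 2.0
```

A full event-driven code keeps such predicted times in a priority queue and jumps from event to event; the O(N) scaling claimed above comes from how that queue and the neighbor bookkeeping are organized.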
Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...
2017-08-12
For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution but under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster sizes and lower slip velocities with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fraction. Also, the cluster aspect ratio of DNS is smaller than those of CFD-DEM and TFM. Since a simple correction to the drag model can predict the correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradients and particle fluctuations may improve the current predictions of cluster distribution.
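For orientation on the magnitudes involved, the single-particle limit of the slip velocity follows from a drag/buoyant-weight balance. A sketch using Stokes drag (illustrative only; the drag corrections discussed in the abstract are voidage-dependent closures, not this isolated-particle limit, and the property values below are hypothetical):

```python
def stokes_slip_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal slip velocity of an isolated sphere: buoyant weight
    (rho_p - rho_f) * g * V balanced against Stokes drag 3 * pi * mu * d * v,
    giving v = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

# 100-micron particle in an air-like gas (hypothetical property values)
v_slip = stokes_slip_velocity(d=1e-4, rho_p=2500.0, rho_f=1.2, mu=1.8e-5)
```

In clustered gas-solids flows, the macro-scale slip velocity can exceed this single-particle estimate substantially, which is precisely why the drag closures compared in the study matter.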
Parallel multiscale simulations of a brain aneurysm
Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em
2012-01-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.
PMID:23734066
Landázuri, Andrea C.; Sáez, A. Eduardo; Anthony, T. Renée
2016-01-01
This work presents fluid flow and particle trajectory simulation studies to determine the aspiration efficiency of a horizontally oriented occupational air sampler using computational fluid dynamics (CFD). Grid adaption and manual scaling of the grids were applied to two sampler prototypes based on a 37-mm cassette. The standard k–ε model was used to simulate the turbulent air flow, and a second-order streamline-upwind discretization scheme was used to stabilize the convective terms of the Navier–Stokes equations. Successively scaled grids for each configuration were created manually and by means of grid adaption using the velocity gradient in the main flow direction. Solutions were verified to assess iterative convergence, grid independence and monotonic convergence. Particle aspiration efficiencies determined for the two prototype samplers were indistinguishable, indicating that the porous filter does not play a noticeable role in particle aspiration. The results show that grid adaption is a powerful tool for refining specific regions that require fine resolution, thereby better resolving flow detail. It was verified that adaptive grids provided a higher number of locations with monotonic convergence than the manual grids and required the least computational effort. PMID:26949268
NASA Astrophysics Data System (ADS)
Camporeale, E.; Delzanno, G. L.; Bergen, B. K.; Moulton, J. D.
2016-01-01
We describe a spectral method for the numerical solution of the Vlasov-Poisson system in which velocity space is decomposed by means of a Hermite basis and configuration space is discretized via a Fourier decomposition. The novelty of our approach is an implicit time discretization that allows exact conservation of charge, momentum and energy. The computational efficiency and cost-effectiveness of this method are compared to those of the fully implicit PIC method recently introduced by Markidis and Lapenta (2011) and Chen et al. (2011). The following examples are discussed: Langmuir wave, Landau damping, ion-acoustic wave, two-stream instability. The Fourier-Hermite spectral method can achieve solutions that are several orders of magnitude more accurate than PIC, at a fraction of the cost.
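The velocity-space decomposition mentioned above rests on the orthogonality of Hermite polynomials under a Gaussian (Maxwellian-like) weight. A small numerical check of that property (a generic sketch using the physicists' convention, not the authors' code):

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the recurrence
    H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def inner(m, n, pts=4000, half_width=8.0):
    """<H_m, H_n> under the Gaussian weight exp(-x^2), approximated by
    midpoint quadrature on [-half_width, half_width]."""
    dx = 2.0 * half_width / pts
    s = 0.0
    for i in range(pts):
        x = -half_width + (i + 0.5) * dx
        s += hermite(m, x) * hermite(n, x) * math.exp(-x * x) * dx
    return s

# orthogonality: <H_m, H_n> = sqrt(pi) * 2^n * n! * delta_mn
off_diag = inner(2, 3)                                   # ~0
diag = inner(2, 2) / (8.0 * math.sqrt(math.pi))          # ~1
```

A spectral Vlasov solver expands the distribution function in such a basis, so that moments like charge and momentum become exact combinations of the lowest coefficients.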
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm that mitigates the computational cost of pure DSMC in the near-continuum regime. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time and has a fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
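The DSMC/Fokker-Planck switch described above can be caricatured as a per-cell decision on the local Knudsen number. A trivial sketch (the threshold value and function name are assumptions for illustration, not the criterion used in the paper):

```python
def choose_operator(mean_free_path, cell_size, kn_switch=0.05):
    """Pick the per-cell collision operator: DSMC where the local Knudsen
    number is large (rarefied), Fokker-Planck where it is small
    (near-continuum). The threshold is an illustrative assumption."""
    kn_local = mean_free_path / cell_size
    return "DSMC" if kn_local > kn_switch else "Fokker-Planck"

op_rarefied = choose_operator(1e-3, 1e-4)   # Kn = 10   -> "DSMC"
op_dense = choose_operator(1e-6, 1e-3)      # Kn = 1e-3 -> "Fokker-Planck"
```

The payoff is that cells in the near-continuum regime, where DSMC would have to process enormous numbers of collisions, instead pay a fixed per-particle cost for the Fokker-Planck update.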
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume and the velocity at each of a plurality of particles representing the frangible material. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.
Studies of Particle Packings in Mixtures of Pharmaceutical Excipients
NASA Astrophysics Data System (ADS)
Bentham, Craig; Dutt, Meenakshi; Hancock, Bruno; Elliott, James
2005-03-01
Pharmaceutical powder blends used to generate tablets are complex multicomponent mixtures of the drug powder and excipients, which facilitate the delivery of the required drug. The individual constituents of these blends can be noncohesive and cohesive powders. We study the geometric and mechanical characteristics of idealized mixtures of excipient particle packings, for a small but representative number of dry noncohesive particles, generated via gravitational compaction followed by uniaxial compaction. We discuss particle packings in 2- and 3-component mixtures of microcrystalline cellulose (MCC) & lactose and MCC, starch & lactose, respectively. We have computed the evolution of the force and stress distributions in monodisperse and polydisperse mixtures composed of equal parts of each excipient; comparisons are made with results for particle packings of pure blends of MCC and lactose. We also compute the stress-strain relations for these mixtures. In order to obtain insight into the details of the particle packings, we calculate the coordination number, packing fraction, radial distribution functions and contact angle distributions for the various mixtures. The numerical experiments have been performed on spheroidal idealizations of the excipient grains using Discrete Element Method simulations (Dutt et al., 2004, to be published).
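Two of the packing diagnostics named above, coordination number and packing fraction, are straightforward to compute from particle centers. A sketch for equal spheres (hypothetical configuration; the study itself uses spheroidal DEM grains):

```python
import math

def packing_stats(centers, radius, box):
    """Mean coordination number (touching neighbors per sphere, with a
    1% tolerance on the contact distance) and packing fraction for
    equal spheres in a rectangular box."""
    n = len(centers)
    contacts = 0
    touch = 1.01 * 2.0 * radius
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centers[i], centers[j]) <= touch:
                contacts += 1
    phi = n * (4.0 / 3.0) * math.pi * radius ** 3 / (box[0] * box[1] * box[2])
    return 2.0 * contacts / n, phi

# 8 touching unit-diameter spheres on a simple cubic lattice in a 2x2x2 box
centers = [(x, y, z) for x in (0.5, 1.5) for y in (0.5, 1.5) for z in (0.5, 1.5)]
z_mean, phi = packing_stats(centers, 0.5, (2.0, 2.0, 2.0))
# z_mean -> 3.0 (each corner sphere touches 3 others), phi -> pi/6 ~ 0.524
```

Real DEM packings of frictional, polydisperse grains give denser, less regular structures, which is what the radial distribution functions in the abstract probe.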
Modeling and Simulation of Cardiogenic Embolic Particle Transport to the Brain
NASA Astrophysics Data System (ADS)
Mukherjee, Debanjan; Jani, Neel; Shadden, Shawn C.
2015-11-01
Emboli are aggregates of cells, proteins, or fatty material that travel along arteries distal to the point of their origin and can potentially block blood flow to the brain, causing stroke. This is a prominent mechanism of stroke, accounting for about a third of all cases, with the heart being a prominent source of these emboli. This work presents our investigations towards developing numerical simulation frameworks for modeling the transport of embolic particles originating from the heart along the major arteries supplying the brain. The simulations are based on combining a discrete particle method with image-based computational fluid dynamics. Simulations of unsteady, pulsatile hemodynamics and embolic particle transport within patient-specific geometries, with physiological boundary conditions, are presented. The analysis focuses on elucidating the distribution of particles, the transport of particles in the head across the major cerebral arteries connected at the Circle of Willis, the role of hemodynamic variables on the particle trajectories, and the effect of considering one-way vs. two-way coupling methods for the particle-fluid momentum exchange. These investigations are aimed at advancing our understanding of embolic stroke using computational fluid dynamics techniques. This research was supported by the American Heart Association grant titled ``Embolic Stroke: Anatomic and Physiologic Insights from Image-Based CFD.''
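The one-way coupling mentioned above amounts to advecting particles through a prescribed flow field with a drag relaxation term. A minimal sketch (explicit Euler with a uniform fluid stream; all names and values are illustrative, not the study's solver):

```python
def advect_particle(pos, vel, fluid_vel, tau, dt, steps):
    """One-way coupled drag: dv/dt = (u_fluid - v) / tau, explicit Euler.
    The fluid field is prescribed and unaffected by the particle, which is
    what distinguishes one-way from two-way momentum coupling."""
    for _ in range(steps):
        u = fluid_vel(pos)
        vel = [v + dt * (uf - v) / tau for v, uf in zip(vel, u)]
        pos = [p + dt * v for p, v in zip(pos, vel)]
    return pos, vel

# a particle released at rest in a uniform unit stream relaxes to the
# fluid velocity over a few response times tau
pos, vel = advect_particle([0.0, 0.0], [0.0, 0.0],
                           lambda p: [1.0, 0.0], tau=0.1, dt=0.01, steps=200)
```

In a two-way coupled scheme the momentum extracted by the particle would be fed back into the fluid equations, which matters when emboli are large relative to the vessel.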
NASA Technical Reports Server (NTRS)
Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.
2013-01-01
The extended boundary condition method (EBCM) and invariant imbedding method (IIM) are two fundamentally different T-matrix methods for the solution of light scattering by nonspherical particles. The standard EBCM is very efficient but encounters a loss of precision when the particle size is large, the maximum size being sensitive to the particle aspect ratio. The IIM can be applied to particles in a relatively large size parameter range but requires extensive computational time due to the number of spherical layers in the particle volume discretization. A numerical combination of the EBCM and the IIM (hereafter, the EBCM+IIM) is proposed to overcome the aforementioned disadvantages of each method. Even though the EBCM can fail to obtain the T-matrix of a considered particle, it is valuable for decreasing the computational domain (i.e., the number of spherical layers) of the IIM by providing the initial T-matrix associated with an iterative procedure in the IIM. The EBCM+IIM is demonstrated to be more efficient than the IIM in obtaining the optical properties of large size parameter particles beyond the convergence limit of the EBCM. The numerical performance of the EBCM+IIM is illustrated through representative calculations in spheroidal and cylindrical particle cases.
Numerical study of particle deposition and scaling in dust exhaust of cyclone separator
NASA Astrophysics Data System (ADS)
Xu, W. W.; Li, Q.; Zhao, Y. L.; Wang, J. J.; Jin, Y. H.
2016-05-01
The accumulation of solid particles in the dust exhaust cone area of a cyclone separator can cause wall wear, which prevents the flue gas turbine from long-term, safe operation. It is therefore important to study the mechanism by which particles deposit and form scale in the dust exhaust cone area of the cyclone separator. Numerical simulations of the gas-solid flow field have been carried out in a single tube of the third cyclone separator. Three-dimensionally coupled computational fluid dynamics (CFD) technology and a modified Discrete Phase Model (DPM) are adopted to model the gas-solid two-phase flow. The results show that the probability of particle sticking near the cone area rises as the operating temperature and processing capacity increase, and the sticking rate decreases as the particle diameter increases.
Multiple scattering in planetary regoliths using first-order incoherent interactions
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti
2017-10-01
We consider scattering of light by a planetary regolith modeled using discrete random media of spherical particles. The size of the random medium can range from microscopic sizes of a few wavelengths to macroscopic sizes approaching infinity. The size of the particles is assumed to be of the order of the wavelength. We extend the numerical Monte Carlo method of radiative transfer and coherent backscattering (RT-CB) to the case of dense packing of particles. We adopt the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input for the RT-CB. The volume element must be larger than the wavelength but smaller than the mean free path length of incoherent extinction. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path, and utilize the reciprocity of electromagnetic waves to verify the computation. We illustrate the incoherent volume-element scattering characteristics and compare the dense-medium RT-CB to asymptotically exact results computed using the superposition T-matrix method (STMM). We show that the dense-medium RT-CB compares favorably to the STMM results for the current cases of sparse and dense discrete random media studied. The novel method can be applied in modeling light scattering by the surfaces of asteroids and other airless solar system objects, including UV-Vis-NIR spectroscopy, photometry, polarimetry, and radar scattering problems.
Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.
NASA Astrophysics Data System (ADS)
Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.
2017-07-01
Due to dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of scattering systems with different complex refractive indexes are obtained by both an electromagnetic method and a radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to consider dependent scattering effects. The results show that for densely packed discrete random media composed of particles of medium size parameter (6.964 in this study), the demarcation line between independent and dependent scattering depends markedly on the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with higher refractive index contrasts between the particles and the host medium and higher particle absorption indexes are more likely to show stronger dependent-scattering characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has a weak effect on the dependent scattering correction at large phase shift parameters.
Fish Passage through Hydropower Turbines: Simulating Blade Strike using the Discrete Element Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Romero Gomez, Pedro DJ
Among the hazardous hydraulic conditions affecting anadromous and resident fish during their passage through turbine flows, two are believed to cause considerable injury and mortality: collision on moving blades and decompression. Several methods are currently available to evaluate these stressors in installed turbines, i.e., using live fish or autonomous sensor devices, and in reduced-scale physical models, i.e., registering collisions from plastic beads. However, a priori estimates with computational modeling approaches applied early in the process of turbine design can facilitate the development of fish-friendly turbines. In the present study, we evaluated the frequency of blade strike and the nadir pressure environment by modeling potential fish trajectories with the Discrete Element Method (DEM) applied to fish-like composite particles. In the DEM approach, particles are subjected to realistic hydraulic conditions simulated with computational fluid dynamics (CFD), and particle-structure interactions (representing fish collisions with turbine blades) are explicitly recorded and accounted for in the calculation of particle trajectories. We conducted transient CFD simulations by setting the runner in motion and allowing for better turbulence resolution, a modeling improvement over the conventional practice of simulating the system in steady state, which was also done here. While both schemes yielded comparable bulk hydraulic performance, transient conditions exhibited a visual improvement in describing flow variability. We released streamtraces (steady flow solution) and DEM particles (transient solution) at the same location from which sensor fish (SF) have been released in field studies of the modeled turbine unit. The streamtrace-based results showed a better agreement with SF data than the DEM-based nadir pressures did, because the former accounted for the turbulent dispersion at the intake but the latter did not. However, the DEM-based strike frequency is more representative of blade-strike probability than the steady solution, mainly because DEM particles accounted for the full fish length, thus resolving (instead of modeling) the collision event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.
2017-05-01
The presence of fault gouge has considerable influence on the slip properties of tectonic faults and the physics of earthquake rupture. The presence of fluids within faults also plays a significant role in faulting and earthquake processes. In this study, we present 3-D discrete element simulations of dry and fluid-saturated granular fault gouge and analyze the effect of fluids on stick-slip behavior. Fluid flow is modeled using computational fluid dynamics based on the Navier-Stokes equations for an incompressible fluid, modified to take into account the presence of particles. Analysis of a long time train of slip events shows that the (1) drop in shear stress, (2) compaction of the granular layer, and (3) kinetic energy release during slip all increase in magnitude in the presence of an incompressible fluid, compared to dry conditions. We also observe that, on average, the recurrence interval between slip events is longer for fluid-saturated granular fault gouge than for the dry case. This observation is consistent with the occurrence of larger events in the presence of fluid. It is found that the increase in kinetic energy during slip events under saturated conditions can be attributed to the increased fluid flow during slip. Finally, our observations emphasize the important role that fluid flow and fluid-particle interactions play in tectonic fault zones and show in particular how discrete element method (DEM) models can help understand the hydromechanical processes that dictate fault slip.
A FFT-based formulation for discrete dislocation dynamics in heterogeneous media
NASA Astrophysics Data System (ADS)
Bertin, N.; Capolungo, L.
2018-02-01
In this paper, an extension of the DDD-FFT approach presented in [1] is developed for heterogeneous elasticity. To this end, an iterative spectral formulation in which convolutions are calculated in Fourier space is developed to solve for the mechanical state associated with the discrete eigenstrain-based microstructural representation. With this, the heterogeneous DDD-FFT approach is capable of treating anisotropic and heterogeneous elasticity in a computationally efficient manner. In addition, a GPU implementation is presented to allow for further acceleration. As a first example, the approach is used to investigate the interaction between dislocations and second-phase particles, thereby demonstrating its ability to inherently incorporate image forces arising from elastic inhomogeneities.
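The core of any FFT-based spectral scheme of this kind is the convolution theorem: a real-space convolution becomes a pointwise product in Fourier space. A tiny self-contained illustration with a naive DFT standing in for the FFT (a generic sketch, not the DDD-FFT code):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform; an FFT computes the
    same thing in O(N log N) and would be used in any real solver."""
    n = len(x)
    sgn = 1j if inverse else -1j
    out = [sum(x[k] * cmath.exp(sgn * 2.0 * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(a, b):
    """Convolution theorem: conv(a, b) = IDFT(DFT(a) * DFT(b))."""
    fa, fb = dft(a), dft(b)
    return [v.real for v in dft([x * y for x, y in zip(fa, fb)], inverse=True)]

c = circular_convolve([1.0, 2.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0])
# -> approximately [1, 3, 2, 0]
```

In the DDD-FFT setting the role of one factor is played by the eigenstrain field and the other by a Green's-operator kernel, so each iteration costs a handful of FFTs rather than a dense real-space convolution.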
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
NASA Astrophysics Data System (ADS)
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty severely limits the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of the different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration, while for short-range lubrication forces, the velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively until convergence is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can easily be applied to other simulation techniques employed for particulate suspensions.
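The implicit pairwise update described above can be illustrated on a toy system: backward-Euler treatment of a stiff pairwise drag force, relaxed by Gauss-Seidel iteration. A sketch with unit masses and scalar velocities (the sweep below runs over particles rather than pairs, which reaches the same fixed point; all names are hypothetical):

```python
def implicit_pair_sweep(v_old, pairs, gamma, dt, sweeps=200, tol=1e-12):
    """Backward-Euler update for pairwise drag F_ij = -gamma * (v_i - v_j)
    with unit masses, solved by Gauss-Seidel sweeps until updates stall.
    Stable for arbitrarily stiff gamma, unlike an explicit step."""
    n = len(v_old)
    nbrs = [[] for _ in range(n)]
    for a, b in pairs:
        nbrs[a].append(b)
        nbrs[b].append(a)
    v = list(v_old)
    for _ in range(sweeps):
        max_change = 0.0
        for i in range(n):
            new = (v_old[i] + dt * gamma * sum(v[j] for j in nbrs[i])) / \
                  (1.0 + dt * gamma * len(nbrs[i]))
            max_change = max(max_change, abs(new - v[i]))
            v[i] = new
        if max_change < tol:
            break
    return v

# two approaching particles: total momentum is conserved while the stiff
# drag damps their relative velocity in a single stable implicit step
v = implicit_pair_sweep([1.0, -1.0], [(0, 1)], gamma=10.0, dt=0.1)
```

With dt * gamma = 1 the exact implicit solution is v = [1/3, -1/3]: the relative velocity is damped by a factor of three while the pair's momentum stays exactly zero, which is the property the splitting scheme exploits.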
Theocharis, G; Boechler, N; Kevrekidis, P G; Job, S; Porter, Mason A; Daraio, C
2010-11-01
We present a systematic study of the existence and stability of discrete breathers that are spatially localized in the bulk of a one-dimensional chain of compressed elastic beads that interact via Hertzian contact. The chain is diatomic, consisting of a periodic arrangement of heavy and light spherical particles. We examine two families of discrete gap breathers: (1) an unstable discrete gap breather that is centered on a heavy particle and characterized by a symmetric spatial energy profile and (2) a potentially stable discrete gap breather that is centered on a light particle and is characterized by an asymmetric spatial energy profile. We investigate their existence, structure, and stability throughout the band gap of the linear spectrum and classify them into four regimes: a regime near the lower optical band edge of the linear spectrum, a moderately discrete regime, a strongly discrete regime that lies deep within the band gap of the linearized version of the system, and a regime near the upper acoustic band edge. We contrast discrete breathers in anharmonic Fermi-Pasta-Ulam (FPU)-type diatomic chains with those in diatomic granular crystals, which have a tensionless interaction potential between adjacent particles, and note that the asymmetric nature of the tensionless interaction potential can lead to hybrid bulk-surface localized solutions.
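The tensionless Hertzian contact law described above can be written down in a few lines. This is a minimal illustrative sketch, not the authors' model code: the contact stiffness `A`, static overlap `delta0`, and fixed-end boundary treatment are placeholder assumptions (a diatomic chain is obtained simply by alternating the entries of `masses`).

```python
import numpy as np

def hertz_accelerations(u, delta0, A, masses):
    """Accelerations in a 1-D precompressed granular chain with tensionless
    Hertzian contacts: f = A * max(overlap, 0)**1.5 at each contact.
    End beads are held fixed (walls)."""
    overlap = delta0 + u[:-1] - u[1:]              # overlap at each contact
    f = A * np.where(overlap > 0.0, overlap, 0.0) ** 1.5
    acc = np.zeros_like(u)
    acc[1:-1] = (f[:-1] - f[1:]) / masses[1:-1]    # net force on interior beads
    return acc
```

The `max(., 0)` bracket is the tensionless feature the abstract highlights: a contact that opens beyond the static overlap transmits no force, which is what permits the asymmetric and hybrid bulk-surface localized solutions.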
2014-01-01
vehicles/structures; in the work of Bergeron et al. (2002), an instrumented ballistic pendulum was utilized to investigate mine detonation-induced...element/discrete-particle computational analysis in order to investigate potential benefits and drawbacks associated with material substitution (from steel to composite) in military-vehicle hull-floors whose primary
Actinide migration in Johnston Atoll soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, S. F.; Bates, J. K.; Buck, E. C.
1997-02-01
Characterization of the actinide content of a sample of contaminated coral soil from Johnston Atoll, the site of three non-nuclear destructs of nuclear warhead-carrying THOR missiles in 1962, revealed that >99% of the total actinide content is associated with discrete bomb fragments. After removal of these fragments, there was an inverse correlation between actinide content and soil particle size in particles from 43 to 0.4 µm diameter. Detailed analyses of this remaining soil revealed no discrete actinide phase in these soil particles, despite measurable actinide content. Observations indicate that exposure to the environment has caused the conversion of relatively insoluble actinide oxides to the more soluble actinyl oxides and actinyl carbonate coordinated complexes. This process has led to dissolution of actinides from discrete particles and migration to the surrounding soil surfaces, resulting in a dispersion greater than would be expected by physical transport of discrete particles alone.
Transport and discrete particle noise in gyrokinetic simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Lee, W. W.
2006-10-01
We present results from our recent investigations regarding the effects of discrete particle noise on the long-time behavior and transport properties of gyrokinetic particle-in-cell simulations. It is found that the amplitude of nonlinearly saturated drift waves is unaffected by discreteness-induced noise in plasmas whose behavior is dominated by a single mode in the saturated state. We further show that the scaling of this noise amplitude with particle count is correctly predicted by the fluctuation-dissipation theorem, even though the drift waves have driven the plasma from thermal equilibrium. We also find that the long-term behavior of the saturated system is unaffected by discreteness-induced noise even when multiple modes are included. Additional work utilizing a code with both total-f and δf capabilities is also presented, as part of our efforts to better understand the long-time balance between entropy production, collisional dissipation, and particle/heat flux in gyrokinetic plasmas.
NASA Astrophysics Data System (ADS)
Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng
2018-02-01
Space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than the classical models of integer-order derivative. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, it is very challenging to deal with fractional models, and few numerical methods have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by the moving least-square (MLS) approximation. By the Galerkin weak form, the energy functional is formulated. Employing the energy functional minimization procedure, the final algebraic equation system is obtained. The Riemann-Liouville operator is discretized by the Grünwald formula. With the center difference method, the EFG method and the Grünwald formula, the fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are compared with exact solutions and with available results from other well-known methods, and are presented in tables and graphs. The presented results demonstrate the validity, efficiency and accuracy of the proposed techniques. Furthermore, the error is computed and the proposed method has reasonable convergence rates in the spatial and temporal discretizations.
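The Grünwald discretization of the Riemann-Liouville operator mentioned above can be sketched as follows; this is the generic left-sided Grünwald-Letnikov formula (without the shift sometimes used for stability), not the authors' EFG implementation:

```python
def gl_weights(alpha, n):
    # Grünwald-Letnikov weights g_k = (-1)^k * C(alpha, k), built recursively
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_fractional_derivative(f_vals, alpha, h):
    # D^alpha f(x_i) ~ h**(-alpha) * sum_k g_k * f(x_{i-k})   (left-sided GL)
    w = gl_weights(alpha, len(f_vals))
    return [sum(w[k] * f_vals[i - k] for k in range(i + 1)) / h ** alpha
            for i in range(len(f_vals))]
```

For alpha = 1 the weights collapse to the backward difference (1, -1, 0, ...), a useful sanity check on any implementation.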
New formulation of the discrete element method
NASA Astrophysics Data System (ADS)
Rojek, Jerzy; Zubelewicz, Aleksander; Madan, Nikhil; Nosewicz, Szymon
2018-01-01
A new original formulation of the discrete element method based on the soft contact approach is presented in this work. The standard DEM has been enhanced by the introduction of an additional (global) deformation mode caused by the stresses in the particles induced by the contact forces. Uniform stresses and strains are assumed for each particle. The stresses are calculated from the contact forces. The strains are obtained using an inverse constitutive relationship. The strains allow us to obtain deformed particle shapes. The deformed shapes (ellipses) are taken into account in contact detection and evaluation of the contact forces. A simple example of uniaxial compression of a rectangular specimen, discretized with equal-sized particles, is simulated to verify the DDEM algorithm. The numerical example shows that particle deformation changes the particle interaction and the distribution of forces in the discrete element assembly. A quantitative study of micro-macro elastic properties proves the enhanced capabilities of the DDEM as compared to the standard DEM.
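The two ingredients of the DDEM deformation mode, stress from contact forces and strain from an inverse constitutive relation, can be sketched in 2-D. This is an illustrative reconstruction under plane-stress isotropic elasticity; the Love-Weber-type averaging formula and the parameters `E`, `nu` are assumptions, not the paper's exact relations:

```python
import numpy as np

def particle_stress(forces, points, center, volume):
    """Average stress in one particle from its contact forces (moment sum)."""
    sigma = np.zeros((2, 2))
    for f, p in zip(forces, points):
        sigma += np.outer(f, p - center)   # force (x) branch-vector dyad
    return sigma / volume

def strain_from_stress(sigma, E, nu):
    """Inverse constitutive relation (isotropic, plane stress)."""
    eps = np.zeros((2, 2))
    eps[0, 0] = (sigma[0, 0] - nu * sigma[1, 1]) / E
    eps[1, 1] = (sigma[1, 1] - nu * sigma[0, 0]) / E
    eps[0, 1] = eps[1, 0] = (1.0 + nu) * sigma[0, 1] / E
    return eps
```

The principal strains would then scale the particle radius into the ellipse used for contact detection, as in the paper.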
Electrolytic plating apparatus for discrete microsized particles
Mayer, Anton
1976-11-30
Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with a powered cathode for a time sufficient for such to occur.
Electroless plating apparatus for discrete microsized particles
Mayer, Anton
1978-01-01
Method and apparatus are disclosed for producing very uniform coatings of a desired material on discrete microsized particles by electroless techniques. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with each other for a time sufficient for such to occur.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g., rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation of SPH and DEM utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time among MPI processes as the nonlinear residual of the parallel domain decomposition and minimizes it with a Newton-like iteration. To perform flexible domain decomposition in space, the slice-grid algorithm is used.
Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g., boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (the K computer, Earth Simulator 3, etc.).
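The slice-grid decomposition can be illustrated with a one-dimensional sketch. As a simplifying assumption, per-particle costs are taken as known and slice boundaries are placed directly on the cumulative-cost curve; the paper instead measures per-process execution times and drives the imbalance to zero with a Newton-like iteration:

```python
import numpy as np

def balance_slices(x, cost, nslices):
    """Place slice boundaries along x so each slice carries ~equal total cost."""
    order = np.argsort(x)
    csum = np.cumsum(cost[order])                     # cumulative work along x
    targets = csum[-1] * np.arange(1, nslices) / nslices
    idx = np.searchsorted(csum, targets)              # cut where shares are met
    return x[order][idx]                              # nslices-1 cut coordinates
```

Boundaries shift toward regions of expensive particles, which is the behavior a dynamic load balancer must reproduce each time the particle distribution evolves.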
Boundary particle method for Laplace transformed time fractional diffusion equations
NASA Astrophysics Data System (ADS)
Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian
2013-02-01
This paper develops a novel boundary meshless approach, Laplace transformed boundary particle method (LTBPM), for numerical modeling of time fractional diffusion equations. It implements Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike the other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into the higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of time fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces Laplace transform and Stehfest NILT algorithm to deal with time fractional derivative term, which evades costly convolution integral calculation in time fractional derivation approximation and avoids the effect of time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time fractional diffusion equations.
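The Stehfest NILT step can be sketched as follows (a standard textbook form of the Gaver-Stehfest algorithm; the even term count `N = 12` is an illustrative choice, not taken from the paper):

```python
from math import factorial, log

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace transform F(s) at time t (N must be even)."""
    h = N // 2

    def V(k):
        # Stehfest coefficient V_k
        s = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            s += (j ** h * factorial(2 * j)) / (
                factorial(h - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        return (-1) ** (k + h) * s

    a = log(2.0) / t
    return a * sum(V(k) * F(k * a) for k in range(1, N + 1))
```

The algorithm is accurate for smooth, non-oscillatory transforms but is sensitive to rounding as N grows, so N between 10 and 16 is typical in double precision.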
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Liqiang; Gao, Xi; Li, Tingwen
For a long time, salt tracers have been used to measure the residence time distribution (RTD) of fluidized catalytic cracking (FCC) particles. However, due to limitations in experimental measurements and simulation methods, the ability of salt tracers to faithfully represent RTDs has never been directly investigated. Our current simulation results using coarse-grained computational fluid dynamics coupled with the discrete element method (CFD-DEM) with filtered drag models show that the residence time of salt tracers with the same terminal velocity as FCC particles is slightly larger than that of FCC particles. This research also demonstrates the ability of filtered drag models to predict the correct RTD curve for FCC particles, while the homogeneous drag model may only be used in the dilute riser flow of Geldart type B particles. The RTD of large-scale reactors can then be efficiently investigated with our proposed numerical method as well as by using the old-fashioned salt tracer technology.
On the micromechanics of slip events in sheared, fluid-saturated fault gouge
NASA Astrophysics Data System (ADS)
Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.; Marone, Chris; Carmeliet, Jan
2017-06-01
We used a three-dimensional discrete element method coupled with computational fluid dynamics to study the poromechanical properties of dry and fluid-saturated granular fault gouge. The granular layer was sheared under dry conditions to establish a steady state condition of stick-slip dynamic failure, and then fluid was introduced to study its effect on subsequent failure events. The fluid-saturated case showed increased stick-slip recurrence time and larger slip events compared to the dry case. Particle motion induces fluid flow with local pressure variation, which in turn leads to high particle kinetic energy during slip due to increased drag forces from fluid on particles. The presence of fluid during the stick phase of loading promotes a more stable configuration evidenced by higher particle coordination number. Our coupled fluid-particle simulations provide grain-scale information that improves understanding of slip instabilities and illuminates details of phenomenological, macroscale observations.
The persistent cosmic web and its filamentary structure - I. Theory and implementation
NASA Astrophysics Data System (ADS)
Sousbie, T.
2011-06-01
We present DisPerSE, a novel approach to the coherent multiscale identification of all types of astrophysical structures, in particular the filaments, in the large-scale distribution of the matter in the Universe. This method and the corresponding piece of software allow for a genuinely scale-free and parameter-free identification of the voids, walls, filaments, clusters and their configuration within the cosmic web, directly from the discrete distribution of particles in N-body simulations or galaxies in sparse observational catalogues. To achieve that goal, the method works directly over the Delaunay tessellation of the discrete sample and uses the Delaunay tessellation field estimator density computed at each tracer particle; no further sampling, smoothing or processing of the density field is required. The idea is based on recent advances in distinct subdomains of computational topology, namely discrete Morse theory, which allows for a rigorous application of topological principles to astrophysical data sets, and the theory of persistence, which allows us to consistently account for the intrinsic uncertainty and Poisson noise within data sets. Practically, the user can define a given persistence level in terms of robustness with respect to noise (defined as a 'number of σ') and the algorithm returns the structures with the corresponding significance as sets of critical points, lines, surfaces and volumes corresponding to the clusters, filaments, walls and voids - filaments, connected at cluster nodes, crawling along the edges of walls bounding the voids. From a geometrical point of view, the method is also interesting as it allows for a robust quantification of the topological properties of a discrete distribution in terms of Betti numbers or Euler characteristics, without having to resort to smoothing or having to define a particular scale.
In this paper, we introduce the necessary mathematical background and describe the method and implementation, while we address the application to 3D simulated and observed data sets in the companion paper (Sousbie, Pichon & Kawahara, Paper II).
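The role persistence plays in DisPerSE's noise filtering is easiest to see in one dimension. The sketch below computes 0-dimensional sublevel-set persistence pairs of a sampled function with a union-find; it is a didactic stand-in, not the DisPerSE implementation, which applies discrete Morse theory on the Delaunay tessellation:

```python
def persistence_pairs(values):
    """0-dimensional sublevel-set persistence pairs of a 1-D sequence.

    Components are born at local minima and die (elder rule) when a
    saddle value merges a younger component into an older one.
    """
    n = len(values)
    parent = [None] * n                 # None = not yet entered the filtration

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: values[k]):
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1)
                 if 0 <= j < n and parent[j] is not None}
        if roots:
            keep = min(roots, key=lambda r: values[r])   # elder rule
            for r in roots:
                if r != keep:
                    pairs.append((values[r], values[i]))  # younger one dies
                parent[r] = keep
            parent[i] = keep
    return pairs
```

Thresholding the pairs by their persistence (death minus birth) is the 'number of σ' filtering step in miniature: low-persistence pairs are discarded as Poisson noise, high-persistence ones are kept as real structure.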
Prediction of Fracture Behavior in Rock and Rock-like Materials Using Discrete Element Models
NASA Astrophysics Data System (ADS)
Katsaga, T.; Young, P.
2009-05-01
The study of fracture initiation and propagation in heterogeneous materials such as rock and rock-like materials is of principal interest in the field of rock mechanics and rock engineering, and is crucial for failure prediction and safety assessment in civil and mining structures. Our work offers a practical approach to predicting fracture behaviour using discrete element models. In this approach, the microstructures of materials are represented through combinations of clusters of bonded particles with different inter-cluster particle and bond properties and intra-cluster bond properties. The geometry of the clusters is transferred from information available in thin sections, computed tomography (CT) images and other visual presentations of the modelled material, using a customized AutoCAD built-in dialog-based Visual Basic application. Exact microstructures of the tested sample, including fractures, faults, inclusions and void spaces, can be duplicated in the discrete element models. Although the microstructural fabrics of rocks and rock-like structures may differ in scale, fracture formation and propagation through these materials are alike and follow similar mechanics. Synthetic material provides an excellent condition for validating the modelling approaches, as fracture behaviours are known for the composite's well-defined properties. Calibration of the macro-properties of the matrix material and inclusions (aggregates) was followed by calibration of the overall mechanical response through adjustment of the interfacial properties. The discrete element model predicted fracture propagation features and paths similar to those of the real sample material. The paths of the fractures and the matrix-inclusion interaction were compared using computed tomography images. Fracture initiation and formation in the model and the real material were compared using acoustic emission data.
Analysing the temporal and spatial evolution of AE events, collected during the sample testing, in relation to the CT images allows the precise reconstruction of the failure sequence. Our proposed modelling approach illustrates realistic fracture formation and growth predictions at different loading conditions.
On the use of reverse Brownian motion to accelerate hybrid simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu
Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
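The flavor of a particle-based Monte Carlo solver for a diffusion problem with Dirichlet data can be conveyed by a one-dimensional sketch. This is a plain walk-to-the-boundary estimator, not the authors' reverse Brownian motion scheme; the lattice, boundary values, and walker count are all illustrative:

```python
import random

def walk_estimate(x0, nx, g_left, g_right, n_walkers, seed=0):
    """Estimate u(x0) for the 1-D Laplace problem u'' = 0 on [0, nx] with
    Dirichlet data u(0) = g_left, u(nx) = g_right, via random walks:
    u(x0) = E[g(exit point)]  (discrete Feynman-Kac)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = x0
        while 0 < x < nx:                 # walk until a boundary is hit
            x += rng.choice((-1, 1))
        total += g_left if x == 0 else g_right
    return total / n_walkers
```

Each walker is independent, which is the sense in which such Monte Carlo boundary-value estimators are trivially parallelizable.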
NASA Astrophysics Data System (ADS)
Ghasemi, A.; Borhani, S.; Viparelli, E.; Hill, K. M.
2017-12-01
The Exner equation provides a formal mathematical link between sediment transport and bed morphology. It is typically represented in a discrete formulation where there is a sharp geometric interface between the bedload layer and the bed, below which no particles are entrained. For high temporally and spatially resolved models, this is strictly correct, but typically this is applied in such a way that spatial and temporal fluctuations in the bed surface (bedforms and otherwise) are not captured. This limits the extent to which the exchange between particles in transport and the sediment bed is properly represented, particularly problematic for mixed grain size distributions that exhibit segregation. Nearly two decades ago, Parker (2000) provided a framework for a solution to this dilemma in the form of a probabilistic Exner equation, partially experimentally validated by Wong et al. (2007). We present a computational study designed to develop a physics-based framework for understanding the interplay between physical parameters of the bed and flow and parameters in the Parker (2000) probabilistic formulation. To do so we use Discrete Element Method simulations to relate local time-varying parameters to long-term macroscopic parameters. These include relating local grain size distribution and particle entrainment and deposition rates to long-term average bed shear stress and the standard deviation of bed height variations. While relatively simple, these simulations reproduce long-accepted empirically determined transport behaviors such as the Meyer-Peter and Muller (1948) relationship. We also find that these simulations reproduce statistical relationships proposed by Wong et al. (2007) such as a Gaussian distribution of bed heights whose standard deviation increases with increasing bed shear stress. We demonstrate how the ensuing probabilistic formulations provide insight into the transport and deposition of both narrow and wide grain size distributions.
Vassal, J-P; Orgéas, L; Favier, D; Auriault, J-L; Le Corre, S
2008-01-01
Many analytical and numerical works have been devoted to the prediction of macroscopic effective transport properties in particulate media. Usually, the structure and properties of macroscopic balance and constitutive equations are stated a priori. In this paper, the upscaling of the transient diffusion equations in concentrated particulate media with possible particle-particle interfacial barriers, highly conductive particles, poorly conductive matrix, and temperature-dependent physical properties is revisited using the homogenization method based on multiple scale asymptotic expansions. This method uses no a priori assumptions on the physics at the macroscale. For the considered physics and microstructures, and depending on the order of magnitude of dimensionless Biot and Fourier numbers, it is shown that some situations cannot be homogenized. For other situations, three different macroscopic models are identified, depending on the quality of particle-particle contacts. They are one-phase media, following the standard heat equation and Fourier's law. Calculations of the effective conductivity tensor and heat capacity are proved to be uncoupled. Linear and steady-state continuous localization problems must be solved on representative elementary volumes to compute the effective conductivity tensors for the first two models. For the third model, i.e., for highly resistive contacts, the localization problem becomes simpler and discrete whatever the shape of particles. In paper II [Vassal, Phys. Rev. E 77, 011303 (2008)], diffusion through networks of slender, wavy, entangled, and oriented fibers is considered. Discrete localization problems can then be obtained for all models, as well as semianalytical or fully analytical expressions of the corresponding effective conductivity tensors.
Pourmehran, Oveis; Gorji, Tahereh B; Gorji-Bandpy, Mofid
2016-10-01
Magnetic drug targeting (MDT) is a local drug delivery system which aims to concentrate a pharmacological agent at its site of action in order to minimize undesired side effects due to systemic distribution in the organism. Using magnetic drug particles under the influence of an external magnetic field, the drug particles are navigated toward the target region. Herein, computational fluid dynamics was used to simulate the air flow and magnetic particle deposition in a realistic human airway geometry obtained from CT scan images. Using discrete phase modeling and one-way coupling of particle-fluid phases, a Lagrangian approach for particle tracking in the presence of an external non-uniform magnetic field was applied. Polystyrene (PMS40) particles were utilized as the magnetic drug carrier. A parametric study was conducted, and the influence of particle diameter, magnetic source position, magnetic field strength and inhalation condition on the particle transport pattern and deposition efficiency (DE) was reported. Overall, the results show considerable promise of MDT in deposition enhancement at the target region (i.e., the left lung). However, the positive effect of increasing particle size on DE enhancement was evident at smaller magnetic field strengths (Mn ≤ 1.5 T), whereas, at higher applied magnetic field strengths, increasing particle size has an inverse effect on DE. This implies that for efficient MDT in the human respiratory system, an optimal combination of magnetic drug carrier characteristics and magnetic field strength has to be achieved.
Localization in finite vibroimpact chains: Discrete breathers and multibreathers.
Grinberg, Itay; Gendelman, Oleg V
2016-09-01
We explore the dynamics of strongly localized periodic solutions (discrete solitons or discrete breathers) in a finite one-dimensional chain of oscillators. Localization patterns with both single and multiple localization sites (breathers and multibreathers) are considered. The model involves a parabolic on-site potential with rigid constraints (the displacement domain of each particle is finite) and a linear nearest-neighbor coupling. When a particle approaches the constraint, it undergoes an inelastic impact according to Newton's impact model. The rigid nonideal impact constraints are the only source of nonlinearity and damping in the system. We demonstrate that this vibro-impact model allows derivation of exact analytic solutions for the breathers and multibreathers with an arbitrary set of localization sites, both in conservative and in forced-damped settings. Periodic boundary conditions are considered; exact solutions for other types of boundary conditions are also available. The local character of the nonlinearity permits explicit derivation of a monodromy matrix for the breather solutions. Consequently, the stability of the derived breather and multibreather solutions can be efficiently studied in the framework of simple methods of linear algebra, and with rather moderate computational effort. We find that the finiteness of the chain fragment and possible proximity of the localization sites strongly affect both the existence and the stability patterns of these localized solutions.
Kokeny, Paul; Cheng, Yu-Chung N; Xie, He
2018-05-01
Modeling MRI signal behaviors in the presence of discrete magnetic particles is important, as magnetic particles appear in nanoparticle-labeled cells, contrast agents, and other biological forms of iron. Currently, many models that take into account the discrete particle nature of a system have been used to predict magnitude signal decays in the form of R2* or R2' from a single voxel. Little work has been done on predicting phase signals. In addition, most calculations of phase signals rely on the assumption that a system containing discrete particles behaves as a continuous medium. In this work, numerical simulations are used to investigate MRI magnitude and phase signals from discrete particles, without diffusion effects. Factors such as particle size, number density, susceptibility, volume fraction, particle arrangements for their randomness, and field of view have been considered in the simulations. The results are compared to either a ground truth model, theoretical work based on continuous media, or previous literature. Suitable parameters used to model particles in several voxels that lead to acceptable magnetic field distributions around particle surfaces and accurate MR signals are identified. The phase values as a function of echo time from a central voxel filled by particles can be significantly different from those of a continuous cubic medium. However, a completely random distribution of particles can lead to an R2' value which agrees with the prediction from the static dephasing theory. A sphere with a radius of at least 4 grid points used in simulations is found to be acceptable to generate MR signals equivalent to those from a larger sphere. Increasing the number of particles at a fixed volume fraction in simulations reduces the resulting variance in the phase behavior, which converges to almost the same value for different particle numbers at each echo time.
The variance of phase values is also reduced when increasing the number of particles in a fixed voxel. These results indicate that MRI signals from voxels containing discrete particles, even with a sufficient number of particles per voxel, cannot be properly modeled by a continuous medium with an equivalent susceptibility value in the voxel.
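The discrete-particle field computation underlying such simulations can be sketched as follows; this is a minimal static-dephasing sketch (no diffusion), assuming ideal uniformly magnetized spheres with B0 along z, and the `gamma_b0` value used below is an arbitrary illustrative number:

```python
import numpy as np

def sphere_field_shift(points, centers, radius, chi):
    """Fractional field shift dB/B0 outside uniformly magnetized spheres
    (B0 along z): (chi/3) * (a/r)**3 * (3 cos^2(theta) - 1), superposed."""
    shift = np.zeros(len(points))
    for c in centers:
        r = points - c
        d = np.linalg.norm(r, axis=1)
        cos2 = (r[:, 2] / d) ** 2
        shift += (chi / 3.0) * (radius / d) ** 3 * (3.0 * cos2 - 1.0)
    return shift

def voxel_signal(shift, gamma_b0, te):
    # complex voxel signal: average of spin phase factors at echo time te
    return np.mean(np.exp(-1j * gamma_b0 * shift * te))
```

Sampling `points` densely over a voxel and sweeping `te` gives the magnitude and phase curves whose departure from the continuous-medium prediction the paper quantifies.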
Phenomenological picture of fluctuations in branching random walks
NASA Astrophysics Data System (ADS)
Mueller, A. H.; Munier, S.
2014-10-01
We propose a picture of the fluctuations in branching random walks, which leads to predictions for the distribution of a random variable that characterizes the position of the bulk of the particles. We also interpret the 1/√t correction to the average position of the rightmost particle of a branching random walk for large times t ≫ 1, computed by Ebert and Van Saarloos, as fluctuations on top of the mean-field approximation of this process with a Brunet-Derrida cutoff at the tip that simulates discreteness. Our analytical formulas successfully compare to numerical simulations of a particular model of a branching random walk.
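The effect of a Brunet-Derrida cutoff is easy to demonstrate on a deterministic front equation. The sketch below integrates a discrete FKPP equation and zeroes the field below a cutoff that mimics particle discreteness; the grid, diffusivity, and cutoff value are illustrative choices, not taken from the paper:

```python
import numpy as np

def front_position(n_steps, cutoff=0.0, nx=400, D=0.1, dt=0.2):
    """Integrate a discrete FKPP front u_t = D u_xx + u(1 - u) from a step
    initial condition and return the leftmost site with u < 1/2."""
    u = np.zeros(nx)
    u[:10] = 1.0
    for _ in range(n_steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        lap[0] = lap[-1] = 0.0                 # suppress periodic wraparound
        u = np.clip(u + dt * (D * lap + u * (1.0 - u)), 0.0, 1.0)
        if cutoff > 0.0:
            u[u < cutoff] = 0.0                # Brunet-Derrida discreteness cutoff
    return int(np.argmax(u < 0.5))
```

Suppressing the leading edge below the cutoff slows the pulled front, the mean-field counterpart of the discreteness-induced fluctuation effects analyzed in the paper.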
Simulation of deterministic energy-balance particle agglomeration in turbulent liquid-solid flows
NASA Astrophysics Data System (ADS)
Njobuenwu, Derrick O.; Fairweather, Michael
2017-08-01
An efficient technique to simulate turbulent particle-laden flow at high mass loadings within the four-way coupled simulation regime is presented. The technique implements large-eddy simulation, discrete particle simulation, a deterministic treatment of inter-particle collisions, and an energy-balanced particle agglomeration model. The algorithm to detect inter-particle collisions is such that the computational costs scale linearly with the number of particles present in the computational domain. On detection of a collision, particle agglomeration is tested based on the pre-collision kinetic energy, restitution coefficient, and van der Waals' interactions. The performance of the technique developed is tested by performing parametric studies on the influence of the restitution coefficient (en = 0.2, 0.4, 0.6, and 0.8), particle size (dp = 60, 120, 200, and 316 μm), Reynolds number (Reτ = 150, 300, and 590), and particle concentration (αp = 5.0 × 10-4, 1.0 × 10-3, and 5.0 × 10-3) on particle-particle interaction events (collision and agglomeration). The results demonstrate that the collision frequency shows a linear dependency on the restitution coefficient, while the agglomeration rate shows an inverse dependence. Collisions among smaller particles are more frequent and efficient in forming agglomerates than those of coarser particles. The particle-particle interaction events show a strong dependency on the shear Reynolds number Reτ, while increasing the particle concentration effectively enhances particle collision and agglomeration whilst having only a minor influence on the agglomeration rate. Overall, the sensitivity of the particle-particle interaction events to the selected simulation parameters is found to influence the population and distribution of the primary particles and agglomerates formed.
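The energy-balance agglomeration test described above can be sketched as a comparison between the post-collision kinetic energy (after restitution losses) and the van der Waals adhesion energy. The functional form and all numbers below are illustrative assumptions, not the authors' model:

```python
def sticks_after_collision(m1, m2, v_rel, e_n, work_of_adhesion):
    """Energy-balance agglomeration test (sketch): two colliding particles
    agglomerate if the kinetic energy remaining after an inelastic collision
    (restitution coefficient e_n) cannot overcome the vdW adhesion energy."""
    m_eff = m1 * m2 / (m1 + m2)                       # reduced mass
    e_kin_post = 0.5 * m_eff * (e_n * v_rel) ** 2     # energy left after impact
    return e_kin_post < work_of_adhesion

# slow collision sticks, fast collision rebounds (toy numbers, SI-like units)
print(sticks_after_collision(1e-9, 1e-9, 0.01, 0.4, 1e-14))  # True
print(sticks_after_collision(1e-9, 1e-9, 1.0, 0.4, 1e-14))   # False
```

The inverse dependence of agglomeration on the restitution coefficient reported in the abstract is visible directly in this criterion: larger e_n leaves more post-collision kinetic energy, so fewer collisions satisfy the sticking condition.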
NASA Technical Reports Server (NTRS)
1978-01-01
The practicability of using a classical light-scattering technique, involving comparison of angular scattering intensity patterns with theoretically determined Mie and Rayleigh patterns, to detect discrete soot particles (diameter less than 50 nm) in premixed propane/air and propane/oxygen-helium flames is considered. The experimental apparatus employed in this investigation included a laser light source, a flat-flame burner, specially coated optics, a cooled photomultiplier detector, and a lock-in voltmeter readout. Although large, agglomerated soot particles were detected and sized, it was not possible to detect small, discrete particles. The limiting factor appears to be background scattering by the system's optics.
Combining neural networks and signed particles to simulate quantum systems more efficiently
NASA Astrophysics Data System (ADS)
Sellier, Jean Michel
2018-04-01
Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact, it is able to simulate many-body systems in a time-dependent fashion on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed suffers from the curse of dimensionality. In this work, we introduce a new technique, based on an appropriately tailored neural network combined with the signed particle formalism, which drastically reduces the computation time and memory required to simulate time-dependent quantum systems. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations radically drops, since the kernel no longer needs to be stored: it is computed by the neural network itself, only on the cells of the (discretized) phase space which are occupied by particles. As is clearly shown in the final part of this paper, not only does this novel approach drastically reduce the computational time, it also remains accurate. The author believes this work opens the way towards effective design of quantum devices, with incredible practical implications.
DOUAR: A new three-dimensional creeping flow numerical model for the solution of geological problems
NASA Astrophysics Data System (ADS)
Braun, Jean; Thieulot, Cédric; Fullsack, Philippe; DeKool, Marthijn; Beaumont, Christopher; Huismans, Ritske
2008-12-01
We present a new finite element code for the solution of the Stokes and energy (or heat transport) equations that has been purposely designed to address crustal-scale to mantle-scale flow problems in three dimensions. Although it is based on an Eulerian description of deformation and flow, the code, which we named DOUAR ('Earth' in the Breton language), has the ability to track interfaces and, in particular, the free surface, by using a dual representation based on a set of particles placed on the interface and the computation of a level set function on the nodes of the finite element grid, thus ensuring accuracy and efficiency. The code also makes use of a new method to compute the dynamic Delaunay triangulation connecting the particles, based on a non-Euclidean, curvilinear measure of distance, ensuring that the density of particles remains uniform and/or dynamically adapted to the curvature of the interface. The finite element discretization is based on a non-uniform, yet regular octree division of space within a unit cube that allows efficient adaptation of the finite element discretization, e.g. in regions of strong velocity gradient or high interface curvature. The finite elements are cubes (the leaves of the octree) in which a q1-p0 interpolation scheme is used. Nodal incompatibilities across faces separating elements of differing size are dealt with by introducing linear constraints among nodal degrees of freedom. Discontinuities in material properties across the interfaces are accommodated by the use of a novel method (which we call divFEM) to integrate the finite element equations, in which the elemental volume is divided by a local octree to an appropriate depth (resolution). A variety of rheologies have been implemented, including linear, non-linear and thermally activated creep, and brittle (or plastic) frictional deformation.
A simple smoothing operator has been defined to avoid checkerboard oscillations in pressure that tend to develop when using a highly irregular octree discretization and the tri-linear (or q1-p0) finite element. A three-dimensional cloud of particles is used to track material properties that depend on the integrated history of deformation (the integrated strain, for example); its density is variable and dynamically adapted to the computed flow. The large system of algebraic equations that results from the finite element discretization and linearization of the basic partial differential equations is solved using a multi-frontal massively parallel direct solver that can efficiently factorize poorly conditioned systems resulting from the highly non-linear rheology and the presence of the free surface. The code is almost entirely parallelized. We present example results, including the onset of a Rayleigh-Taylor instability, the indentation of a rigid-plastic material and the formation of a fold beneath a free eroding surface, which demonstrate the accuracy, efficiency and appropriateness of the new code for solving complex geodynamical problems in three dimensions.
The discrete regime of flame propagation
NASA Astrophysics Data System (ADS)
Tang, Francois-David; Goroshin, Samuel; Higgins, Andrew
The propagation of laminar dust flames in iron dust clouds was studied in a low-gravity environment on-board a parabolic flight aircraft. The elimination of buoyancy-induced convection and particle settling permitted measurements of fundamental combustion parameters such as the burning velocity and the flame quenching distance over a wide range of particle sizes and in different gaseous mixtures. The discrete regime of flame propagation was observed by substituting the nitrogen present in air with xenon, an inert gas with a significantly lower heat conductivity. Flame propagation in the discrete regime is controlled by the heat transfer between neighboring particles, rather than by the particle burning rate used by traditional continuum models of heterogeneous flames. The propagation mechanism of discrete flames depends on the spatial distribution of particles, and thus such flames are strongly influenced by local fluctuations in the fuel concentration. Constant pressure laminar dust flames were observed inside 70 cm long, 5 cm diameter Pyrex tubes. Equally-spaced plate assemblies forming rectangular channels were placed inside each tube to determine the quenching distance, defined as the minimum channel width through which a flame can successfully propagate. High-speed video cameras were used to measure the flame speed and a fiber optic spectrometer was used to measure the flame temperature. Experimental results were compared with predictions obtained from a numerical model of a three-dimensional flame developed to capture both the discrete nature and the random distribution of particles in the flame. Though good qualitative agreement was obtained between model predictions and experimental observations, residual g-jitters and the short reduced-gravity periods prevented further investigations of propagation limits in the discrete regime.
The full exploration of the discrete flame phenomenon would require high-quality, long duration reduced gravity environment available only on orbital platforms.
Reduced discretization error in HZETRN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu
2013-02-01
The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks
Rudinger, Kenneth; Gamble, John King; Bach, Eric; ...
2013-07-01
Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.
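A minimal single-particle continuous-time quantum walk, of the kind these graph certificates are built from, can be computed by eigendecomposition of the adjacency matrix. The graph and evolution time below are illustrative choices, not taken from the paper:

```python
import numpy as np

def ctqw_probs(adj, t, start=0):
    """Continuous-time quantum walk on a graph: evolve a walker localized on
    vertex `start` under U = exp(-i A t), built from the eigendecomposition
    of the (real symmetric) adjacency matrix A."""
    lam, V = np.linalg.eigh(adj)
    U = (V * np.exp(-1j * lam * t)) @ V.T      # V diag(e^{-i lam t}) V^T
    psi = U[:, start]                          # evolved basis state
    return np.abs(psi) ** 2                    # measurement probabilities

# walk on a 4-cycle
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
p = ctqw_probs(A, 0.7)
print(p)
```

A graph certificate in this spirit would collect such probability vectors (or their sorted lists) over vertices and times, and compare them between candidate graphs; the abstract's point is that the distinguishing power of the resulting certificate depends delicately on how that collection is constructed.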
Variable Weight Fractional Collisions for Multiple Species Mixtures
2017-08-28
DISTRIBUTION A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED; PA #17517. Variable weights for dynamic range, from continuum to discrete representation: many particles are approximated by a continuous distribution; a discretized velocity distribution function (VDF) yields the Vlasov equation, but the collision integral remains a problem. Particle methods reduce the VDF to a set of delta functions, with collisions between discrete velocities, but poorly resolve the distribution tail (which is critical to inelastic collisions). Variable weights permit extra degrees of freedom in
Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.
Xu, X Q
2008-07-01
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ, θ, γ, μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
Physical and chemical characterization of actinides in soil from Johnston Atoll
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, S.F.; Bates, J.K.; Buck, E.C.
1997-02-01
Characterization of the actinide content of a sample of contaminated coral soil from Johnston Atoll, the site of three non-nuclear destructs of nuclear warhead-carrying THOR missiles in 1962, revealed that >99% of the total actinide content is associated with discrete bomb fragments. After removal of these fragments, there was an inverse correlation between actinide content and soil particle size in particles from 43 to 0.4 μm diameter. Detailed analyses of this remaining soil revealed no discrete actinide phase in these soil particles, despite measurable actinide content. Observations indicate that exposure to the environment has caused the conversion of relatively insoluble actinide oxides to the more soluble actinyl oxides and actinyl carbonate coordinated complexes. This process has led to dissolution of actinides from discrete particles and migration to the surrounding soil surfaces, resulting in a dispersion greater than would be expected by physical transport of discrete particles alone. 26 refs., 4 figs., 1 tab.
Lu, Liqiang; Gao, Xi; Li, Tingwen; ...
2017-11-02
For a long time, salt tracers have been used to measure the residence time distribution (RTD) of fluidized catalytic cracking (FCC) particles. However, due to limitations in experimental measurements and simulation methods, the ability of salt tracers to faithfully represent RTDs has never been directly investigated. Our current simulation results, using coarse-grained computational fluid dynamics coupled with the discrete element method (CFD-DEM) with filtered drag models, show that the residence time of salt tracers with the same terminal velocity as FCC particles is slightly larger than that of FCC particles. This research also demonstrates the ability of filtered drag models to predict the correct RTD curve for FCC particles, while the homogeneous drag model may only be used in the dilute riser flow of Geldart type B particles. The RTD of large-scale reactors can then be efficiently investigated with our proposed numerical method as well as by using the old-fashioned salt tracer technology.
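Extracting an RTD curve from sampled tracer residence times, as one would from CFD-DEM particle exit records, can be sketched with a density-normalized histogram. The gamma-distributed samples below merely stand in for simulation data:

```python
import numpy as np

def rtd_curve(residence_times, bins=20):
    """Empirical residence time distribution E(t): a density-normalized
    histogram of particle residence times, so that sum(E * width) = 1."""
    counts, edges = np.histogram(residence_times, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts, np.diff(edges)

rng = np.random.default_rng(3)
times = rng.gamma(shape=2.0, scale=1.5, size=10_000)  # stand-in for tracer exit times
t, E, w = rtd_curve(times)
mean_rt = np.sum(t * E * w)                           # first moment of the RTD
print(mean_rt)                                        # near the distribution mean, 3.0
```

Comparing such curves computed from tracer particles and from the FCC particles themselves is exactly the kind of check the abstract describes.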
Vassal, J-P; Orgéas, L; Favier, D; Auriault, J-L; Le Corre, S
2008-01-01
In paper I [Vassal, Phys. Rev. E 77, 011302 (2008)] of this contribution, the effective diffusion properties of particulate media with highly conductive particles and particle-particle interfacial barriers were investigated with the homogenization method with multiple-scale asymptotic expansions. Three different macroscopic models were proposed depending on the quality of contacts between particles. However, depending on the nature and the geometry of the particles contained in representative elementary volumes of the considered media, the localization problems to be solved to compute the effective conductivity of the first two models can rapidly become cumbersome and time- and memory-consuming. In this second paper, the above problem is simplified and applied to networks made of slender, wavy and entangled fibers. For these types of media, discrete formulations of the localization problems for all macroscopic models can be obtained, leading to very efficient numerical calculations. Semianalytical expressions of the effective conductivity tensors are also proposed under simplifying assumptions. The case of straight, monodisperse and homogeneously distributed slender fibers with a circular cross section is further explored. Compact semianalytical and analytical estimations are obtained when fiber-fiber contacts are perfect or very poor. Moreover, two discrete element codes have been developed and used to solve localization problems on representative elementary volumes for the same types of contacts. Numerical results underline the significant roles of the fiber content, the orientation of fibers as well as the relative position and orientation of contacting fibers on the effective conductivity tensors. Semianalytical and analytical predictions are discussed and compared with numerical results.
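The discrete, contact-dominated conduction problem described above reduces to a resistor network: each fiber-fiber contact is a conductance, and the effective conductivity follows from solving a graph Laplacian system. A toy version with an invented node/contact layout:

```python
import numpy as np

def effective_conductance(n_nodes, contacts, g_contact, src, sink):
    """Effective conductance of a contact network: assemble the graph
    Laplacian, impose V[src] = 1 and V[sink] = 0, and read off the flux
    injected at the source node."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in contacts:
        L[i, i] += g_contact
        L[j, j] += g_contact
        L[i, j] -= g_contact
        L[j, i] -= g_contact
    A = L.copy()
    b = np.zeros(n_nodes)
    for node, v in ((src, 1.0), (sink, 0.0)):   # Dirichlet conditions
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = v
    V = np.linalg.solve(A, b)
    return (L @ V)[src]                          # current entering at the source

# two contacts of conductance 2 in series: effective conductance 1
print(effective_conductance(3, [(0, 1), (1, 2)], 2.0, 0, 2))
```

On a representative elementary volume of a fiber network, the node list would come from fiber-fiber contact detection and the contact conductances from the interfacial barrier model; the linear solve is unchanged.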
NASA Astrophysics Data System (ADS)
Yan, Beichuan; Regueiro, Richard A.
2018-02-01
A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and theoretical functions for 3D DEM scalability and memory usage are derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as: minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided and demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
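Speedup and parallel efficiency of the kind reported above are computed directly from wall-clock timings at different core counts. The timings below are made up for illustration:

```python
def scaling_metrics(times):
    """Strong-scaling speedup and parallel efficiency from wall-clock timings
    keyed by core count, relative to the smallest core count measured."""
    base_cores = min(times)
    base_t = times[base_cores]
    out = {}
    for cores, t in sorted(times.items()):
        speedup = base_t / t
        out[cores] = (speedup, speedup / (cores / base_cores))
    return out

# hypothetical timings (seconds) for one fixed problem size
m = scaling_metrics({1: 1000.0, 8: 140.0, 64: 22.0})
for cores, (s, eff) in m.items():
    print(cores, round(s, 2), round(eff, 2))
```

Efficiency below 1 at large core counts reflects communication and load-imbalance overheads; weak scaling would instead grow the particle count proportionally with the core count and track the wall-clock time directly.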
Computational fluid dynamics (CFD) simulation of a newly designed passive particle sampler.
Sajjadi, H; Tavakoli, B; Ahmadi, G; Dhaniyala, S; Harner, T; Holsen, T M
2016-07-01
In this work a series of computational fluid dynamics (CFD) simulations were performed to predict the deposition of particles on a newly designed passive dry deposition (Pas-DD) sampler. The sampler uses a parallel plate design and a conventional polyurethane foam (PUF) disk as the deposition surface. The deposition of particles with sizes between 0.5 and 10 μm was investigated for two different geometries of the Pas-DD sampler for different wind speeds and various angles of attack. To evaluate the mean flow field, the k-ɛ turbulence model was used and turbulent fluctuating velocities were generated using the discrete random walk (DRW) model. The CFD software ANSYS-FLUENT was used for performing the numerical simulations. It was found that the deposition velocity increased with particle size or wind speed. The modeled deposition velocities were in general agreement with the experimental measurements and they increased when flow entered the sampler with a non-zero angle of attack. The particle-size dependent deposition velocity was also dependent on the geometry of the leading edge of the sampler; deposition velocities were more dependent on particle size and wind speeds for the sampler without the bend in the leading edge of the deposition plate, compared to a flat plate design. Foam roughness was also found to have a small impact on particle deposition. Copyright © 2016 Elsevier Ltd. All rights reserved.
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
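The implicit time step with Newton iteration mentioned above can be sketched on a scalar reaction term; the logistic nonlinearity here is only an illustrative stand-in for the particle-position ODE system:

```python
def implicit_euler_newton(f, dfdu, u0, dt, steps):
    """Implicit (backward) Euler for u' = f(u), solving the nonlinear update
    u_{n+1} = u_n + dt * f(u_{n+1}) with a Newton iteration at each step."""
    u = u0
    for _ in range(steps):
        x = u                                       # initial Newton guess
        for _ in range(50):
            g = x - u - dt * f(x)                   # residual of implicit update
            x_new = x - g / (1.0 - dt * dfdu(x))    # Newton correction
            if abs(x_new - x) < 1e-12:
                x = x_new
                break
            x = x_new
        u = x
    return u

# logistic reaction term u' = u(1-u): solutions from (0, 1) tend to 1
u = implicit_euler_newton(lambda u: u * (1 - u), lambda u: 1 - 2 * u, 0.1, 0.5, 40)
print(u)
```

A Picard iteration would instead repeat x ← u + dt·f(x) without the Jacobian; it converges more slowly but avoids differentiating f.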
Bulbous head formation in bidisperse shallow granular flows over inclined planes
NASA Astrophysics Data System (ADS)
Denissen, I.; Thornton, A.; Weinhart, T.; Luding, S.
2017-12-01
Predicting the behaviour of hazardous natural granular flows (e.g. debris flows and pyroclastic flows) is vital for an accurate assessment of the risks posed by such events. In these situations, an inversely graded vertical particle-size distribution develops, with larger particles on top of smaller particles. As the surface velocity of such flows is larger than the mean velocity, the larger material is then transported to the flow front. This creates a downstream size-segregation structure, resulting in a flow front composed purely of large particles, which are generally more frictional in geophysical flows. Thus, this segregation process reduces the mobility of the flow front, resulting in the formation of a so-called bulbous head. One of the main challenges of simulating these hazardous natural granular flows is the enormous number of particles they contain, which makes discrete particle simulations too computationally expensive to be practically useful. Continuum methods are able to simulate the bulk flow and segregation behaviour of such flows, but have to make averaging approximations that reduce the huge number of degrees of freedom to a few continuum fields. Small-scale periodic discrete particle simulations can be used to determine the material parameters needed for the continuum model. In this presentation, we use a depth-averaged model to predict the flow profile for particulate chute flows, based on flow height, depth-averaged velocity and particle-size distribution [1], and show that the bulbous head structure naturally emerges from this model. The long-time behaviour of this solution of the depth-averaged continuum model converges to a novel travelling wave solution [2]. Furthermore, we validate this framework against computationally expensive 3D particle simulations, where we see surprisingly good agreement between both approaches, considering the approximations made in the continuum model.
We conclude by showing that the travelling distance and height of a bidisperse granular avalanche can be well predicted by our continuum model. REFERENCES [1] M. J. Woodhouse, A. R. Thornton, C. G. Johnson, B. P. Kokelaar, J. M. N. T. Gray, J. Fluid Mech., 709, 543-580 (2012) [2] I.F.C. Denissen, T. Weinhart, A. Te Voortwis, S. Luding, J. M. N. T. Gray, A. R. Thornton, under review with J. Fluid Mech. (2017)
Efficient genetic algorithms using discretization scheduling.
McLay, Laura A; Goldberg, David E
2005-01-01
In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these applications, cost and accuracy are governed by the discretization error incurred when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm (GA) in order to use the least computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to that of a GA using a constant discretization. There are three ingredients for discretization scheduling: population sizing, estimated time for each function evaluation, and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling.
Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades
NASA Astrophysics Data System (ADS)
Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang
2017-12-01
This article introduces a method of mistuned parameter identification which consists of static frequency testing of blades, dichotomy and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, providing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2016-11-01
We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
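The local least-squares construction behind GFD can be sketched as a second-order Taylor fit over the nearest neighbors of a scattered 2D point cloud; the field, cloud, and neighbor count below are illustrative, not the authors' setup:

```python
import numpy as np

def gfd_gradient(points, values, center_idx, n_neighbors=16):
    """General Finite Difference sketch: least-squares fit of a local Taylor
    expansion over the nearest neighbors to recover first derivatives."""
    x0 = points[center_idx]
    d = np.linalg.norm(points - x0, axis=1)
    nbr = np.argsort(d)[1:n_neighbors + 1]         # skip the center itself
    dx = points[nbr] - x0                          # (m, 2) neighbor offsets
    # Taylor basis up to second order: [dx, dy, dx^2/2, dx*dy, dy^2/2]
    A = np.column_stack([dx[:, 0], dx[:, 1],
                         0.5 * dx[:, 0]**2, dx[:, 0] * dx[:, 1], 0.5 * dx[:, 1]**2])
    b = values[nbr] - values[center_idx]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:2]                                # (du/dx, du/dy) at the center

rng = np.random.default_rng(7)
pts = rng.uniform(0, 1, size=(400, 2))
u = np.sin(pts[:, 0]) * np.cos(pts[:, 1])          # field with a known gradient
g = gfd_gradient(pts, u, center_idx=0)
print(g)
```

With 16 neighbors and 5 basis terms, the system is overdetermined; a weighted variant (down-weighting distant neighbors) is the usual refinement, and the same fit also returns the second derivatives needed for the viscous terms.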
On Efficient Multigrid Methods for Materials Processing Flows with Small Particles
NASA Technical Reports Server (NTRS)
Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael
2004-01-01
Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical detail. The efficiency is achieved by employing interacting discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
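A minimal multigrid V-cycle for the 1D Poisson problem illustrates the "different discretizations on different scales" idea, with weighted Jacobi smoothing, full-weighting restriction, and linear interpolation. These are standard textbook choices, not the algorithm of this work:

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for -u'' = f with zero Dirichlet ends (boundary rows stay 0)."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoother for the same operator."""
    for _ in range(sweeps):
        unew = u.copy()
        unew[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = (1 - w) * u + w * unew
    return u

def vcycle(u, f, h):
    """One V-cycle: smooth, restrict the residual, recurse, interpolate, smooth."""
    u = jacobi(u, f, h, 3)
    if len(u) <= 3:
        return u
    r = residual(u, f, h)
    rc = r[::2].copy()                                               # every other node
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]   # full weighting
    ec = vcycle(np.zeros_like(rc), rc, 2 * h)                        # coarse correction
    e = np.zeros_like(u)
    e[::2] = ec                                                      # linear interpolation
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, 3)

x = np.linspace(0.0, 1.0, 65)
f = np.pi**2 * np.sin(np.pi * x)      # exact continuum solution: sin(pi x)
u = np.zeros_like(x)
for _ in range(10):
    u = vcycle(u, f, x[1] - x[0])
print(np.max(np.abs(u - np.sin(np.pi * x))))
```

The cost per cycle is O(n) regardless of grid size, which is the efficiency property the abstract appeals to; the particle-laden flow setting adds the complication that the fine scales follow the particles rather than a fixed grid hierarchy.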
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Wei; DeCroix, David; Sun, Xin
The attrition of particles is a major industrial concern in many fluidization systems as it can have undesired effects on the product quality and on the reliable operation of process equipment. Therefore, to accommodate the screening and selection of catalysts for a specific process in fluidized beds, risers, or cyclone applications, their attrition propensity is usually estimated through jet cup attrition testing, where the test material is subjected to high gas velocities in a jet cup. However, this method is far from perfect despite its popularity, largely due to its inconsistency in different testing set-ups. In order to better understand the jet cup testing results as well as their sensitivity to different operating conditions, a coupled computational fluid dynamic (CFD) - discrete element method (DEM) model has been developed in the current study to investigate the particle attrition in a jet cup and its dependence on various factors, e.g. jet velocity, initial particle size, particle density, and apparatus geometry.
Numerical investigation of adhesion effects on solid particles filtration efficiency
NASA Astrophysics Data System (ADS)
Shaffee, Amira; Luckham, Paul; Matar, Omar K.
2017-11-01
Our work investigates the effectiveness of the particle filtration process, in particular using a fully-coupled Computational Fluid Dynamics (CFD) and Discrete Element Method (DEM) approach involving polydisperse, adhesive solid particles. We found that an increase in particle adhesion reduces solid production through the opening of a wire-wrap type filter. Over time, as particle agglomerates continuously deposit on top of the filter, layer upon layer of particles builds up, forming a particle pack. We observe that with increasing particle adhesion the pack height also increases, while the average particle volume fraction of the pack decreases. This trend suggests higher porosity and looser packing of solid particles within the pack with increased adhesion. Furthermore, we found that the pressure drop for the adhesive case is lower than for the non-adhesive case. Our results suggest that agglomerating solid particles have beneficial effects on particle filtration. One important application of these findings is the design and optimization of sand control for hydrocarbon wells with excessive sand production, which is a major challenge in the oil and gas industry. Funding from PETRONAS and RAEng UK for Research Chair (OKM) gratefully acknowledged.
Fluidization of spherocylindrical particles
NASA Astrophysics Data System (ADS)
Mahajan, Vinay V.; Nijssen, Tim M. J.; Fitzgerald, Barry W.; Hofman, Jeroen; Kuipers, Hans; Padding, Johan T.
2017-06-01
Multiphase (gas-solid) flows are encountered in numerous industrial applications such as pharmaceutical, food, and agricultural processing and energy generation. A coupled computational fluid dynamics (CFD) and discrete element method (DEM) approach is a popular way to study such flows at the particle scale. However, most of these studies deal with spherical particles, while in reality the particles are rarely spherical. The particle shape can have a significant effect on the hydrodynamics in a fluidized bed. Moreover, most studies in the literature use inaccurate drag laws because accurate laws are not readily available. The drag force acting on a non-spherical particle can vary considerably with particle shape, orientation with respect to the flow, Reynolds number, and packing fraction. In this work, the CFD-DEM approach is extended to model a laboratory-scale fluidized bed of spherocylindrical (rod-like) particles. These rod-like particles can be classified as Geldart D particles and have an aspect ratio of 4. Experiments are performed to study the particle flow behavior in a quasi-2D fluidized bed. Numerically obtained results for pressure drop and bed height are compared with experiments. The capability of the CFD-DEM approach to efficiently describe the global bed dynamics of a fluidized bed of rod-like particles is demonstrated.
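As a point of reference for the drag-law discussion, the classical Schiller-Naumann correlation for an isolated sphere can be sketched as below. It is a baseline only: the rod-like particles of this study require shape- and orientation-dependent corrections that this formula does not capture.

```python
def drag_coefficient_sphere(Re):
    """Schiller-Naumann drag coefficient for an isolated sphere (valid for
    Re < ~1000), a common CFD-DEM baseline. Nonspherical particles need
    shape- and orientation-dependent corrections, as the abstract notes."""
    return 24.0 / Re * (1.0 + 0.15 * Re ** 0.687)
```

In the Stokes limit (Re → 0) the correlation reduces to Cd = 24/Re, and at Re = 100 it gives Cd ≈ 1.1, consistent with standard sphere-drag data.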
NASA Astrophysics Data System (ADS)
KIM, Jong Woon; LEE, Young-Ouk
2017-09-01
As computing power improves, computer codes that use a deterministic method may seem less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that it yields the flux everywhere in the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that, like ATTILA, uses an unstructured tetrahedral mesh. For pre- and post-processing, Gmsh is used to generate the unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.
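The cell-by-cell sweep at the heart of discrete-ordinates codes can be illustrated in one dimension. This is a minimal diamond-difference sketch for a single angle in a purely absorbing slab; production codes such as AETIUS sweep unstructured tetrahedral meshes with scattering sources and many ordinates.

```python
import math

def sn_sweep_1d(sigma_t, width, n_cells, mu=1.0, psi_in=1.0):
    """Transport sweep for one discrete ordinate (direction cosine mu > 0)
    through a purely absorbing 1-D slab with diamond-difference closure.
    Note: diamond difference can go negative when sigma_t*dx > 2*mu, so the
    mesh must resolve the mean free path."""
    dx = width / n_cells
    factor = (2.0 * mu - sigma_t * dx) / (2.0 * mu + sigma_t * dx)
    psi = psi_in
    for _ in range(n_cells):      # march downstream, cell by cell
        psi *= factor             # outgoing edge flux of each cell
    return psi

# A 5 mean-free-path slab: the exiting flux should approach exp(-5)
psi_out = sn_sweep_1d(sigma_t=1.0, width=5.0, n_cells=500)
```

Deep-penetration problems like the one in the paper are exactly where such a sweep shines: it computes the attenuated flux everywhere along the path, with no statistical noise.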
NASA Astrophysics Data System (ADS)
Martinez, R.; Larouche, D.; Cailletaud, G.; Guillot, I.; Massinon, D.
2015-06-01
The precipitation of Al2Cu particles in a 319 T7 aluminum alloy has been modeled. A theoretical approach enables the concomitant computation of nucleation, growth, and coarsening. The framework is based on an implicit finite-difference scheme. The equation of continuity is discretized in time and space to obtain a matrix form. Inverting a tridiagonal matrix then yields the evolution of the size distribution of Al2Cu particles at t + Δt. The fluxes between the size-class boundaries, as well as the fluxes at the domain boundaries, are computed so as to conserve the mass of the system. The essential results of the model are compared to TEM measurements. Simulations provide quantitative features on the impact of the cooling rate on the size distribution of particles, and the results agree with the TEM measurements. This kind of multiscale approach opens new perspectives in the design of highly loaded components such as cylinder heads, enabling a more precise prediction of the microstructure and its evolution as a function of continuous cooling rates.
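The tridiagonal inversion central to such implicit schemes is typically carried out with the Thomas algorithm. A generic sketch follows (illustrative only, not the authors' actual matrix):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n): sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d.
    This is the standard solver behind implicit finite-difference schemes."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Verify against a dense solve on a diagonally dominant random system
rng = np.random.default_rng(1)
n = 50
a = rng.uniform(0.1, 1.0, n)
c = rng.uniform(0.1, 1.0, n)
b = a + c + rng.uniform(1.0, 2.0, n)          # strict diagonal dominance
d = rng.uniform(-1.0, 1.0, n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x = thomas_solve(a, b, c, d)
```

The O(n) cost per time step is what makes implicit marching of a size-distribution equation affordable over many cooling steps.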
NASA Astrophysics Data System (ADS)
Sellers, Michael S.; Lísal, Martin; Schweigert, Igor; Larentzos, James P.; Brennan, John K.
2017-01-01
In discrete particle simulations, when an atomistic model is coarse-grained, a tradeoff is made: a boost in computational speed for a reduction in accuracy. The Dissipative Particle Dynamics (DPD) methods help to recover lost accuracy of the viscous and thermal properties, while giving back a relatively small amount of computational speed. Since its initial development for polymers, one of the most notable extensions of DPD has been the introduction of chemical reactivity, called DPD-RX. In 2007, Maillet, Soulard, and Stoltz introduced implicit chemical reactivity in DPD through the concept of particle reactors and simulated the decomposition of liquid nitromethane. We present an extended and generalized version of the DPD-RX method, and have applied it to solid hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX). Demonstration simulations of reacting RDX are performed under shock conditions using a recently developed single-site coarse-grain model and a reduced RDX decomposition mechanism. A description of the methods used to simulate RDX and its transition to hot product gases within DPD-RX is presented. Additionally, we discuss several examples of the effect of shock speed and microstructure on the corresponding material chemistry.
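The three standard DPD force contributions can be sketched as follows. The parameters are the common water-like defaults for illustration, not the cited coarse-grain RDX model, and the reactivity extension of DPD-RX is not included.

```python
import numpy as np

def dpd_forces(pos, vel, a=25.0, gamma=4.5, kBT=1.0, rc=1.0, dt=0.01, rng=None):
    """Pairwise DPD forces: conservative + dissipative + random terms with the
    standard soft weight w(r) = 1 - r/rc. Minimal O(N^2) sketch; production
    codes use neighbor lists and thermostat-consistent integrators."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = np.sqrt(2.0 * gamma * kBT)        # fluctuation-dissipation relation
    F = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r >= rc or r == 0.0:
                continue
            e = rij / r
            w = 1.0 - r / rc
            fc = a * w * e                                            # conservative
            fd = -gamma * w * w * np.dot(e, vel[i] - vel[j]) * e      # dissipative
            fr = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # random
            fij = fc + fd + fr
            F[i] += fij                       # Newton's third law, pairwise
            F[j] -= fij
    return F

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 2.0, (10, 3))
vel = rng.standard_normal((10, 3))
F = dpd_forces(pos, vel, rng=rng)
```

Because the dissipative and random terms act pairwise along the line of centers, momentum is conserved exactly, which is why DPD recovers hydrodynamic (viscous) behavior that plain Langevin thermostats lose.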
Vectorization of a particle simulation method for hypersonic rarefied flow
NASA Technical Reports Server (NTRS)
Mcdonald, Jeffrey D.; Baganoff, Donald
1988-01-01
An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine-grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit, where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10⁷ particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.
Numerical simulation of filtration of mine water from coal slurry particles
NASA Astrophysics Data System (ADS)
Dyachenko, E. N.; Dyachenko, N. N.
2017-11-01
The discrete element method is applied to model a technology for clarification of industrial waste water containing fine-dispersed solid impurities. The process is analyzed at the level of discrete particles and pores. The effect of filter porosity on the volume fraction of particles has been shown. The degree of clarification of mine water was also calculated depending on the coal slurry particle size, taking into account the adhesion force.
Direct Simulation of Multiple Scattering by Discrete Random Media Illuminated by Gaussian Beams
NASA Technical Reports Server (NTRS)
Mackowski, Daniel W.; Mishchenko, Michael I.
2011-01-01
The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.
Faster and More Accurate Transport Procedures for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.
2010-01-01
Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
Modeling Spectra of Icy Satellites and Cometary Icy Particles Using Multi-Sphere T-Matrix Code
NASA Astrophysics Data System (ADS)
Kolokolova, Ludmilla; Mackowski, Daniel; Pitman, Karly M.; Joseph, Emily C. S.; Buratti, Bonnie J.; Protopapa, Silvia; Kelley, Michael S.
2016-10-01
The Multi-Sphere T-matrix code (MSTM) allows rigorous computations of the characteristics of light scattered by a cluster of spherical particles. It was introduced to the scientific community in 1996 (Mackowski & Mishchenko, 1996, JOSA A, 13, 2266). Later it was put online and became one of the most popular codes for studying the photopolarimetric properties of aggregated particles. Later versions of this code, especially its parallelized version MSTM3 (Mackowski & Mishchenko, 2011, JQSRT, 112, 2182), were used to compute the angular and wavelength dependence of the intensity and polarization of light scattered by aggregates of up to 4000 constituent particles (Kolokolova & Mackowski, 2012, JQSRT, 113, 2567). The version MSTM4 considers large thick slabs of spheres (Mackowski, 2014, Proc. of the Workshop "Scattering by aggregates", Bremen, Germany, March 2014, Th. Wriedt & Yu. Eremin, Eds., 6) and is significantly different from the earlier versions. It adopts a discrete Fourier convolution, implemented using a Fast Fourier Transform, for evaluation of the exciting field. MSTM4 is able to treat tens of thousands of spheres and is about 100 times faster than the MSTM3 code. This allows us not only to compute the light-scattering properties of a large number of electromagnetically interacting constituent particles, but also to perform multi-wavelength and multi-angular computations with rather reasonable CPU time and computer memory. We used MSTM4 to model near-infrared spectra of icy satellites of Saturn (Rhea, Dione, and Tethys data from Cassini VIMS), and of icy particles observed in the coma of comet 103P/Hartley 2 (data from EPOXI/DI HRII). Results of our modeling show that in the case of icy satellites the best fit to the observed spectra is provided by regolith made of spheres of radius ~1 micron with a porosity in the range 85%-95%, which varies slightly among the different satellites.
Fitting the spectra of the cometary icy particles requires icy aggregates of size larger than 40 micron with constituent spheres in the micron size range.
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
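The kind of step-size convergence study described above can be illustrated on a toy attenuation equation, where refining the marching step and comparing against the exact solution reveals the observed order of accuracy. Purely illustrative: HZETRN's coupled marching algorithms are far more involved.

```python
import numpy as np

def march(h, sigma=0.5, depth=10.0):
    """Forward-difference marching of a toy attenuation equation
    d(phi)/dx = -sigma * phi, standing in for a transport marching scheme.
    First-order accurate, so halving h should roughly halve the error."""
    phi = 1.0
    for _ in range(round(depth / h)):
        phi *= 1.0 - sigma * h
    return phi

# Refine the step size and estimate the observed order of accuracy
exact = np.exp(-0.5 * 10.0)
errs = [abs(march(h) - exact) for h in (0.1, 0.05, 0.025)]
order = np.log2(errs[0] / errs[1])
```

A convergence study of this shape (error vs. discretization parameter, plus an observed-order estimate) is what exposed the single-precision and step-size limitations discussed in the abstract.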
Faster and more accurate transport procedures for HZETRN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
Inertial particle dynamics in large artery flows - Implications for modeling arterial embolisms.
Mukherjee, Debanjan; Shadden, Shawn C
2017-02-08
The complexity of inertial particle dynamics through the swirling, chaotic flow structures characteristic of pulsatile large-artery hemodynamics poses significant challenges for a predictive understanding of the transport of such particles. This is specifically crucial for arterial embolisms, where knowledge of embolus transport to major vascular beds helps in disease diagnosis and surgical planning. Using a computational framework built upon image-based CFD and discrete particle dynamics modeling, a multi-parameter sampling-based study was conducted on embolic particle dynamics and transport. The results highlighted the strong influence of material properties, embolus size, release instant, and embolus source on embolus distribution to the cerebral, renal and mesenteric, and ilio-femoral vascular beds. The study also isolated the importance of shear-gradient lift and elastohydrodynamic contact in affecting embolic particle transport. Near-wall particle re-suspension due to lift alters aortogenic embolic particle dynamics significantly as compared to cardiogenic. The observations collectively indicated a complex interplay of particle inertia, fluid-particle density ratio, and wall collisions with chaotic flow structures, which renders the overall motion of the particles non-trivially dispersive. Copyright © 2017 Elsevier Ltd. All rights reserved.
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Qin, Hong; Liu, Jian
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.
Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming
2016-07-14
Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, the conventional PSO is insufficient to solve discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario. The particle updating rule is also reconsidered based on the subnetwork topology of MWSNs. In addition, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to find a better position quickly. Furthermore, the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption.
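The discrete-PSO idea can be illustrated with a generic binary PSO on a toy objective (the classic Kennedy-Eberhart sigmoid update). This sketches the general technique only; the paper's GMDPSO instead redefines position and velocity for routing paths and adds greedy search and memory.

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=20, iters=60,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic binary PSO: real-valued velocities are squashed through a
    sigmoid to give bit-activation probabilities. Illustrative sketch of
    discrete PSO, not the GMDPSO routing algorithm."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest = x.copy()
    pfit = np.array([fitness(p) for p in x])
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_bits))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # sample each bit with probability sigmoid(v)
        x = (rng.random((n_particles, n_bits)) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        fit = np.array([fitness(p) for p in x])
        better = fit > pfit
        pbest[better] = x[better]
        pfit[better] = fit[better]
        gbest = pbest[pfit.argmax()].copy()
    return gbest, int(pfit.max())

# Toy stand-in for a routing objective: maximize the number of set bits
best, score = binary_pso(lambda bits: int(bits.sum()), n_bits=12)
```

In a routing setting the bit string would encode link choices and the fitness would score path length and energy cost, which is why the paper must redefine the update rule over subnetwork topologies.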
Nonlinear Analysis of Two-phase Circumferential Motion in the Ablation Circumstance
NASA Astrophysics Data System (ADS)
Xiao-liang, Xu; Hai-ming, Huang; Zi-mao, Zhang
2010-05-01
In spacecraft reentry and in solid rocket propellant nozzles, thermochemical ablation is a complex process coupling convection, heat transfer, mass transfer, and chemical reaction. Based on the discrete vortex method (DVM), a thermochemical ablation model, and a particle kinetic model, a computational module dealing with two-phase circumferential motion under ablation conditions is designed, from which the ablation velocity and the circumferential flow field can be calculated. The computed nonlinear time series are analyzed with chaos identification methods: chaotic characteristics such as the correlation dimension and the maximum Lyapunov exponent are calculated, and the fractal dimensions of the vortex-bulb and particle distributions are also obtained; the nonlinear ablation process can thus be identified as a spatiotemporal chaotic process.
The behavior of a macroscopic granular material in vortex flow
NASA Astrophysics Data System (ADS)
Nishikawa, Asami
A granular material is defined as a collection of discrete particles, such as powder or grain. Granular materials display a large number of complex behaviors. In this project, the behavior of macroscopic granular materials under tornado-like vortex airflow, with varying airflow velocity, was observed and studied. The experimental system was composed of a 9.20-cm inner diameter acrylic pipe with a metal mesh bottom holding the particles, a PVC duct, and an airflow source controlled by a variable auto-transformer and a power meter. A fixed fan blade was attached to the duct's inner wall to create a tornado-like vortex airflow from straight flow. As the airflow velocity was increased gradually, the behavior of a set of same-diameter granular materials was observed. The observed behaviors were classified into six phases based on the macroscopic mechanical dynamics. Through this project, we gained insights into the significant parameters for a computer simulation of a similar system by Heath Rice [5]. Comparing the computationally and experimentally observed phase diagrams, we see similar structure. The experimental observations showed the effect of the initial arrangement of particles on the phase transitions.
Experimental and numerical characterization of expanded glass granules
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Woitzik, Christian; Düster, Alexander; Wriggers, Peter
2018-07-01
In this paper, the material response of expanded glass granules at different scales and under different boundary conditions is investigated. At grain scale, single particle tests can be used to determine properties like Young's modulus or crushing strength. With experiments like triaxial and oedometer tests, it is possible to examine the bulk mechanical behaviour of the granular material. Our experimental investigation is complemented by a numerical simulation where the discrete element method is used to compute the mechanical behaviour of such materials. In order to improve the simulation quality, effects such as rolling resistance, inelastic behaviour, damage, and crushing are also included in the discrete element method. Furthermore, the variation of the material properties of granules is modelled by a statistical distribution and included in our numerical simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Liu, Jian; He, Yang
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave.
Discrimination between discrete and continuum scattering from the sub-seafloor.
Holland, Charles W; Steininger, Gavin; Dosso, Stan E
2015-08-01
There is growing evidence that seabed scattering is often dominated by heterogeneities within the sediment volume as opposed to seafloor roughness. From a theoretical viewpoint, sediment volume heterogeneities can be described either by a fluctuation continuum or by discrete particles. In at-sea experiments, heterogeneity characteristics generally are not known a priori. Thus, an uninformed model selection is generally made, i.e., the researcher must arbitrarily select either a discrete or continuum model. It is shown here that it is possible to (acoustically) discriminate between continuum and discrete heterogeneities in some instances. For example, when the spectral exponent γ3 > 4, the volume scattering cannot be described by discrete particles. Conversely, when γ3 ≤ 2, the heterogeneities likely arise from discrete particles. Furthermore, in the range 2 < γ3 ≤ 4 it is sometimes possible to discriminate via physical bounds on the parameter values. The ability to so discriminate is important, because there are few tools for measuring small-scale, O(10⁻² to 10¹) m, sediment heterogeneities over large areas. Therefore, discriminating discrete vs continuum heterogeneities via acoustic remote sensing may lead to improved observations and concomitant increased understanding of the marine benthic environment.
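The γ3-based decision rule above can be sketched as a simple classifier: fit the spectral exponent by log-log regression, then apply the thresholds. Illustrative only; real spectra require windowing and uncertainty estimates before such a classification.

```python
import numpy as np

def classify_heterogeneity(k, P):
    """Fit the spectral exponent gamma3 of a heterogeneity power spectrum
    P(k) ~ k^(-gamma3) by log-log least squares, then apply the thresholds
    described in the abstract. Minimal sketch with a synthetic spectrum."""
    gamma3 = -np.polyfit(np.log(k), np.log(P), 1)[0]
    if gamma3 > 4:
        label = "continuum (cannot be discrete particles)"
    elif gamma3 <= 2:
        label = "likely discrete particles"
    else:
        label = "ambiguous: check physical parameter bounds"
    return gamma3, label

# Synthetic spectrum with a known exponent of 4.5
k = np.logspace(0.0, 2.0, 50)
g, label = classify_heterogeneity(k, k ** -4.5)
```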
NASA Astrophysics Data System (ADS)
Haspel, C.; Adler, G.
2017-04-01
In the current study, the electromagnetic properties of porous aerosol particles are calculated in two ways. In the first, a porous target input file is generated by carving out voids in an otherwise homogeneous particle, and the discrete dipole approximation (DDA) is used to compute the extinction efficiency of the particle assuming that the voids are near vacuum dielectrics and assuming random particle orientation. In the second, an effective medium approximation (EMA) style approach is employed in which an apparent polarizability of the voids is defined based on the well-known solution to the problem in classical electrostatics of a spherical cavity within a dielectric. It is found that for porous particles with smaller overall diameter with respect to the wavelength of incident radiation, describing the voids as near vacuum dielectrics within the DDA sufficiently reproduces measured values of extinction efficiency, whereas for porous particles with moderate to larger overall diameters with respect to the wavelength of the radiation, the apparent polarizability EMA approach better reproduces the measured values of extinction efficiency.
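For orientation, the best-known effective medium approximation is the Maxwell Garnett mixing rule for spherical inclusions, sketched below. This is the standard textbook formula, not necessarily the apparent-polarizability formulation derived in the study above.

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell Garnett effective permittivity for a volume fraction f of
    spherical inclusions (e.g., vacuum voids with eps_incl = 1) in a host
    medium. Standard mixing rule shown for illustration only."""
    # Clausius-Mossotti factor of a single spherical inclusion
    beta = (eps_incl - eps_host) / (eps_incl + 2.0 * eps_host)
    return eps_host * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

# 30% vacuum voids in a glass-like host (eps = 2.25)
eps_eff = maxwell_garnett(2.25, 1.0, 0.3)
```

The formula interpolates correctly between the limits f = 0 (pure host) and f = 1 (pure inclusion), which is a quick sanity check for any mixing rule.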
NASA Astrophysics Data System (ADS)
Soobiah, Y. I. J.; Espley, J. R.; Connerney, J. E. P.; Gruesbeck, J.; DiBraccio, G. A.; Schneider, N.; Jain, S.; Brain, D.; Andersson, L.; Halekas, J. S.; Lillis, R. J.; McFadden, J. P.; Mitchell, D. L.; Mazelle, C. X.; Deighan, J.; McClintock, W. E.; Ergun, R.; Jakosky, B. M.
2016-12-01
NASA's Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft has observed a variety of aurora at Mars and related processes that impact the escape of the Martian atmosphere. So far MAVEN's Imaging Ultraviolet Spectrograph (IUVS) instrument has observed 1) Diffuse aurora over widespread regions of Mars' northern hemisphere; 2) Discrete aurora that is spatially confined to localized patches around regions of crustal magnetic field; and 3) Proton aurora from the limb brightening of Lyman-α emission. MAVEN's Solar Energetic Particle (SEP) instrument has shown the diffuse aurora to be coincident with outbursts of solar energetic particles and disturbed solar wind and magnetospheric conditions. MAVEN's Particle and Fields Package (PFP) Solar Wind Ion Analyzer (SWIA) has shown the limb brightening of Lyman-α to correlate with increased upstream solar wind dynamic pressure, associated with increased penetrating protons. A conclusive explanation for the discrete aurora has yet to be determined. This study aims to explore the plasma processes related to discrete Martian aurora in greater detail by presenting an overview of PFP measurements during orbits when IUVS observed discrete aurora at Mars. Initial observations from orbit 1600 of MAVEN have shown the almost side-by-side occurrence of a crustal magnetic field associated current sheet measured by MAVEN's Magnetometer Investigation (MAG) near the Mars terminator and IUVS limb observations of discrete aurora in Mars' shadow (similar co-latitudes but separated by nearly 1800 km across longitude). This study includes further analysis of magnetic field current sheets and particle acceleration/energization to investigate the space plasma processes involved in discrete aurora at Mars.
Strong dynamics and lattice gauge theory
NASA Astrophysics Data System (ADS)
Schaich, David
In this dissertation I use lattice gauge theory to study models of electroweak symmetry breaking that involve new strong dynamics. Electroweak symmetry breaking (EWSB) is the process by which elementary particles acquire mass. First proposed in the 1960s, this process has been clearly established by experiments, and can now be considered a law of nature. However, the physics underlying EWSB is still unknown, and understanding it remains a central challenge in particle physics today. A natural possibility is that EWSB is driven by the dynamics of some new, strongly-interacting force. Strong interactions invalidate the standard analytical approach of perturbation theory, making these models difficult to study. Lattice gauge theory is the premier method for obtaining quantitatively-reliable, nonperturbative predictions from strongly-interacting theories. In this approach, we replace spacetime by a regular, finite grid of discrete sites connected by links. The fields and interactions described by the theory are likewise discretized, and defined on the lattice so that we recover the original theory in continuous spacetime on an infinitely large lattice with sites infinitesimally close together. The finite number of degrees of freedom in the discretized system lets us simulate the lattice theory using high-performance computing. Lattice gauge theory has long been applied to quantum chromodynamics, the theory of strong nuclear interactions. Using lattice gauge theory to study dynamical EWSB, as I do in this dissertation, is a new and exciting application of these methods. Of particular interest is non-perturbative lattice calculation of the electroweak S parameter. Experimentally S ≈ -0.15(10), which tightly constrains dynamical EWSB. On the lattice, I extract S from the momentum-dependence of vector and axial-vector current correlators. I created and applied computer programs to calculate these correlators and analyze them to determine S. 
I also calculated the masses and other properties of the new particles predicted by these theories. I find S ≳ 0.1 in the specific theories I study. Although this result still disagrees with experiment, it is much closer to the experimental value than is the conventional wisdom S ≳ 0.3. These results encourage further lattice studies to search for experimentally viable strongly-interacting theories of EWSB.
CDM: Teaching Discrete Mathematics to Computer Science Majors
ERIC Educational Resources Information Center
Sutner, Klaus
2005-01-01
CDM, for computational discrete mathematics, is a course that attempts to teach a number of topics in discrete mathematics to computer science majors. The course abandons the classical definition-theorem-proof model, and instead relies heavily on computation as a source of motivation and also for experimentation and illustration. The emphasis on…
A modified Stern-Gerlach experiment using a quantum two-state magnetic field
NASA Astrophysics Data System (ADS)
Daghigh, Ramin G.; Green, Michael D.; West, Christopher J.
2018-06-01
The Stern-Gerlach experiment has played an important role in our understanding of quantum behavior. We propose and analyze a modified version of this experiment where the magnetic field of the detector is in a quantum superposition, which may be experimentally realized using a superconducting flux qubit. We show that if incident spin-1/2 particles couple with the two-state magnetic field, a discrete target distribution results that resembles the distribution in the classical Stern-Gerlach experiment. As an application of the general result, we compute the distribution for a Gaussian waveform of the incident fermion. This analysis allows us to demonstrate theoretically: (1) the quantization of the intrinsic angular momentum of a spin-1/2 particle, and (2) a correlation between EPR pairs leading to nonlocality, without necessarily collapsing the particle's spin wavefunction.
Analysis of Gas-Particle Flows through Multi-Scale Simulations
NASA Astrophysics Data System (ADS)
Gu, Yile
Multi-scale structures are inherent in gas-solid flows, which renders modeling efforts challenging. On one hand, detailed simulations, in which the fine structures are resolved and particle properties can be directly specified, can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand far less computation but necessitate constitutive models that are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework, in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations, in which the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles for use in CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that they qualitatively capture experimental results in which the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects of fine structures and inter-particle cohesion.
We show that this neural-network approach, which can readily be applied to closures other than drag, takes advantage of the large amount of data generated from simulations and therefore offers superior modeling performance over traditional approaches.
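The closing idea, fitting a closure with a neural network on simulation data, can be sketched as follows; the two input features, the synthetic target function, and the single-hidden-layer network are stand-ins for illustration, not the closure obtained in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for filtered CFD-DEM data: inputs are a solid
# fraction and a filtered slip velocity; the target "drag correction" is
# an arbitrary smooth function, NOT the study's closure.
X = rng.uniform(0.0, 1.0, size=(256, 2))
y = (1.0 - X[:, :1]) ** 2 * np.tanh(2.0 * X[:, 1:])

# One-hidden-layer network trained by plain batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
loss_before = float(np.mean((pred - y) ** 2))

lr = 0.1
for _ in range(500):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)       # dL/dpred for the MSE loss
    gh = (g @ W2.T) * (1.0 - h ** 2)    # backpropagate through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

_, pred = forward(X)
loss_after = float(np.mean((pred - y) ** 2))
```

In practice the training pairs would come from filtered fine-grid CFD-DEM fields rather than a synthetic function.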
Discrete particle swarm optimization for identifying community structures in signed social networks.
Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng
2014-10-01
Modern network science has greatly advanced our understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm inspired by social behavior such as bird flocking and fish schooling. PSO has proved to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which confounds its application to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, the particles' status has been redesigned in discrete form so as to make PSO suitable for discrete scenarios, and the particles' updating rules have been reformulated to make use of the topology of the signed network. Extensive experiments against three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising.
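The core redesign, discrete particle status plus best-guided probabilistic updates, can be illustrated with a generic discrete PSO over bit vectors. The move probabilities and the mutation step below are our illustrative choices; the paper's update rules additionally exploit the signed-network topology:

```python
import random

def discrete_pso(objective, n_bits, n_particles=20, iters=100, seed=1):
    """Generic discrete PSO: positions are bit vectors and the continuous
    'velocity' is replaced by probabilistic moves toward the personal and
    global bests. A sketch of the general idea only."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for j in range(n_bits):
                r = rng.random()
                if r < 0.4:
                    p[j] = pbest[i][j]   # move toward personal best
                elif r < 0.8:
                    p[j] = gbest[j]      # move toward global best
                elif r < 0.85:
                    p[j] = 1 - p[j]      # random exploration (mutation)
            v = objective(p)
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = p[:], v
                if v > gbest_val:
                    gbest, gbest_val = p[:], v
    return gbest, gbest_val
```

For community detection, the objective would be a (signed) modularity score over candidate label vectors rather than the toy bit-counting objective used in the test.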
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches for solving discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey some technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible solution frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome the previously mentioned situations. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analog and RF examples, the necessity of implementing such a routine in continuous optimisation algorithms.
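The contrast between a posteriori rounding and dynamic rounding can be sketched as follows: instead of snapping every variable to its nearest grid point at once, each variable is fixed in turn to whichever neighbouring grid value gives the better objective. This is a sketch of the dynamic rounding-off idea only; the paper embeds it in a PSO loop with re-optimisation at each step:

```python
def dynamic_round(x, grids, objective):
    """Round a continuous solution x onto per-variable discrete grids one
    variable at a time (grids assumed sorted ascending), each time keeping
    whichever neighbouring grid value minimises the objective with the
    remaining variables still continuous."""
    x = list(x)
    for i, grid in enumerate(grids):
        below = max((g for g in grid if g <= x[i]), default=grid[0])
        above = min((g for g in grid if g >= x[i]), default=grid[-1])
        candidates = []
        for c in (below, above):
            trial = x[:]
            trial[i] = c
            candidates.append((objective(trial), c))
        x[i] = min(candidates)[1]  # keep the better neighbour (minimisation)
    return x
```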
Discreteness effects in a reacting system of particles with finite interaction radius.
Berti, S; López, C; Vergni, D; Vulpiani, A
2007-09-01
An autocatalytic reacting system with particles interacting at a finite distance is studied. We investigate the effects of the discrete-particle character of the model on properties like reaction rate, quenching phenomenon, and front propagation, focusing on differences with respect to the continuous case. We introduce a renormalized reaction rate depending both on the interaction radius and the particle density, and we relate it to macroscopic observables (e.g., front speed and front thickness) of the system.
NASA Astrophysics Data System (ADS)
Lu, Zheng; Lu, Xilin; Lu, Wensheng; Masri, Sami F.
2012-04-01
This paper presents a systematic experimental investigation of the effects of buffered particle dampers attached to a multi-degree-of-freedom (mdof) system under different dynamic loads (free vibration, random excitation, as well as real onsite earthquake excitations), together with an analytical/computational study of such a system. A series of shaking table tests of a three-storey steel frame with the buffered particle damper system are carried out to evaluate the performance and to verify the analysis method. It is shown that buffered particle dampers perform well in reducing the response of structures under dynamic loads, especially in the random excitation case. They can effectively control the fundamental mode of the mdof primary system; however, the control effect for higher modes is variable. It is also shown that, for a specific container geometry, a certain mass ratio leads to more efficient momentum transfer from the primary system to the particles, with a better vibration attenuation effect, and that buffered particle dampers have a better control effect than conventional rigid ones. An analytical solution based on the discrete element method is also presented. Comparison between the experimental and computational results shows that reasonably accurate estimates of the response of a primary system can be obtained. Properly designed buffered particle dampers can effectively reduce the response of a lightly damped mdof primary system with a small weight penalty, under different dynamic loads.
Fermion Systems in Discrete Space-Time Exemplifying the Spontaneous Generation of a Causal Structure
NASA Astrophysics Data System (ADS)
Diethert, A.; Finster, F.; Schiefeneder, D.
As toy models for space-time at the Planck scale, we consider examples of fermion systems in discrete space-time which are composed of one or two particles defined on two up to nine space-time points. We study the self-organization of the particles as described by a variational principle both analytically and numerically. We find an effect of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure.
NASA Technical Reports Server (NTRS)
Paul, J. T., Jr.; Buntin, G. A.
1982-01-01
Graphite (or carbon) fiber composite impact strength improvement was attempted by modifying the fiber surface. Elastomeric particles were made into latices and deposited ionically on surface-treated graphite fiber in an attempt to prepare a surface containing discrete rubber particles. With hard, nonelastomeric polystyrene, discrete particle coverage was achieved. All of the elastomer-containing latices resulted in elastomer flow and filament agglomeration during drying.
NASA Astrophysics Data System (ADS)
Morfa, Carlos Recarey; Cortés, Lucía Argüelles; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Valera, Roberto Roselló; Oñate, Eugenio
2018-07-01
A methodology that comprises several characterization properties for particle packings is proposed in this paper. The methodology takes into account factors such as dimension and shape of particles, space occupation, homogeneity, connectivity and isotropy, among others. This classification and integration of several properties allows to carry out a characterization process to systemically evaluate the particle packings in order to guarantee the quality of the initial meshes in discrete element simulations, in both the micro- and the macroscales. Several new properties were created, and improvements in existing ones are presented. Properties from other disciplines were adapted to be used in the evaluation of particle systems. The methodology allows to easily characterize media at the level of the microscale (continuous geometries—steels, rocks microstructures, etc., and discrete geometries) and the macroscale. A global, systemic and integral system for characterizing and evaluating particle sets, based on fuzzy logic, is presented. Such system allows researchers to have a unique evaluation criterion based on the aim of their research. Examples of applications are shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Po-Yen; Chen, Liu; Institute for Fusion Theory and Simulation, Zhejiang University, 310027 Hangzhou
2015-09-15
The thermal relaxation time of a one-dimensional plasma has been demonstrated to scale with N_D^2 due to discrete particle effects by collisionless particle-in-cell (PIC) simulations, where N_D is the particle number in a Debye length. The N_D^2 scaling is consistent with the theoretical analysis based on the Balescu-Lenard-Landau kinetic equation. However, it was found that the thermal relaxation time is anomalously shortened to scale with N_D when the Krook-type collision model is externally introduced in the one-dimensional electrostatic PIC simulation. In order to understand the discrete particle effects enhanced by the Krook-type collision model, the superposition principle of dressed test particles was applied to derive the modified Balescu-Lenard-Landau kinetic equation. The theoretical results are shown to be in good agreement with the simulation results when the collisional effects dominate the plasma system.
Effect of particle size distribution on the hydrodynamics of dense CFB risers
NASA Astrophysics Data System (ADS)
Bakshi, Akhilesh; Khanna, Samir; Venuturumilli, Raj; Altantzis, Christos; Ghoniem, Ahmed
2015-11-01
Circulating Fluidized Beds (CFB) are favored in the energy and chemical industries due to their high efficiency. While accurate hydrodynamic modeling is essential for optimizing performance, most CFB riser simulations assume equally-sized solid particles, owing to limited computational resources. Even though this approach yields reasonable predictions, it neglects commonly observed experimental findings suggesting a strong effect of particle size distribution (psd) on the hydrodynamics and chemical conversion. Thus, this study focuses on the inclusion of discrete particle sizes to represent the psd and its effect on fluidization via 2D numerical simulations. The particle sizes and corresponding mass fluxes are obtained using experimental data from a dense CFB riser, while the modeling framework is described in Bakshi et al. (2015). Simulations are conducted at two scales: (a) fine grid, to resolve heterogeneous structures, and (b) coarse grid, using EMMS sub-grid modifications. Using suitable metrics that capture bed dynamics, this study provides insights into the segregation and mixing of particles and highlights the need for improved sub-grid models.
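Representing a psd by a few discrete sizes, as in the study, requires lumping a measured distribution into classes. A minimal equal-mass binning sketch (the study instead takes the sizes and mass fluxes directly from experimental riser data):

```python
import numpy as np

def discretize_psd(d, mass_frac, n_classes):
    """Lump a measured particle size distribution (diameters d with mass
    fractions mass_frac) into n_classes equal-mass classes, returning the
    mass-weighted mean diameter and mass fraction of each class."""
    d = np.asarray(d, float)
    m = np.asarray(mass_frac, float)
    order = np.argsort(d)
    d, m = d[order], m[order]
    cum = np.cumsum(m) / m.sum()                 # cumulative mass fraction
    edges = np.linspace(0.0, 1.0, n_classes + 1)
    sizes, fracs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (cum > lo) & (cum <= hi + 1e-12)   # particles in this mass bin
        if sel.any():
            sizes.append(np.average(d[sel], weights=m[sel]))
            fracs.append(m[sel].sum() / m.sum())
    return np.array(sizes), np.array(fracs)
```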
Particle models for discrete element modeling of bulk grain properties of wheat kernels
USDA-ARS?s Scientific Manuscript database
Recent research has shown the potential of discrete element method (DEM) in simulating grain flow in bulk handling systems. Research has also revealed that simulation of grain flow with DEM requires establishment of appropriate particle models for each grain type. This research completes the three-p...
Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, X Q
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code, TEMPEST, in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method-of-lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D (ψ, θ, ε, μ) version of the TEMPEST code we compute the radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.
Frenning, Göran
2015-01-01
When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
• A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
• A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
• The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost.
PMID:26150975
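The linear-approximation-with-intermittent-exact-recomputation pattern described above can be sketched in one dimension; `exact` stands in for the expensive Voronoi polyhedron volume calculation, and the scalar state variable and tolerance are illustrative simplifications:

```python
class LazyVolume:
    """Track a slowly varying volume with a linear model, recomputing the
    exact value only when the predicted change since the last exact
    evaluation exceeds a relative tolerance."""

    def __init__(self, exact, x0, tol=1e-2, fd_step=1e-4):
        self.exact, self.tol, self.fd_step = exact, tol, fd_step
        self.n_exact = 0          # count of exact evaluations (bookkeeping)
        self._rebase(x0)

    def _rebase(self, x):
        self.x0 = x
        self.v0 = self.exact(x)
        # finite-difference slope for the linear model
        self.slope = (self.exact(x + self.fd_step) - self.v0) / self.fd_step
        self.n_exact += 2

    def volume(self, x):
        dv = self.slope * (x - self.x0)
        if abs(dv) > self.tol * abs(self.v0):
            self._rebase(x)       # intermittent exact recomputation
            return self.v0
        return self.v0 + dv       # cheap linear estimate
```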
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). 
Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
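The method of moments transports low-order moments of the grain-size distribution instead of N discrete phases; the key point is that bulk descriptors such as the mean and standard deviation are recoverable from those moments. A sketch with an assumed Gaussian psd in Krumbein phi units (not data from the paper):

```python
import numpy as np

phi = np.linspace(-4.0, 10.0, 2001)            # grain size, phi (Krumbein) scale
pdf = np.exp(-0.5 * ((phi - 2.0) / 1.5) ** 2)  # assumed size distribution
dphi = phi[1] - phi[0]
pdf /= (pdf * dphi).sum()                      # normalise to unit mass

def moment(k):
    """k-th moment of the distribution, approximated by a Riemann sum."""
    return (phi ** k * pdf * dphi).sum()

mean = moment(1) / moment(0)
std = (moment(2) / moment(0) - mean ** 2) ** 0.5
```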
Algebraic perturbation theory for dense liquids with discrete potentials
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2007-06-01
A simple theory for the leading-order correction g1(r) to the structure of a hard-sphere liquid with discrete (e.g., square-well) potential perturbations is proposed. The theory makes use of a general approximation that effectively eliminates four-particle correlations from g1(r) with good accuracy at high densities. For the particular case of discrete perturbations, the remaining three-particle correlations can be modeled with a simple volume-exclusion argument, resulting in an algebraic and surprisingly accurate expression for g1(r). The structure of a discrete “core-softened” model for liquids with anomalous thermodynamic properties is reproduced as an application.
NASA Astrophysics Data System (ADS)
Iveson, Simon M.
2003-06-01
Pietruszczak and coworkers (Internat. J. Numer. Anal. Methods Geomech. 1994; 18(2):93-105; Comput. Geotech. 1991; 12( ):55-71) have presented a continuum-based model for predicting the dynamic mechanical response of partially saturated granular media with viscous interstitial liquids. In their model they assume that the gas phase is distributed uniformly throughout the medium as discrete spherical air bubbles occupying the voids between the particles. However, their derivation of the air pressure inside these gas bubbles is inconsistent with their stated assumptions. In addition, the resultant dependence of gas pressure on liquid saturation lies outside the plausible range of possible values for discrete air bubbles. This results in an over-prediction of the average bulk modulus of the void phase. Corrected equations are presented.
'Extremotaxis': computing with a bacterial-inspired algorithm.
Nicolau, Dan V; Burrage, Kevin; Nicolau, Dan V; Maini, Philip K
2008-01-01
We present a general-purpose optimization algorithm inspired by "run-and-tumble", the biased random walk chemotactic swimming strategy used by the bacterium Escherichia coli to locate regions of high nutrient concentration. The method uses particles (corresponding to bacteria) that swim through the variable space (corresponding to the attractant concentration profile). By constantly performing temporal comparisons, the particles drift towards the minimum or maximum of the function of interest. We illustrate the use of our method with four examples. We also present a discrete version of the algorithm. The new algorithm is expected to be useful in combinatorial optimization problems involving many variables, where the functional landscape is apparently stochastic and has local minima, but preserves some derivative structure at intermediate scales.
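The run-and-tumble strategy described above can be sketched in two dimensions: keep the current heading while the function improves (run), pick a random new heading when it worsens (tumble). The step size and tumbling rule below are illustrative choices, not the paper's exact algorithm:

```python
import math
import random

def extremotaxis(f, x0, steps=5000, step_size=0.05, seed=0):
    """Minimise f over 2-D by a run-and-tumble biased random walk using
    only temporal comparisons of f (no gradients)."""
    rng = random.Random(seed)
    x = list(x0)
    theta = rng.uniform(0.0, 2.0 * math.pi)   # initial heading
    fprev = f(x)
    best, fbest = x[:], fprev
    for _ in range(steps):
        x[0] += step_size * math.cos(theta)   # run: advance along heading
        x[1] += step_size * math.sin(theta)
        fx = f(x)
        if fx >= fprev:                       # got worse: tumble
            theta = rng.uniform(0.0, 2.0 * math.pi)
        if fx < fbest:
            best, fbest = x[:], fx
        fprev = fx
    return best, fbest
```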
Comprehensive T-Matrix Reference Database: A 2012 - 2013 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2013-01-01
The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles imbedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz–Mie theory.
High-performance multiprocessor architecture for a 3-D lattice gas model
NASA Technical Reports Server (NTRS)
Lee, F.; Flynn, M.; Morf, M.
1991-01-01
The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.
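The two alternating phases at the core of a lattice gas update, local collision then propagation, can be shown with a minimal 2-D HPP model (one bit per site for each of four directions). This is an illustrative stand-in for the 24-bit FCHC model targeted by ALGE, not that model itself:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
# one occupancy bit-plane per direction: north, south, east, west
n, s, e, w = (rng.integers(0, 2, (N, N)).astype(bool) for _ in range(4))
mass0 = int(n.sum() + s.sum() + e.sum() + w.sum())

for _ in range(20):
    # collision phase: exactly-head-on pairs scatter into the other axis
    ns = n & s & ~e & ~w          # north-south pair, east-west empty
    ew = e & w & ~n & ~s          # east-west pair, north-south empty
    n, s = (n ^ ns) | ew, (s ^ ns) | ew
    e, w = (e ^ ew) | ns, (w ^ ew) | ns
    # propagation phase: each bit-plane shifts one site (periodic lattice)
    n = np.roll(n, -1, axis=0)
    s = np.roll(s, 1, axis=0)
    e = np.roll(e, 1, axis=1)
    w = np.roll(w, -1, axis=1)

mass = int(n.sum() + s.sum() + e.sum() + w.sum())
```

The collision rule conserves particle number and momentum at each site, which is why the total mass is invariant over the run.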
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Schweigert, Igor; Larentzos, James; Brennan, John
2015-06-01
In discrete particle simulations, when an atomistic model is coarse-grained, a trade-off is made: a boost in computational speed for a reduction in accuracy. Dissipative Particle Dynamics (DPD) methods help to recover accuracy in viscous and thermal properties, while giving back a small amount of computational speed. One of the most notable extensions of DPD has been the introduction of chemical reactivity, called DPD-RX. Today, pairing the current evolution of DPD-RX with a coarse-grained potential and its chemical decomposition reactions allows for the simulation of the shock behavior of energetic materials at a timescale faster than an atomistic counterpart. In 2007, Maillet et al. introduced implicit chemical reactivity in DPD through the concept of particle reactors and simulated the decomposition of liquid nitromethane. We have recently extended the DPD-RX method and have applied it to solid hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) under shock conditions using a recently developed single-site coarse-grain model and a reduced RDX decomposition mechanism. A description of the methods used to simulate RDX and its transition to hot product gases within DPD-RX will be presented. Additionally, examples of the effect of microstructure on shock behavior will be shown. Approved for public release. Distribution is unlimited.
3D Discrete element approach to the problem on abutment pressure in a gently dipping coal seam
NASA Astrophysics Data System (ADS)
Klishin, S. V.; Revuzhenko, A. F.
2017-09-01
Using the discrete element method, the authors have carried out a 3D implementation of the problem on strength loss in the surrounding rock mass in the vicinity of a production heading and on abutment pressure in a gently dipping coal seam. The calculation of forces at the contacts between particles accounts for friction, rolling resistance and viscosity. Between the discrete particles modeling the coal seam, surrounding rock mass and broken rocks, an elastic connecting element is introduced to allow simulating coherent materials. The paper presents the kinematic patterns of rock mass deformation, the stresses in particles and the graph of the abutment pressure behavior in the coal seam.
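A generic DEM contact law in the spirit of the force model described above can be sketched as a normal spring-dashpot with a Coulomb-capped tangential force. The constants are illustrative, and the paper's model additionally includes rolling resistance and the elastic bonds between particles:

```python
import numpy as np

def contact_force(xi, xj, vi, vj, radius, kn=1e4, cn=5.0, kt=5e3, mu=0.5):
    """Force on sphere i from contact with sphere j (equal radii):
    elastic + viscous normal force, tangential force capped by Coulomb
    friction. Constants kn, cn, kt, mu are illustrative."""
    rij = xj - xi
    dist = np.linalg.norm(rij)
    overlap = 2.0 * radius - dist
    if overlap <= 0.0:
        return np.zeros(3)                    # no contact
    nhat = rij / dist                         # unit normal, i -> j
    vrel = vj - vi
    vn = np.dot(vrel, nhat)
    fn = kn * overlap - cn * vn               # elastic + viscous normal force
    vt = vrel - vn * nhat                     # tangential slip velocity
    ft = -kt * vt                             # tangential (sticking) force
    fmax = mu * abs(fn)
    ftn = np.linalg.norm(ft)
    if ftn > fmax and ftn > 0.0:
        ft *= fmax / ftn                      # Coulomb sliding limit
    return -fn * nhat + ft                    # repulsion pushes i away from j
```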
Anderson, Kimberly R.; Anthony, T. Renée
2014-01-01
An understanding of how particles are inhaled into the human nose is important for developing samplers that measure biologically relevant estimates of exposure in the workplace. While previous computational mouth-breathing investigations of particle aspiration have been conducted in slow moving air, nose breathing had yet to be explored. Computational fluid dynamics was used to estimate nasal aspiration efficiency for an inhaling humanoid form in low velocity wind speeds (0.1–0.4 m s−1). Breathing was simplified as continuous inhalation through the nose. Fluid flow and particle trajectories were simulated over seven discrete orientations relative to the oncoming wind (0, 15, 30, 60, 90, 135, 180°). Sensitivities of the model simplifications and methods were assessed, particularly the placement of the recessed nostril surface and the size of the nose. Simulations identified higher aspiration (13% on average) when compared to published experimental wind tunnel data. Significant differences in aspiration were identified between nose geometries, with the smaller nose aspirating an average of 8.6% more than the larger nose. Differences in fluid flow solution methods accounted for 2% average differences, on the order of methodological uncertainty. Similar trends to mouth-breathing simulations were observed, including increasing aspiration efficiency with decreasing freestream velocity and decreasing aspiration with increasing rotation away from the oncoming wind. These models indicate nasal aspiration in slow moving air occurs only for particles <100 µm. PMID:24665111
A Conserving Discretization for the Free Boundary in a Two-Dimensional Stefan Problem
NASA Astrophysics Data System (ADS)
Segal, Guus; Vuik, Kees; Vermolen, Fred
1998-03-01
The dissolution of a disk-like Al2Cu particle is considered. A characteristic property is that initially the particle has a nonsmooth boundary. The mathematical model of this dissolution process contains a description of the particle interface, of which the position varies in time. Such a model is called a Stefan problem. It is impossible to obtain an analytical solution for a general two-dimensional Stefan problem, so we use the finite element method to solve this problem numerically. First, we apply a classical moving mesh method. Computations show that after some time steps the predicted particle interface becomes very unrealistic. Therefore, we derive a new method for the displacement of the free boundary based on the balance of atoms. This method leads to good results, also for nonsmooth boundaries. Some numerical experiments are given for the dissolution of an Al2Cu particle in an Al-Cu alloy.
Real time visualization of quantum walk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyazaki, Akihide; Hamada, Shinji; Sekino, Hideo
2014-02-20
Time evolution of quantum particles like electrons is described by the time-dependent Schrödinger equation (TDSE). The TDSE can be regarded as a diffusion equation for electrons with an imaginary diffusion coefficient, and it is solved by a quantum walk (QW), which is regarded as a quantum version of the classical random walk. The diffusion equation is solved in discretized space/time as in the case of a classical random walk, with an additional unitary transformation of the internal degree of freedom typical for quantum particles. We call the QW for solution of the TDSE a Schrödinger walk (SW). For observation of one quantum particle evolving under a given potential on the atto-second scale, we attempt a successive computation and visualization of the SW. Using Pure Data programming, we observe the correct behavior of a probability distribution under the given potential in real time for observers on the atto-second scale.
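A minimal illustration of the discrete-time quantum walk idea (a plain 1D Hadamard walk with no potential term; the paper's Schrödinger walk extends this with potential-dependent phases) can be written in a few lines. The lattice size, step count, and initial coin state below are arbitrary choices:

```python
import numpy as np

def qw_step(psi):
    """One step of a 1D Hadamard discrete-time quantum walk.

    psi has shape (N, 2): the amplitude at each site for coin states 0/1.
    A step is a Hadamard coin toss on the internal state followed by a
    coin-conditioned shift (coin 0 moves left, coin 1 moves right).
    """
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = psi @ H.T                      # apply the coin at every site
    out = np.empty_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)   # coin 0 shifts one site left
    out[:, 1] = np.roll(psi[:, 1], 1)    # coin 1 shifts one site right
    return out

N, steps = 201, 50
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin
for _ in range(steps):
    psi = qw_step(psi)
prob = (np.abs(psi) ** 2).sum(axis=1)            # position distribution
```

Because the coin and shift are unitary, the total probability stays 1, and the walk spreads at most one site per step, ballistically rather than diffusively.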
Numerical simulation of a full-loop circulating fluidized bed under different operating conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yupeng; Musser, Jordan M.; Li, Tingwen
Both experimental and computational studies of the fluidization of high-density polyethylene (HDPE) particles in a small-scale full-loop circulating fluidized bed are conducted. Experimental measurements of pressure drop are taken at different locations along the bed. The solids circulation rate is measured with an advanced Particle Image Velocimetry (PIV) technique. The bed height of the quasi-static region in the standpipe is also measured. Comparative numerical simulations are performed with a Computational Fluid Dynamics solver utilizing a Discrete Element Method (CFD-DEM). This paper reports a detailed and direct comparison between CFD-DEM results and experimental data for realistic gas-solid fluidization in a full-loop circulating fluidized bed system. The comparison reveals good agreement with respect to system component pressure drop and inventory height in the standpipe. In addition, the effect of different drag laws applied within the CFD simulation is examined and compared with experimental results.
NASA Astrophysics Data System (ADS)
Eriçok, Ozan Burak; Ertürk, Hakan
2018-07-01
Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a lower size limit of reliable characterization that depends on the wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: collections of well-separated aggregates made up of identically sized particles, and collections with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that over the 266-1064 nm wavelength range, the successful characterization limit varies from 21 to 62 nm effective radius for monodisperse and polydisperse soot aggregates.
NASA Astrophysics Data System (ADS)
Wright, Robyn; Thornberg, Steven M.
SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is understood easily by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi, and Phi data.
Novel Discrete Element Method for 3D non-spherical granular particles.
NASA Astrophysics Data System (ADS)
Seelen, Luuk; Padding, Johan; Kuipers, Hans
2015-11-01
Granular materials are common in many industries and in nature. Their different properties, from solid behavior to fluid-like behavior, are well known but less well understood. The main aim of our work is to develop a discrete element method (DEM) to simulate non-spherical granular particles. The non-spherical shape of particles is important, as it controls the behavior of the granular materials in many situations, such as static systems of packed particles. In such systems the packing fraction is determined by the particle shape. We developed a novel 3D discrete element method that simulates the particle-particle interactions for a wide variety of shapes. The model can simulate quadratic shapes such as spheres, ellipsoids and cylinders. More importantly, any convex polyhedron can be used as a granular particle shape. These polyhedrons are very well suited to represent non-rounded sand particles. The main difficulty of any non-spherical DEM is the determination of particle-particle overlap. Our model uses two iterative geometric algorithms to determine the overlap. The algorithms are robust and can also determine multiple contact points, which can occur for these shapes. With this method we are able to study different applications, such as the discharging of a hopper or silo. Another application is the creation of a random close packing, to determine the solid volume fraction as a function of the particle shape.
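As a simplified illustration of the overlap-detection problem, here is a separating-axis test for convex polygons, the planar analogue of the polyhedron case. The paper's actual algorithms are iterative, work in 3D, and also return contact points; this sketch only answers the yes/no overlap question:

```python
import numpy as np

def convex_polygons_overlap(P, Q):
    """Separating-axis test for two convex 2D polygons.

    P and Q are (n, 2) arrays of vertices in order. Two convex polygons
    are disjoint iff the normal of some edge of either polygon separates
    the projections of their vertex sets; otherwise they overlap.
    """
    for poly in (P, Q):
        n = len(poly)
        for i in range(n):
            edge = poly[(i + 1) % n] - poly[i]
            axis = np.array([-edge[1], edge[0]])  # edge normal (sign irrelevant)
            p_proj = P @ axis
            q_proj = Q @ axis
            if p_proj.max() < q_proj.min() or q_proj.max() < p_proj.min():
                return False  # found a separating axis
    return True
```

Two unit squares shifted by half a side overlap; shifted by three sides they do not.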
Cortical Neural Computation by Discrete Results Hypothesis
Castejon, Carlos; Nuñez, Angel
2016-01-01
One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” are the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis.
We propose that fast-spiking (FS) interneurons may be a key element in our hypothesis, providing the basis for this computation. PMID:27807408
NASA Astrophysics Data System (ADS)
Shirsath, Sushil; Padding, Johan; Clercx, Herman; Kuipers, Hans
2013-11-01
In blast furnaces operated in the steel industry, particles like coke, sinter and pellets enter from a hopper and are distributed on the burden surface by a rotating chute. Such particulate flows occasionally suffer from particle segregation in the chute, which hinders efficient throughflow. To obtain a more fundamental insight into these effects, the flow of monodisperse particles through a rotating chute inclined at a fixed angle has been studied both with experiments and with a discrete particle model. We observe that the prevailing flow patterns depend strongly on the rotation rate of the chute. With increasing rotation rate the particles move increasingly to the side wall. The streamwise particle velocity is slightly reduced in the first half length of the chute due to the Coriolis force, but strongly increased in the second half due to the centrifugal forces. The particle bed height becomes a two-dimensional function of the position inside the chute, with a strong increase in bed height along the sidewall due to the Coriolis forces. It was found that the DPM model agreed well with the experimental measurements. We will also discuss ongoing work, in which we investigate the effects of binary particle mixtures with different particle sizes or densities, and of different chute geometries.
Extinction efficiencies from DDA calculations solved for finite circular cylinders and disks
NASA Technical Reports Server (NTRS)
Withrow, J. R.; Cox, S. K.
1993-01-01
One of the most commonly noted uncertainties with respect to the modeling of cirrus clouds and their effect upon the planetary radiation balance is the disputed validity of the use of Mie scattering results as an approximation to the scattering results of the hexagonal plates and columns found in cirrus clouds. This approximation has historically been a kind of default, a result of the lack of an appropriate analytical solution of Maxwell's equations for particles other than infinite cylinders and spheroids. Recently, however, the use of such approximate techniques as the Discrete Dipole Approximation has made scattering solutions on such particles a computationally intensive but feasible possibility. In this study, the Discrete Dipole Approximation (DDA) developed by Flatau (1992) is used to find such solutions for homogeneous, circular cylinders and disks. This can serve not only to assess the validity of the current radiative transfer schemes which are available for the study of cirrus but also to extend the current approximation of equivalent spheres to an approximation of second order, homogeneous finite circular cylinders and disks. The results will be presented in the form of a single variable, the extinction efficiency.
Variational formulation of macroparticle models for electromagnetic plasma simulations
Stamm, Alexander B.; Shadwick, Bradley A.; Evstatiev, Evstati G.
2014-06-01
A variational method is used to derive a self-consistent macroparticle model for relativistic electromagnetic kinetic plasma simulations. Extending earlier work, discretization of the electromagnetic Low Lagrangian is performed via a reduction of the phase-space distribution function onto a collection of finite-sized macroparticles of arbitrary shape and discretization of field quantities onto a spatial grid. This approach may be used with lab frame coordinates or moving window coordinates; the latter can greatly improve computational efficiency for studying some types of laser-plasma interactions. The primary advantage of the variational approach is the preservation of Lagrangian symmetries, which in our case leads to energy conservation and thus avoids difficulties with grid heating. In addition, this approach decouples particle size from grid spacing and relaxes restrictions on particle shape, leading to low numerical noise. The variational approach also guarantees consistent approximations in the equations of motion and is amenable to higher order methods in both space and time. We restrict our attention to the 1.5-D case (one coordinate and two momenta). Lastly, simulations are performed with the new models and demonstrate energy conservation and low noise.
Repelling, binding, and oscillating of two-particle discrete-time quantum walks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qinghao; Li, Zhi-Jian, E-mail: zjli@sxu.edu.cn
In this paper, we investigate the effects of particle–particle interaction and static force on the propagation of probability distribution in two-particle discrete-time quantum walk, where the interaction and static force are expressed as a collision phase and a linear position-dependent phase, respectively. It is found that the interaction can lead to boson repelling and fermion binding. The static force also induces Bloch oscillation and results in a continuous transition from boson bunching to fermion anti-bunching. The interplays of particle–particle interaction, quantum interference, and Bloch oscillation provide a versatile framework to study and simulate many-particle physics via quantum walks.
Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.
2005-01-01
A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
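The polynomial-evaluation viewpoint can be illustrated with Clenshaw's recurrence, the comparison algorithm named above (Bakhvalov's algorithm fills the same role and is not reproduced here): the DCT-I of x equals a Chebyshev series, with endpoint-halved coefficients, evaluated at the points cos(πk/(N-1)). A sketch, not the paper's implementation:

```python
import numpy as np

def dct1_direct(x):
    """DCT-I by the defining sum, with endpoint samples weighted by 1/2."""
    N = len(x)
    w = np.ones(N)
    w[0] = w[-1] = 0.5
    k = n = np.arange(N)
    C = np.cos(np.pi * np.outer(k, n) / (N - 1))
    return C @ (w * x)

def chebyshev_clenshaw(c, t):
    """Evaluate sum_n c[n] * T_n(t) by Clenshaw's recurrence."""
    b1 = b2 = 0.0
    for cn in c[:0:-1]:              # c[M], ..., c[1]
        b1, b2 = cn + 2 * t * b1 - b2, b1
    return c[0] + t * b1 - b2

def dct1_clenshaw(x):
    """DCT-I as polynomial evaluation: X_k is the Chebyshev series with
    endpoint-halved coefficients evaluated at t_k = cos(pi*k/(N-1))."""
    N = len(x)
    c = np.asarray(x, dtype=float).copy()
    c[0] *= 0.5
    c[-1] *= 0.5
    t = np.cos(np.pi * np.arange(N) / (N - 1))
    return np.array([chebyshev_clenshaw(c, tk) for tk in t])
```

Both routes agree, and with this weighting the DCT-I is its own inverse up to the factor 2/(N-1).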
Combustion and flow modelling applied to the OMV VTE
NASA Technical Reports Server (NTRS)
Larosiliere, Louis M.; Jeng, San-Mou
1990-01-01
A predictive tool for hypergolic bipropellant spray combustion and flow evolution in the OMV VTE (orbital maneuvering vehicle variable thrust engine) is described. It encompasses a computational technique for the gas phase governing equations, a discrete particle method for liquid bipropellant sprays, and constitutive models for combustion chemistry, interphase exchanges, and unlike impinging liquid hypergolic stream interactions. Emphasis is placed on the phenomenological modelling of the hypergolic liquid bipropellant gasification processes. An application to the OMV VTE combustion chamber is given in order to show some of the capabilities and inadequacies of this tool.
Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming
2016-01-01
Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, the conventional PSO is insufficient to solve discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario. The particle updating rule is also reconsidered based on the subnetwork topology of MWSNs. Besides, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to find a better position quickly. Furthermore, search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption. PMID:27428971
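For reference, the standard discrete (binary) PSO that GMDPSO builds on can be sketched as follows. This is the generic Kennedy-Eberhart sigmoid variant on a toy bit-counting fitness, not the paper's routing-specific position/velocity encoding or its greedy and memory extensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pso(fitness, dim, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Generic binary PSO: velocities are real-valued, and a sigmoid of
    each velocity component gives the probability of that bit being 1."""
    X = rng.integers(0, 2, (n_particles, dim))
    V = rng.normal(0, 1, (n_particles, dim))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    g = pbest[pbest_f.argmax()].copy()
    g_f = pbest_f.max()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved] = X[improved]
        pbest_f[improved] = f[improved]
        if f.max() > g_f:
            g_f = f.max()
            g = X[f.argmax()].copy()
    return g, g_f

# Toy fitness: count of set bits (the optimum is the all-ones string).
best, best_f = binary_pso(lambda x: x.sum(), dim=12)
```

On this easy fitness the swarm quickly drives the global best toward the all-ones string.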
Stable discrete representation of relativistically drifting plasmas
Kirchen, M.; Lehe, R.; Godfrey, B. B.; ...
2016-10-10
Representing the electrodynamics of relativistically drifting particle ensembles in discrete, co-propagating Galilean coordinates enables the derivation of a Particle-In-Cell algorithm that is intrinsically free of the numerical Cherenkov instability for plasmas flowing at a uniform velocity. Application of the method is shown by modeling plasma accelerators in a Lorentz-transformed optimal frame of reference.
Statistically Based Morphodynamic Modeling of Tracer Slowdown
NASA Astrophysics Data System (ADS)
Borhani, S.; Ghasemi, A.; Hill, K. M.; Viparelli, E.
2017-12-01
Tracer particles are used to study bedload transport in gravel-bed rivers. One of the advantages of using tracer particles is that they allow for direct measures of the entrainment rates and their size distributions. The main issue in large scale studies with tracer particles is the difference between the short term and long term behavior of the tracer stones. This difference is due to the fact that particles undergo vertical mixing or move to less active locations such as bars or even floodplains. For these reasons the average virtual velocity of tracer particles decreases in time, i.e. the tracer slowdown. Tracer slowdown can have a significant impact on the estimation of bedload transport rates or the long term dispersal of contaminated sediment. The vast majority of the morphodynamic models that account for the non-uniformity of the bed material (tracer and non-tracer, in this case) are based on a discrete description of the alluvial deposit. The deposit is divided into two regions: the active layer and the substrate. The active layer is a thin layer in the topmost part of the deposit whose particles can interact with the bed material transport. The substrate is the part of the deposit below the active layer. Due to the discrete representation of the alluvial deposit, active layer models are not able to reproduce tracer slowdown. In this study we model the slowdown of tracer particles with the continuous Parker-Paola-Leclair morphodynamic framework. This continuous, i.e. not layer-based, framework is based on a stochastic description of the temporal variation of bed surface elevation, and of the elevation-specific particle entrainment and deposition. Particle entrainment rates are computed as a function of the flow and sediment characteristics, while particle deposition is estimated with a step length formulation.
Here we present one of the first implementation of the continuum framework at laboratory scale, its validation against laboratory data and then we attempt to use the validated model to describe the tracer long-term slowdown.
NASA Technical Reports Server (NTRS)
Johnson, B. T.; Olson, W. S.; Skofronick-Jackson, G.
2016-01-01
A simplified approach is presented for assessing the microwave response to the initial melting of realistically shaped ice particles. This paper is divided into two parts: (1) a description of the Single Particle Melting Model (SPMM), a heuristic melting simulation for ice-phase precipitation particles of any shape or size (SPMM is applied to two simulated aggregate snow particles, simulating melting up to 0.15 melt fraction by mass), and (2) the computation of the single-particle microwave scattering and extinction properties of these hydrometeors, using the discrete dipole approximation (via DDSCAT), at the following selected frequencies: 13.4, 35.6, and 94.0 GHz for radar applications and 89, 165.0, and 183.31 GHz for radiometer applications. These selected frequencies are consistent with current microwave remote-sensing platforms, such as CloudSat and the Global Precipitation Measurement (GPM) mission. Comparisons with calculations using variable-density spheres indicate significant deviations in scattering and extinction properties throughout the initial range of melting (liquid volume fractions less than 0.15). Integration of the single-particle properties over an exponential particle size distribution provides additional insight into idealized radar reflectivity and passive microwave brightness temperature sensitivity to variations in size/mass, shape, melt fraction, and particle orientation.
1990-06-01
The objective of this thesis research is to create a tutorial for teaching aspects of undirected graphs in discrete math. It is one of the submodules...of the Discrete Math Tutorial (DMT), which is a Computer Aided Instructional (CAI) tool for teaching discrete math to the Naval Academy and the
Convergence studies in meshfree peridynamic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seleson, Pablo; Littlewood, David J.
2016-04-15
Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions, when using the proposed methods.
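The partial-volume idea can be illustrated in one dimension: a neighbor cell that straddles the horizon boundary should contribute only the covered fraction of its volume to the internal force density. A toy sketch on a uniform 1D grid (the paper computes the analogous cell-neighborhood intersection volumes, not this simplified formula):

```python
def partial_volume_weight(x_center, dx, horizon):
    """Fraction of the 1D cell [x_center - dx/2, x_center + dx/2] lying
    inside the horizon (-horizon, horizon) of a node at the origin.

    Fully interior cells get weight 1, exterior cells 0, and cells that
    straddle the horizon boundary get the covered fraction in between.
    """
    inner = abs(x_center) - dx / 2      # cell edge nearest to the node
    covered = horizon - inner           # covered length inside the horizon
    return max(0.0, min(1.0, covered / dx))
```

With dx = 1 and horizon = 2.25, the cell centered at x = 2 extends from 1.5 to 2.5 and only 75% of it lies inside the horizon, so it receives weight 0.75 instead of the crude all-or-nothing 1.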
a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
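The transport step underlying ensemble-transform methods can be illustrated with a small discrete optimal transport problem posed as a linear program. This is a minimal analogue only: the paper's AET solves a sequence of such LPs with matrix-scaling accelerations and derivative-free particle moves, none of which is reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(a, b, C):
    """Optimal transport plan between discrete weights a (n,) and b (m,)
    under cost matrix C (n, m): minimize <C, P> subject to the plan's
    row sums equaling a and column sums equaling b, with P >= 0."""
    n, m = C.shape
    # Row-sum constraints: sum_j P[i, j] = a[i]
    A_rows = np.zeros((n, n * m))
    for i in range(n):
        A_rows[i, i * m:(i + 1) * m] = 1
    # Column-sum constraints: sum_i P[i, j] = b[j]
    A_cols = np.zeros((m, n * m))
    for j in range(m):
        A_cols[j, j::m] = 1
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m)
```

For two equally weighted samples and a cost matrix that penalizes cross-assignments, the optimal plan is diagonal: each prior sample keeps its own mass.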
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm's global search ability and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution achieves significantly better performance than other algorithms. PMID:22163971
Experimental and Computational Study of Multiphase Flow Hydrodynamics in 2D Trickle Bed Reactors
NASA Astrophysics Data System (ADS)
Nadeem, H.; Ben Salem, I.; Kurnia, J. C.; Rabbani, S.; Shamim, T.; Sassi, M.
2014-12-01
Trickle bed reactors are largely used in the refining processes. Co-current heavy oil and hydrogen gas flow downward over a catalytic particle bed. Fine particles in the heavy oil and/or soot formed by the exothermic catalytic reactions deposit on the bed and clog the flow channels. This work is funded by the refining company of Abu Dhabi and aims at mitigating pressure buildup due to fine deposition in the TBR. In this work, we focus on meso-scale experimental and computational investigations of the interplay between flow regimes and the various parameters that affect them. A 2D experimental apparatus has been built to investigate the flow regimes with an average pore diameter close to the values encountered in trickle beds. A parametric study is done for the development of flow regimes and the transition between them when the geometry and arrangement of the particles within the porous medium are varied. Liquid and gas flow velocities have also been varied to capture the different flow regimes. Real time images of the multiphase flow are captured using a high speed camera, which were then used to characterize the transition between the different flow regimes. A diffused light source was used behind the 2D trickle bed reactor to enhance visualizations. Experimental data show very good agreement with the published literature. The computational study focuses on the hydrodynamics of multiphase flow and on identifying the flow regime developed inside TBRs using the ANSYS Fluent software package. Multiphase flow inside TBRs is investigated using the "discrete particle" approach together with Volume of Fluid (VoF) multiphase flow modeling. The effects of the bed particle diameter, spacing, and arrangement are presented, which may be used to provide guidelines for designing trickle bed reactors.
Optical potential from first principles
Rotureau, J.; Danielewicz, P.; Hagen, G.; ...
2017-02-15
Here, we develop a method to construct a microscopic optical potential from chiral interactions for nucleon-nucleus scattering. The optical potential is constructed by combining the Green's function approach with the coupled-cluster method. To deal with the poles of the Green's function along the real energy axis, we employ a Berggren basis in the complex energy plane combined with the Lanczos method. Using this approach, we perform a proof-of-principle calculation of the optical potential for elastic neutron scattering on 16O. For the computation of the ground state of 16O, we use the coupled-cluster method in the singles-and-doubles approximation, while for the A ± 1 nuclei we use the particle-attached/removed equation-of-motion method truncated at two-particle-one-hole and one-particle-two-hole excitations, respectively. We verify the convergence of the optical potential and scattering phase shifts with respect to the model-space size and the number of discretized complex continuum states. We also investigate the absorptive component of the optical potential (which reflects the opening of inelastic channels) by computing its imaginary volume integral, and find an almost negligible absorptive component at low energies. To shed light on this result, we computed excited states of 16O using the equation-of-motion coupled-cluster method with singles-and-doubles excitations and found no low-lying excited states below 10 MeV. Furthermore, most excited states have a dominant two-particle-two-hole component, making higher-order particle-hole excitations necessary to achieve a precise description of these core-excited states. We conclude that the reduced absorption at low energies can be attributed to the lack of correlations coming from the low-order cluster truncation in the employed coupled-cluster method.
Iterons, fractals and computations of automata
NASA Astrophysics Data System (ADS)
Siwak, Paweł
1999-03-01
Processing of strings by some automata, when viewed on space-time (ST) diagrams, reveals characteristic soliton-like coherent periodic objects. They are inherently associated with iterations of automata mappings, thus we call them iterons. In the paper we present two classes of one-dimensional iterons: particles and filtrons. The particles are typical for parallel (cellular) processing, while filtrons, introduced in (32), are specific to serial processing of strings. In general, the images of iterated automata mappings exhibit not only coherent entities but also fractals, and quasi-periodic and chaotic dynamics. We show typical images of such computations: fractals, multiplication by a number, and addition of binary numbers defined by a Turing machine. Then, the particles are presented as iterons generated by cellular automata in three computations: B/U code conversion (13, 29), majority classification (9), and a discrete version of the FPU (Fermi-Pasta-Ulam) dynamics (7, 23). We disclose particles by a technique of combinational recoding of ST diagrams (as opposed to sequential recoding). Subsequently, we recall the recursive filters based on FCA (filter cellular automata) window operators, considered by Park (26), Ablowitz (1), Fokas (11), Fuchssteiner (12), Bruschi (5) and Jiang (20). We present the automata equivalents to these filters (33). Some of them belong to the class of filter automata introduced in (30). We also define and illustrate some properties of filtrons. Contrary to particles, filtrons interact nonlocally in the sense that distant symbols may influence one another. Thus their interactions are very unusual. Some examples have been given in (32). Here we show new examples of filtron phenomena: multifiltron solitonic collisions, attracting and repelling filtrons, trapped bouncing filtrons (which behave like a resonance cavity) and quasi filtrons.
Biomolecular Assembly of Gold Nanocrystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micheel, Christine Marya
2005-05-20
Over the past ten years, methods have been developed to construct discrete nanostructures using nanocrystals and biomolecules. While these frequently consist of gold nanocrystals and DNA, semiconductor nanocrystals as well as antibodies and enzymes have also been used. One example of discrete nanostructures is dimers of gold nanocrystals linked together with complementary DNA. This type of nanostructure is also known as a nanocrystal molecule. Discrete nanostructures of this kind have a number of potential applications, from highly parallel self-assembly of electronics components and rapid read-out of DNA computations to biological imaging and a variety of bioassays. My research focused on three main areas. The first area, the refinement of electrophoresis as a purification and characterization method, included application of agarose gel electrophoresis to the purification of discrete gold nanocrystal/DNA conjugates and nanocrystal molecules, as well as development of a more detailed understanding of the hydrodynamic behavior of these materials in gels. The second area, the development of methods for quantitative analysis of transmission electron microscope data, used computer programs written to find pair correlations as well as higher-order correlations. With these programs, it is possible to reliably locate and measure nanocrystal molecules in TEM images. The final area of research explored the use of DNA ligase in the formation of nanocrystal molecules. The synthesis of gold-particle dimers linked by a single strand of DNA, made possible by DNA ligase, opens the possibility of amplifying nanostructures in a manner similar to the polymerase chain reaction. These three areas are discussed in the context of the work in the Alivisatos group, as well as the field as a whole.
Pion-less effective field theory for real and lattice nuclei
NASA Astrophysics Data System (ADS)
Bansal, Aaina; Binder, Sven; Ekström, Andreas; Hagen, Gaute; Papenbrock, Thomas
2017-09-01
We compute the medium-heavy nuclei 16O and 40Ca using pion-less effective field theory (EFT) at leading order (LO) and next-to-leading order (NLO). The low-energy coefficients of the EFT Hamiltonian are adjusted to experimental data for A = 2, 3 nuclei, or alternatively to lattice QCD data at the unphysical pion mass mπ = 806 MeV. The EFT is implemented through a discrete variable representation of the finite harmonic oscillator basis. This approach ensures rapid convergence with respect to the size of the model space and allows us to compute heavier atomic and lattice nuclei. The atomic nuclei 16O and 40Ca are bound with respect to decay into alpha particles at NLO, but not at LO.
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods for solving reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in dilute systems may be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in a loss of accuracy, but may also lead to an improper reproduction of the diffusion-limited mixing process. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that recovers the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the locations of the molecules represented by that particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the deviations observed in the reaction-versus-time curves of numerical experiments reported in the literature can be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process.
Our results show that KDEs are powerful tools for improving computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in dilute systems should be modeled with alternative mechanistic models rather than with a limited number of particles.
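As a rough illustration of the kernel-combination idea described in this abstract, the sketch below assumes Gaussian kernels in 1D (the paper does not specify the kernel family here, and the function name is hypothetical). The product of two Gaussian kernels of bandwidth h integrates to a Gaussian of variance 2h², giving a co-location weight for a particle pair:

```python
import numpy as np

def reaction_weight(xa, xb, h):
    """Co-location density of two particles, each represented by a
    Gaussian kernel of bandwidth h (1D, illustrative assumption).
    The integral of the product of two Gaussian kernels centered at
    xa and xb is a Gaussian in the separation d = xa - xb with
    variance 2*h**2."""
    d = xa - xb
    return np.exp(-d**2 / (4.0 * h**2)) / np.sqrt(4.0 * np.pi * h**2)
```

The weight is symmetric in the two particles and decays with separation; in a full scheme it would multiply a kinetic rate and the molecular masses carried by each particle to yield a reaction probability per time step.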
Higher-order adaptive finite-element methods for Kohn–Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motamarri, P.; Nowak, M.R.; Leiter, K.
2013-11-15
We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close-to-optimal finite-element discretization of the problem. We further propose an efficient solution strategy for the discrete eigenvalue problem that uses spectral finite-elements in conjunction with Gauss–Lobatto quadrature and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings, of the order of 1000-fold relative to linear finite-elements, can be realized for both all-electron and local pseudopotential calculations by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond sixth order for accuracies commensurate with chemical accuracy, suggesting that hexic spectral elements may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of computational efficiency suggests that the higher-order finite-element basis is competitive with plane-wave discretizations for non-periodic local pseudopotential calculations, and comes within an order of magnitude of Gaussian bases for all-electron calculations.
Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, with good scalability of the present implementation up to 192 processors.
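The Chebyshev acceleration mentioned above can be sketched with the standard three-term filter recurrence: a degree-m Chebyshev polynomial of the Hamiltonian damps eigencomponents in the unwanted interval [a, b] and amplifies those below a. This is the textbook Chebyshev-filtered subspace idea, not the authors' implementation; the function name and dense-matrix interface are illustrative:

```python
import numpy as np

def chebyshev_filter(H, X, m, a, b):
    """Apply a degree-m Chebyshev polynomial of the symmetric matrix H
    to the block of vectors X.  The unwanted spectrum [a, b] is mapped
    to [-1, 1], where Chebyshev polynomials stay bounded, while
    components below a are amplified exponentially in m."""
    e = (b - a) / 2.0          # half-width of the damped interval
    c = (b + a) / 2.0          # center of the damped interval
    Y = (H @ X - c * X) / e    # T_1 of the shifted/scaled operator
    for _ in range(2, m + 1):
        Ynew = 2.0 * (H @ Y - c * Y) / e - X   # T_k = 2t*T_{k-1} - T_{k-2}
        X, Y = Y, Ynew
    return Y
```

In a full eigensolver, the filtered block would be re-orthonormalized and the Rayleigh–Ritz step repeated; the filter replaces the solution of a generalized eigenvalue problem at each self-consistency iteration.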
PAM: Particle automata model in simulation of Fusarium graminearum pathogen expansion.
Wcisło, Rafał; Miller, S Shea; Dzwinel, Witold
2016-01-21
The multi-scale nature and inherent complexity of biological systems are a great challenge for computer modeling and classical modeling paradigms. We present a novel particle automata modeling metaphor in the context of developing a 3D model of Fusarium graminearum infection in wheat. The system consisting of the host plant and Fusarium pathogen cells can be represented by an ensemble of discrete particles defined by a set of attributes. The cells-particles can interact with each other mimicking mechanical resistance of the cell walls and cell coalescence. The particles can move, while some of their attributes can be changed according to prescribed rules. The rules can represent cellular scales of a complex system, while the integrated particle automata model (PAM) simulates its overall multi-scale behavior. We show that due to the ability of mimicking mechanical interactions of Fusarium tip cells with the host tissue, the model is able to simulate realistic penetration properties of the colonization process reproducing both vertical and lateral Fusarium invasion scenarios. The comparison of simulation results with micrographs from laboratory experiments shows encouraging qualitative agreement between the two. Copyright © 2015 Elsevier Ltd. All rights reserved.
Transport and deposition of cohesive pharmaceutical powders in human airway
NASA Astrophysics Data System (ADS)
Wang, Yuan; Chu, Kaiwei; Yu, Aibing
2017-06-01
Pharmaceutical powders used in inhalation therapy are in the size range of 1-5 microns and are usually cohesive. Understanding the cohesive behaviour of pharmaceutical powders during their transport in the human airway is important for optimising aerosol drug delivery and targeting. In this study, the transport and deposition of cohesive pharmaceutical powders in a human airway model are simulated by a well-established numerical model which combines computational fluid dynamics (CFD) and the discrete element method (DEM). The van der Waals force, as the dominant cohesive force, is simulated, and its influence on particle transport and deposition behaviour is discussed. It is observed that even for dilute particle flow, the local particle concentration in the oral-to-trachea region can be high, and particle aggregation occurs due to the attractive van der Waals force. It is concluded that the deposition of cohesive pharmaceutical powders is, on the one hand, dominated by particle inertial impaction, as shown by previous studies, and on the other hand significantly affected by particle aggregation induced by the van der Waals force. To maximise respiratory drug delivery efficiency, efforts should be made to avoid pharmaceutical powder aggregation in the human oral-to-trachea airway.
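For context, a minimal sketch of the sphere-sphere van der Waals attraction commonly used in CFD-DEM codes follows. This is the standard Hamaker close-approach expression; the paper's exact force model and any minimum-separation cut-off (which DEM codes typically impose to keep the force finite) are assumptions here:

```python
def vdw_force(A, r1, r2, s):
    """Hamaker sphere-sphere van der Waals attraction at close approach:
    F = A * R_eff / (6 * s**2), with effective radius
    R_eff = r1*r2/(r1 + r2).  A is the Hamaker constant (J), r1 and r2
    the sphere radii (m), s the surface-to-surface separation (m).
    Returns the attractive force magnitude in newtons."""
    r_eff = r1 * r2 / (r1 + r2)
    return A * r_eff / (6.0 * s**2)
```

For two 2.5-micron spheres with A ~ 1e-19 J at 1 nm separation this gives a force on the order of 1e-8 N, orders of magnitude above the particle weight, which is why micron-sized drug particles aggregate so readily.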
NASA Astrophysics Data System (ADS)
Tripathi, Anurag; Khakhar, D. V.
2010-04-01
We study smooth, slightly inelastic particles flowing under gravity on a bumpy inclined plane using event-driven and discrete-element simulations. Shallow layers (ten particle diameters) are used to enable simulation using the event-driven method within reasonable computational times. Steady flows are obtained in a narrow range of angles (13°-14.5°); lower angles result in stopping of the flow and higher angles in continuous acceleration. The flow is relatively dense with solid volume fraction ν ≈ 0.5, and significant layering of particles is observed. We derive expressions for the stress, heat flux, and dissipation for the hard and soft particle models from first principles. The computed mean velocity, temperature, stress, dissipation, and heat flux profiles of hard particles are compared to soft particle results for different values of the stiffness constant (k). The value of the stiffness constant for which results for hard and soft particles are identical is found to be k ≥ 2×10^6 mg/d, where m is the mass of a particle, g is the acceleration due to gravity, and d is the particle diameter. We compare the simulation results to constitutive relations obtained from the kinetic theory of Jenkins and Richman [J. T. Jenkins and M. W. Richman, Arch. Ration. Mech. Anal. 87, 355 (1985)] for pressure, dissipation, viscosity, and thermal conductivity. We find that all the quantities are very well predicted by kinetic theory for volume fractions ν < 0.5. At higher densities, obtained for thicker layers (H = 15d and H = 20d), the kinetic theory does not give accurate predictions. Deviations of the kinetic theory predictions from simulation results are relatively small for dissipation and heat flux; the most significant deviations are observed for shear viscosity and pressure. The results indicate the range of applicability of soft particle simulations and kinetic theory for dense flows.
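Two small helpers illustrate the hard-particle criterion reported above and why stiffness matters: a stiffer spring shortens the contact duration, and in the hard-particle limit contacts become effectively instantaneous. The function names and the undamped linear-spring contact-time formula are illustrative, not taken from the paper:

```python
import numpy as np

def stiffness_threshold(m, g, d, factor=2.0e6):
    """Minimum spring stiffness for hard-particle-like behavior,
    using the dimensionless criterion reported in the study:
    k >= factor * m * g / d."""
    return factor * m * g / d

def contact_time(m_eff, k):
    """Half-period of an undamped linear-spring binary contact,
    t_c = pi * sqrt(m_eff / k), with m_eff the reduced mass.
    Quadrupling k halves the contact duration."""
    return np.pi * np.sqrt(m_eff / k)
```

In a soft-particle DEM run the time step must resolve t_c (typically t_c / 50 or smaller), so the hard-particle criterion trades a stiffer, more expensive integration for fidelity to the event-driven limit.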
In Praise of Numerical Computation
NASA Astrophysics Data System (ADS)
Yap, Chee K.
Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques for such problems are well known in numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. Through various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithm design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name "numerical computational geometry" for such activities.
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.
2017-01-01
The second - revised and enlarged - edition of this popular monograph is co-authored by Michael Kahnert and is published as Volume 145 of the Springer Series in Optical Sciences. As in the first edition, the main emphasis is on the mathematics of electromagnetic scattering and on numerically exact computer solutions of the frequency-domain macroscopic Maxwell equations for particles with complex shapes. The book is largely centered on Green-function solution of relevant boundary value problems and the T-matrix methodology, although other techniques (the method of lines, integral equation methods, and Lippmann-Schwinger equations) are also covered. The first four chapters serve as a thorough overview of key theoretical aspects of electromagnetic scattering intelligible to readers with undergraduate training in mathematics. A separate chapter provides an instructive analysis of the Rayleigh hypothesis which is still viewed by many as a highly controversial aspect of electromagnetic scattering by nonspherical objects. Another dedicated chapter introduces basic quantities serving as optical observables in practical applications. A welcome extension of the first edition is the new chapter on group theoretical aspects of electromagnetic scattering by particles with discrete symmetries. An essential part of the book is the penultimate chapter describing in detail popular public-domain computer programs mieschka and Tsym which can be applied to a wide range of particle shapes. The final chapter provides a general overview of available literature on electromagnetic scattering by particles and gives useful reading advice.
Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite
NASA Technical Reports Server (NTRS)
Wooden, Diane
2012-01-01
Comets are the Solar System's deep freezers of gases, ices, and particulates that were present in the outer protoplanetary disk. Where comet nuclei accreted was so cold that CO ice (approximately 50 K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or that were annealed from amorphous (glassy) silicates (greater than 850-1000 K). By their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold and ice-rich regimes near Neptune. The questions that comets can potentially address are: How fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The infamous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because models can only fit the observed features with submicron to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can be easily computed with mixed medium theory, wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations.
Moreover, models for a size distribution of discrete crystalline forsterite grains commonly employ the CDE computational method for ellipsoidal platelets (c:a:b = 8.14×8.14×1 in shape with geometrical factors of x:y:z = 1:1:10, Fabian et al. 2001; Harker et al. 2007). Alternatively, models for forsterite employ statistical methods like the Distribution of Hollow Spheres (Min et al. 2008; Oliveira et al. 2011) or Gaussian Random Spheres (GRS) or RGF (Gielen et al. 2008). Pancakes, hollow spheres, or GRS shapes similar to the wheat-sheaf crystal habit (e.g., Volten et al. 2001; Veihelmann et al. 2006), however, do not have the sharp edges, flat faces, and vertices seen in images of cometary crystals in interplanetary dust particles (IDPs) or in Stardust samples. Cometary forsterite crystals often have an equant or tabular crystal habit (J. Bradley). To simulate cometary crystals, we have computed absorption efficiencies of forsterite using the Discrete Dipole Approximation (DDA) DDSCAT code on NAS supercomputers. We compute thermal models that employ a size distribution of discrete irregularly shaped forsterite crystals (nonspherical shapes with faces and vertices) to explore how crystal shape affects the shape and wavelength positions of the forsterite spectral features and to explore whether cometary crystal shapes support either condensation or annealing scenarios (Lindsay et al. 2012a, b). We find that the forsterite crystal shapes that best fit comet Hale-Bopp are tetrahedra, bricks, or brick platelets, essentially equant or tabular (Lindsay et al. 2012a, b), commensurate with high temperature condensation experiments (Kobatake et al. 2008). We also have computed porous aggregates with crystal monomers and find that the crystal resonances are amplified, i.e., the crystalline fraction is lower in the aggregate than is derived by fitting a linear mix of spectral features from discrete subcomponents, and the crystal resonances 'appear' to be from larger crystals (Wooden et al. 2012).
These results may indicate that the crystalline mass fraction in comets with comae dominated by aggregates may be lower than that deduced by popular methods that only employ ensembles of discrete crystals.
ATHENA 3D: A finite element code for ultrasonic wave propagation
NASA Astrophysics Data System (ADS)
Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.
2014-04-01
The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming. However, advances in computing processor speed and memory allow them to be more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone, expressed in terms of stress and particle velocities. A particularity of the code is that the discretization of the calculation domain uses a Cartesian regular 3D mesh, while a defect of complex geometry can be described on a separate (2D) mesh using the fictitious domains method. This combines the rapidity of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, the calculation domain is discretized with a quasi-explicit time evolution scheme, so that only local linear systems of small size have to be solved. Finally, the computation time is further reduced because ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA3D and CIVA is proposed for several inspection configurations. The performance in terms of calculation time is also presented for both local-computer and cluster computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbold, E. B.; Walton, O.; Homel, M. A.
2015-10-26
This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size-segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screed geometries.
The giant impact produced a precipitated Moon
NASA Astrophysics Data System (ADS)
Cameron, A. G. W.
1993-03-01
The author's current simulations of Giant Impacts on the protoearth show the development of large hot rock vapor atmospheres. The Balbus-Hawley mechanism will pump mass and angular momentum outwards in the equatorial plane; upon cooling and expansion the rock vapor will condense refractory material beyond the Roche distance, where it is available for lunar formation. During the last seven years, the author together with several colleagues has carried out a series of numerical investigations of the Giant Impact theory for the origin of the Moon. These involved three-dimensional simulations of the impact and its aftermath using Smooth Particle Hydrodynamics (SPH), in which the matter in the system is divided into discrete particles whose motions and internal energies are determined as a result of the imposed initial conditions. Densities and pressures are determined from the combined overlaps of the particles, which have a bell-shaped density distribution characterized by a smoothing length. In the original series of runs all particle masses and smoothing lengths had the same values; the matter in the colliding bodies consisted of initial iron cores and rock (dunite) mantles. Each of 41 runs used 3,008 particles, took several weeks of continuous computation, and gave fairly good representations of the ultimate state of the post-collision body or bodies but at best crude and qualitative information about individual particles in orbit. During the last two years an improved SPH program was used in which the masses and smoothing lengths of the particles are variable, and the intent of the current series of computations is to investigate the behavior of the matter exterior to the main parts of the body or bodies subsequent to the collisions. 
These runs are taking times comparable to a year of continuous computation in each case; they use 10,000 particles with 5,000 particles in the target and 5,000 in the impactor, and the particles thus have variable masses and smoothing lengths (the latter are dynamically adjusted so that a particle typically overlaps a few tens of its neighbors). Since the matter in the impactor provides the majority of the mass left in orbit after the collision, and since the masses of the particles that originated in the impactor are smaller than those in the target, the mass resolution in the exterior parts of the problem is greatly improved and the exterior particles properly simulate atmospheres in hydrostatic equilibrium.
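The SPH density estimate described above (overlapping bell-shaped particle distributions characterized by a smoothing length) can be sketched in a minimal 1D form. The cubic-spline kernel below is the standard Monaghan choice; real giant-impact SPH is 3D with per-particle variable smoothing lengths and a different kernel normalization, so this is illustrative only:

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 1D cubic-spline SPH kernel with compact support 2h."""
    q = r / h
    sigma = 2.0 / (3.0 * h)          # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    """Summation density: rho_i = sum_j m_j W(|x_i - x_j|, h_j).
    O(N^2) brute force; production codes use neighbor lists/trees."""
    n = len(x)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            rho[i] += m[j] * cubic_spline_W(abs(x[i] - x[j]), h[j])
    return rho
```

On a uniform lattice with spacing equal to h, each particle overlaps only its immediate neighbors and the summation reproduces the bulk density exactly; giving impactor particles smaller masses (as in the runs described above) refines the density resolution exactly where the orbiting debris ends up.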
Modeling of particle agglomeration in nanofluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, K. Hari; Neti, S.; Oztekin, A.
2015-03-07
Agglomeration strongly influences the stability and shelf life of a nanofluid. The present computational and experimental study investigates the rate of agglomeration quantitatively. Agglomeration in nanofluids is attributed to the net effect of various inter-particle interaction forces. For the nanofluid considered here, the net inter-particle force depends on the particle size, volume fraction, pH, and electrolyte concentration. A solution of the discretized and coupled population balance equations yields particle sizes as a function of time. The nanofluid prepared here consists of alumina nanoparticles with an average particle size of 150 nm dispersed in de-ionized water. As the pH of the colloid was moved towards the isoelectric point of alumina nanofluids, the rate of increase of the average particle size increased with time due to the lower net positive charge on the particles. The rate at which the average particle size increases is predicted and measured for different electrolyte concentrations and volume fractions. The higher rate of agglomeration is attributed to the decrease in the electrostatic double-layer repulsion forces. The rate of agglomeration decreases as the nanoparticle clusters grow, approaching zero when all the clusters are nearly uniform in size. Predicted rates of agglomeration agree well with the measured values, validating the mathematical model and the numerical approach employed.
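A minimal sketch of one explicit time step of discretized, coupled population balance (Smoluchowski coagulation) equations of the kind mentioned above; the uniform bin structure, constant-kernel example, and function name are illustrative assumptions, not the authors' model:

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation.
    n[k] is the number density of clusters of k+1 primary particles;
    K[i, j] is the agglomeration kernel between sizes i+1 and j+1.
    Mass forming clusters larger than the last bin is discarded
    (finite truncation)."""
    nmax = len(n)
    dndt = np.zeros(nmax)
    for k in range(nmax):
        # gain: sizes (i+1) + (j+1) == k+1; factor 1/2 avoids double counting
        gain = 0.0
        for i in range(k):
            j = k - 1 - i
            gain += K[i, j] * n[i] * n[j]
        gain *= 0.5
        # loss: a size-(k+1) cluster agglomerates with any other cluster
        loss = n[k] * sum(K[k, j] * n[j] for j in range(nmax))
        dndt[k] = gain - loss
    return n + dt * dndt
```

For a constant kernel and a monodisperse start, the total number density obeys dN/dt = -K N²/2, which gives a quick consistency check; physical kernels (Brownian plus DLVO-type interaction corrections, as in the study) simply replace the entries of K.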
Bolintineanu, Dan S.; Rao, Rekha R.; Lechman, Jeremy B.; ...
2017-11-05
Here, we generate a wide range of models of proppant-packed fractures using discrete element simulations, and measure fracture conductivity using finite element flow simulations. This allows for a controlled computational study of proppant structure and its relationship to fracture conductivity and stress in the proppant pack. For homogeneous multi-layered packings, we observe the expected increase in fracture conductivity with increasing fracture aperture, while the stress on the proppant pack remains nearly constant. This is consistent with the expected behavior in conventional proppant-packed fractures, but the present work offers a novel quantitative analysis with an explicit geometric representation of the proppant particles. In single-layered packings (i.e. proppant monolayers), there is a drastic increase in fracture conductivity as the proppant volume fraction decreases and open flow channels form. However, this also corresponds to a sharp increase in the mechanical stress on the proppant pack, as measured by the maximum normal stress relative to the side crushing strength of typical proppant particles. We also generate a variety of computational geometries that resemble highly heterogeneous proppant packings hypothesized to form during channel fracturing. In some cases, these heterogeneous packings show drastic improvements in conductivity with only moderate increase in the stress on the proppant particles, suggesting that in certain applications these structures are indeed optimal. We also compare our computer-generated structures to micro computed tomography imaging of a manually fractured laboratory-scale shale specimen, and find reasonable agreement in the geometric characteristics.
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. 
Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spellings, Matthew; Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109; Marson, Ryan L.
Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks–Chandler–Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
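The conservative WCA potential named above is a standard, purely repulsive truncation of Lennard-Jones; a minimal scalar form (for point particles, not the faceted geometry of the paper) is:

```python
def wca(r, epsilon=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential: the Lennard-Jones potential
    truncated at its minimum r = 2^(1/6)*sigma and shifted up by
    epsilon so the energy is zero there -- purely repulsive."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6) + epsilon
```

Because the potential and its derivative both vanish at the cutoff, the force is continuous, which is what makes the interaction suitable for energy-conserving molecular dynamics.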
Efficient Conservative Reformulation Schemes for Lithium Intercalation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urisanga, PC; Rife, D; De, S
Porous electrode theory coupled with transport and reaction mechanisms is a widely used technique to model Li-ion batteries, employing an appropriate discretization or approximation for solid phase diffusion within electrode particles. One of the major difficulties in simulating Li-ion battery models is the need to account for solid phase diffusion in a second radial dimension r, which increases the computation time/cost to a great extent. Various methods that reduce the computational cost have been introduced to treat this phenomenon, but most of them do not guarantee mass conservation. The aim of this paper is to introduce an inherently mass conserving yet computationally efficient method for solid phase diffusion based on Lobatto IIIA quadrature. This paper also presents the coupling of the new solid phase reformulation scheme with a macro-homogeneous porous electrode theory based pseudo two-dimensional (P2D) model for the Li-ion battery. (C) The Author(s) 2015. Published by ECS. All rights reserved.
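The mass-conservation property at the heart of this abstract can be illustrated without the Lobatto IIIA machinery: any radial scheme written in flux (finite-volume) form conserves mass by construction, because whatever leaves one shell enters its neighbour. The sketch below is a plain explicit finite-volume discretization of diffusion in a sealed sphere, not the paper's reformulation.

```python
import numpy as np

def sphere_diffusion_step(c, D, R, dt):
    """One explicit finite-volume step of radial diffusion in a sphere
    with zero-flux (sealed) boundaries. Flux form: each interior face
    flux is subtracted from the donor shell and added to the receiver
    shell, so total mass is conserved to round-off."""
    N = len(c)
    dr = R / N
    edges = np.arange(N + 1) * dr
    vol = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    area = 4.0 * np.pi * edges[1:-1] ** 2       # interior face areas
    flux = -D * (c[1:] - c[:-1]) / dr           # outward Fick flux
    dc = np.zeros(N)
    dc[:-1] -= flux * area / vol[:-1]           # donor shell loses
    dc[1:] += flux * area / vol[1:]             # receiver shell gains
    return c + dt * dc

# relax a step profile; mass stays fixed while the gradient decays
c = np.where(np.arange(20) < 10, 1.0, 0.0).astype(float)
edges = np.arange(21) * (1.0 / 20)
vol = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
m0 = float(np.sum(c * vol))
for _ in range(200):
    c = sphere_diffusion_step(c, D=1.0, R=1.0, dt=2e-4)
m1 = float(np.sum(c * vol))
```

Schemes that instead update shell concentrations with non-conservative finite-difference stencils can slowly leak lithium, which is exactly the deficiency the paper's reformulation avoids.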
Perspective: Ring-polymer instanton theory
NASA Astrophysics Data System (ADS)
Richardson, Jeremy O.
2018-05-01
Since the earliest explorations of quantum mechanics, it has been a topic of great interest that quantum tunneling allows particles to penetrate classically insurmountable barriers. Instanton theory provides a simple description of these processes in terms of dominant tunneling pathways. Using a ring-polymer discretization, an efficient computational method is obtained for applying this theory to compute reaction rates and tunneling splittings in molecular systems. Unlike other quantum-dynamics approaches, the method scales well with the number of degrees of freedom, and for many polyatomic systems, the method may provide the most accurate predictions which can be practically computed. Instanton theory thus has the capability to produce useful data for many fields of low-temperature chemistry including spectroscopy, atmospheric and astrochemistry, as well as surface science. There is however still room for improvement in the efficiency of the numerical algorithms, and new theories are under development for describing tunneling in nonadiabatic transitions.
NASA Astrophysics Data System (ADS)
Chang, Shanshan; Zhu, Zhengping; Ni, Binbin; Cao, Xing; Luo, Weihua
2016-10-01
Several extremely low-frequency (ELF)/very low-frequency (VLF) wave generation experiments have been performed successfully at the High-Frequency Active Auroral Research Program (HAARP) heating facility, and the artificial ELF/VLF signals can leak into the outer radiation belt and contribute to resonant interactions with energetic electrons. Based on the artificial wave properties revealed by many in situ observations, we implement test particle simulations to evaluate the effects of energetic electron resonant scattering driven by the HAARP-induced ELF/VLF waves. The results indicate that for both single-frequency/monotonic waves and multi-frequency/broadband waves, the behavior of each electron is stochastic while the averaged diffusion effect exhibits temporal linearity in the wave-particle interaction process. The computed local diffusion coefficients show that the local pitch-angle scattering due to HAARP-induced single-frequency ELF/VLF whistlers with an amplitude of ∼10 pT can be intense near the loss cone, with a rate of ∼10^-2 rad^2 s^-1, suggesting the feasibility of HAARP-induced ELF/VLF waves for removal of outer radiation belt energetic electrons. In contrast, the energy diffusion of energetic electrons is relatively weak, which confirms that pitch-angle scattering by artificial ELF/VLF waves can dominantly lead to the precipitation of energetic electrons. Moreover, diffusion rates for the discrete, broadband waves, with the same amplitude at each discrete frequency as the monotonic waves, can be much larger, which suggests that it is feasible to trigger a reasonable broadband wave instead of the monotonic wave to achieve better performance in the controlled precipitation of energetic electrons.
Moreover, our test particle scattering simulations show good agreement with the predictions of quasi-linear theory, confirming that both methods can be applied to evaluate the effects of resonant interactions between radiation belt electrons and artificially generated discrete ELF/VLF waves.
Kinetic solvers with adaptive mesh in phase space
NASA Astrophysics Data System (ADS)
Arslanbekov, Robert R.; Kolobov, Vladimir I.; Frolova, Anna A.
2013-12-01
An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a “tree of trees” (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.
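The core idea of solution-adaptive Cartesian meshing can be sketched with a tiny quadtree that refines only where a local indicator flags the solution. This is a 2-D configuration-space toy, not the coupled r/v "tree of trees" of AMPS; the indicator function is a hypothetical stand-in for a real refinement criterion.

```python
class Cell:
    """Minimal quadtree cell for solution-adaptive refinement:
    a cell splits into four children while the indicator flags it
    and it is still larger than the minimum allowed size."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self, indicator, min_size):
        if self.size > min_size and indicator(self.x, self.y, self.size):
            h = self.size / 2.0
            self.children = [Cell(self.x + dx, self.y + dy, h)
                             for dx in (0.0, h) for dy in (0.0, h)]
            for ch in self.children:
                ch.refine(indicator, min_size)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for ch in self.children for leaf in ch.leaves()]

# refine the unit square near a sharp feature at x = 0.5
root = Cell(0.0, 0.0, 1.0)
root.refine(lambda x, y, s: abs(x + s / 2 - 0.5) < s, min_size=1.0 / 16)
leaves = root.leaves()
```

The payoff named in the abstract is visible even here: fine cells concentrate near the feature while the rest of the domain stays coarse, so the total leaf count is far below that of a uniform mesh at the finest resolution.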
A Study of the Use of a Handheld Computer Algebra System in Discrete Mathematics
ERIC Educational Resources Information Center
Powers, Robert A.; Allison, Dean E.; Grassl, Richard M.
2005-01-01
This study investigated the impact of the TI-92 handheld Computer Algebra System (CAS) on student achievement in a discrete mathematics course. Specifically, the researchers examined the differences between a CAS section and a control section of discrete mathematics on students' in-class examinations. Additionally, they analysed student approaches…
Effects of absorption on multiple scattering by random particulate media: exact results.
Mishchenko, Michael I; Liu, Li; Hovenier, Joop W
2007-10-01
We employ the numerically exact superposition T-matrix method to perform extensive computations of electromagnetic scattering by a volume of discrete random medium densely filled with increasingly absorbing as well as non-absorbing particles. Our numerical data demonstrate that increasing absorption diminishes and nearly extinguishes certain optical effects such as depolarization and coherent backscattering and increases the angular width of coherent backscattering patterns. This result corroborates the multiple-scattering origin of such effects and further demonstrates the heuristic value of the concept of multiple scattering even in application to densely packed particulate media.
Algorithm refinement for stochastic partial differential equations: II. Correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2005-08-10
We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
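The particle side of such hybrids rests on the correspondence between a random walk and the continuum diffusion equation: an unbiased walk of step size s produces a positional variance that grows linearly in the step count, matching a diffusion coefficient D = s^2/2 per step. A minimal check of that correspondence (an illustration, not the train model or its flux-matching coupling):

```python
import random

def random_walk_variance(n_particles=2000, n_steps=100, step=1.0, seed=1):
    """Advance n_particles independent unbiased +/-step walkers and
    return the sample variance of their positions. For the continuum
    limit this should be close to n_steps * step**2."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [x + (step if rng.random() < 0.5 else -step) for x in xs]
    mean = sum(xs) / n_particles
    return sum((x - mean) ** 2 for x in xs) / n_particles

var = random_walk_variance()   # expected near 100 for 100 unit steps
```

Flux matching at the particle/continuum interface then amounts to bookkeeping: the number of walkers crossing the interface is made consistent with the continuum flux, which is how the hybrid achieves exact conservation.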
Ocean Wave Simulation Based on Wind Field
2016-01-01
Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy resource for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods to handle simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them consider constructing the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force to the construction of the ocean surface height field through applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy was proposed to control these discrete wave particles and simulate an endless ocean surface. The results showed that the new method is capable of obtaining a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718
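The "overlap of wave particles" idea reduces to summing compactly supported bumps that each move with their own speed. The 1-D sketch below uses cosine-bell particles with hypothetical parameters (the paper's particles are 2-D and wind-driven):

```python
import math

def height_field(x, t, particles):
    """Ocean surface height at position x and time t as the
    superposition of wave particles. Each particle is a tuple
    (x0, amplitude, speed, width): a cosine bell of compact
    support 2*width, centered at x0 + speed*t."""
    h = 0.0
    for x0, amp, speed, width in particles:
        d = x - (x0 + speed * t)      # distance to particle center
        if abs(d) < width:
            h += 0.5 * amp * (1.0 + math.cos(math.pi * d / width))
    return h

# two particles moving toward each other; their bumps superpose
particles = [(0.0, 1.0, 2.0, 1.5), (3.0, 0.5, -1.0, 1.0)]
```

Compact support is the key design choice: each surface sample touches only the few particles whose bells cover it, which is what makes the method feasible at the real-time rates the abstract claims.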
Ocean Wave Simulation Based on Wind Field.
Li, Zhongyi; Wang, Hao
2016-01-01
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.
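The belief tracking such an architecture performs can be sketched with a plain bootstrap particle filter over a hybrid state (discrete mode plus continuous position). The dynamics, noise levels, and fault-transition probability below are hypothetical, and this is not Rao-Blackwellization, which would track the continuous part analytically per mode.

```python
import math
import random

def pf_step(particles, z, trans_p, rng):
    """One bootstrap particle-filter step for a hybrid system.
    Each particle is (mode, x): predict with mode-dependent drift
    and a possible random mode jump (a fault), weight by a Gaussian
    measurement likelihood around z, then resample."""
    moved = []
    for mode, x in particles:
        if rng.random() < trans_p:           # spontaneous fault jump
            mode = 1 - mode
        drift = 1.0 if mode == 0 else -1.0   # mode-dependent dynamics
        moved.append((mode, x + drift + rng.gauss(0.0, 0.1)))
    weights = [math.exp(-0.5 * ((x - z) / 0.5) ** 2) for _, x in moved]
    return rng.choices(moved, weights=weights, k=len(moved))

# all particles start in the nominal mode; the observation z = 1.0
# is consistent with nominal drift, so faulty hypotheses die off
rng = random.Random(0)
particles = [(0, 0.0)] * 500
particles = pf_step(particles, z=1.0, trans_p=0.05, rng=rng)
frac_nominal = sum(1 for m, _ in particles if m == 0) / len(particles)
```

A discrepant observation (say z near -1.0) would instead concentrate weight on the fault mode, which is the detection signal a diagnosis engine like L3 would then refine into candidate faults.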
Radiative transfer modeling applied to sea water constituent determination. [Gulf of Mexico
NASA Technical Reports Server (NTRS)
Faller, K. H.
1979-01-01
Optical radiation from the sea is influenced by pigments dissolved in the water and contained in discrete organisms suspended in the sea, and by pigmented and unpigmented inorganic and organic particles. The problem of extracting the information concerning these pigments and particulates from the optical properties of the sea is addressed and the properties which determine characteristics of the radiation that a remote sensor will detect and measure are considered. The results of the application of the volume scattering function model to the data collected in the Gulf of Mexico and its environs indicate that the size distribution of the concentrations of particles found in the sea can be predicted from measurements of the volume scattering function. Furthermore, with the volume scattering function model and knowledge of the absorption spectra of dissolved pigments, the radiative transfer model can compute a distribution of particle sizes and indices of refraction and concentration of dissolved pigments that give an upwelling light spectrum that closely matches measurements of that spectrum at sea.
A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime
NASA Astrophysics Data System (ADS)
Sciarretta, Antonio
2018-01-01
This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime under the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated to the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly-prepared ensembles of particles. The scenarios considered to assess the model comprise the free particle, a constant external force, the harmonic oscillator, a particle in a box, the Delta potential, a particle on a ring, and a particle on a sphere, and include quantization of energy levels and angular momentum, as well as momentum entanglement.
Calantoni, Joseph; Holland, K Todd; Drake, Thomas G
2004-09-15
Sediment transport in oscillatory boundary layers is a process that drives coastal geomorphological change. Most formulae for bed-load transport in nearshore regions subsume the smallest-scale physics of the phenomena by parametrizing interactions amongst particles. In contrast, we directly simulate granular physics in the wave-bottom boundary layer using a discrete-element model comprising a three-dimensional particle phase coupled to a one-dimensional fluid phase via Newton's third law through forces of buoyancy, drag and added mass. The particulate sediment phase is modelled using discrete particles formed to approximate natural grains by overlapping two spheres. Both the size of each sphere and the degree of overlap can be varied for these composite particles to generate a range of non-spherical grains. Simulations of particles having a range of shapes showed that the critical angle--the angle at which a grain pile will fail when tilted slowly from rest--increases from approximately 26 degrees for spherical particles to nearly 39 degrees for highly non-spherical composite particles having a dumbbell shape. Simulations of oscillatory sheet flow were conducted using composite particles with an angle of repose of approximately 33 degrees and a Corey shape factor greater than about 0.8, similar to the properties of beach sand. The results from the sheet-flow simulations with composite particles agreed more closely with laboratory measurements than similar simulations conducted using spherical particles. The findings suggest that particle shape may be an important factor for determining bed-load flux, particularly for larger bed slopes.
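The geometry of these two-sphere composite grains is easy to quantify: the union of two equal spheres of radius r with centers a distance d apart is two full spheres minus their lens-shaped intersection, for which a closed form exists. A small sketch (geometry only, not the DEM contact mechanics):

```python
import math

def two_sphere_volume(r, d):
    """Volume of a composite grain formed by two equal overlapping
    spheres of radius r with centers d apart. For d >= 2r the spheres
    are disjoint; otherwise subtract the lens-shaped intersection,
    V_lens = pi * (4r + d) * (2r - d)^2 / 12."""
    sphere = (4.0 / 3.0) * math.pi * r ** 3
    if d >= 2.0 * r:
        return 2.0 * sphere
    lens = (math.pi / 12.0) * (4.0 * r + d) * (2.0 * r - d) ** 2
    return 2.0 * sphere - lens
```

At d = 0 the two spheres coincide and the union collapses to a single sphere; at d = 2r it is two tangent spheres. Sweeping d between these limits spans the range from round to dumbbell-shaped grains the abstract describes.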
Shear of ordinary and elongated granular mixtures
NASA Astrophysics Data System (ADS)
Hensley, Alexander; Kern, Matthew; Marschall, Theodore; Teitel, Stephen; Franklin, Scott
2015-03-01
We present an experimental and computational study of a mixture of discs and moderate aspect-ratio ellipses under two-dimensional annular planar Couette shear. Experimental particles are cut from acrylic sheet, are essentially incompressible, and constrained in the thin gap between two concentric cylinders. The annular radius of curvature is much larger than the particles, and so the experiment is quasi-2D and allows for arbitrarily large pure-shear strains. Synchronized video cameras and software identify all particles and track them as they move from the field of view of one camera to another. We are particularly interested in the global and local properties as the mixture ratio of discs to ellipses varies. Global quantities include average shear rate and distribution of particle species as functions of height, while locally we investigate the orientation of the ellipses and non-affine events that can be characterized as shear transformation zones or possess a quadrupole signature observed previously in systems of purely circular particles. Discrete Element Method simulations on mixtures of circles and spherocylinders extend the study to the dynamics of the force network and energy dissipated as the system evolves. Supported by NSF CBET #1243571 and PRF #51438-UR10.
Modeling Optical Properties of Mineral Aerosol Particles by Using Nonsymmetric Hexahedra
NASA Technical Reports Server (NTRS)
Bi, Lei; Yang, Ping; Kattawar, George W.; Kahn, Ralph
2010-01-01
We explore the use of nonsymmetric geometries to simulate the single-scattering properties of airborne dust particles with complicated morphologies. Specifically, the shapes of irregular dust particles are assumed to be nonsymmetric hexahedra defined by using the Monte Carlo method. A combination of the discrete dipole approximation method and an improved geometric optics method is employed to compute the single-scattering properties of dust particles for size parameters ranging from 0.5 to 3000. The primary optical effect of eliminating the geometric symmetry of regular hexahedra is to smooth the scattering features in the phase function and to decrease the backscatter. The optical properties of the nonsymmetric hexahedra are used to mimic the laboratory measurements. It is demonstrated that a relatively close agreement can be achieved by using only one shape of nonsymmetric hexahedra. The agreement between the theoretical results and their measurement counterparts can be further improved by using a mixture of nonsymmetric hexahedra. It is also shown that the hexahedron model is much more appropriate than the "equivalent sphere" model for simulating the optical properties of dust particles, particularly in the case of the elements of the phase matrix that are associated with the polarization state of scattered light.
SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Paganetti, H
2015-06-15
Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on the structured grid that is maximally parallelizable, with the discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4.
Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
A homogenization-based quasi-discrete method for the fracture of heterogeneous materials
NASA Astrophysics Data System (ADS)
Berke, P. Z.; Peerlings, R. H. J.; Massart, T. J.; Geers, M. G. D.
2014-05-01
The understanding and the prediction of the failure behaviour of materials with pronounced microstructural effects is of crucial importance. This paper presents a novel computational methodology for the handling of fracture on the basis of the microscale behaviour. The basic principles presented here allow the incorporation of an adaptive discretization scheme of the structure as a function of the evolution of strain localization in the underlying microstructure. The proposed quasi-discrete methodology bridges two scales: the scale of the material microstructure, modelled with a continuum type description; and the structural scale, where a discrete description of the material is adopted. The damaging material at the structural scale is divided into unit volumes, called cells, which are represented as a discrete network of points. The scale transition is inspired by computational homogenization techniques; however it does not rely on classical averaging theorems. The structural discrete equilibrium problem is formulated in terms of the underlying fine scale computations. Particular boundary conditions are developed on the scale of the material microstructure to address damage localization problems. The performance of this quasi-discrete method with the enhanced boundary conditions is assessed using different computational test cases. The predictions of the quasi-discrete scheme agree well with reference solutions obtained through direct numerical simulations, both in terms of crack patterns and load versus displacement responses.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. 
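The HRU error metric's core idea, quantifying the information lost when heterogeneous units are lumped together, can be sketched as the area-weighted error committed when each unit's attribute is replaced by its group's area-weighted mean. This is a hypothetical simplification for illustration, not the paper's exact metric definition.

```python
from collections import defaultdict

def aggregation_error(values, areas, groups):
    """Area-weighted mean absolute error of replacing each unit's
    attribute value by the area-weighted mean of its group (HRU).
    Zero means the grouping loses no information; larger values
    mean more within-group heterogeneity is being averaged away."""
    g_area = defaultdict(float)
    g_sum = defaultdict(float)
    for v, a, g in zip(values, areas, groups):
        g_area[g] += a
        g_sum[g] += v * a
    g_mean = {g: g_sum[g] / g_area[g] for g in g_area}
    total = sum(areas)
    return sum(a * abs(v - g_mean[g])
               for v, a, g in zip(values, areas, groups)) / total
```

Comparing candidate groupings with such a metric, before any model run, is what lets the modeler trade information loss against the computational cost of more units, mirroring the paper's a priori decision-making approach.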
Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
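The flavor of such an a priori aggregation-error metric can be sketched in a few lines: measure the information lost when fine-scale attribute values are replaced by the area-weighted mean of their aggregated unit. This is an illustrative toy, not the paper's exact subbasin or HRU metric; the function and data names are hypothetical.

```python
# Illustrative sketch (not the paper's exact metric): information loss from
# aggregation, measured as the area-weighted mean absolute deviation of the
# fine-scale values from their aggregated-unit mean.

def aggregation_error(areas, values):
    """Area-weighted mean absolute deviation lost by merging these units."""
    total = sum(areas)
    mean = sum(a * v for a, v in zip(areas, values)) / total
    return sum(a * abs(v - mean) for a, v in zip(areas, values)) / total

# Four fine units (area, soil-property-like value), hypothetical numbers:
fine = [(10.0, 70.0), (10.0, 72.0), (5.0, 90.0), (5.0, 88.0)]

# Lumping everything into one unit loses more information...
coarse_all = aggregation_error([a for a, _ in fine], [v for _, v in fine])

# ...than grouping similar units together (two-group discretization):
g1 = aggregation_error([10.0, 10.0], [70.0, 72.0])
g2 = aggregation_error([5.0, 5.0], [90.0, 88.0])
split_loss = (20.0 * g1 + 10.0 * g2) / 30.0
```

Comparing `coarse_all` against `split_loss` for candidate schemes is the kind of pre-simulation screening the proposed metrics enable.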
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto
2018-04-01
Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but have quite often been used for isogeometric discretization as well. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows, target isogeometric discretization, and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation. The key components of the derivation are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.
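A minimal illustration of a direction-dependent element length, using the classical construction h = 2 / Σ_a |r · ∇N_a| on a linear triangle, which the paper's ST/isogeometric expression generalizes. This is a sketch of the idea, not the paper's formula.

```python
# Sketch: direction-dependent element length for a linear triangle via the
# classical h = 2 / sum_a |r . grad(N_a)| construction.

def shape_gradients(p1, p2, p3):
    # Gradients of the three linear shape functions on the triangle.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return [((y2 - y3) / det, (x3 - x2) / det),
            ((y3 - y1) / det, (x1 - x3) / det),
            ((y1 - y2) / det, (x2 - x1) / det)]

def directional_length(tri, r):
    norm = (r[0] ** 2 + r[1] ** 2) ** 0.5
    rx, ry = r[0] / norm, r[1] / norm
    grads = shape_gradients(*tri)
    return 2.0 / sum(abs(rx * gx + ry * gy) for gx, gy in grads)

# Right triangle with legs 2 (x direction) and 1 (y direction): the element
# length along x comes out larger than along y, as it should.
tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
hx = directional_length(tri, (1.0, 0.0))
hy = directional_length(tri, (0.0, 1.0))
```

For this triangle the construction recovers the leg lengths exactly (hx = 2, hy = 1), which is the sanity check one expects of any direction-dependent element length.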
NASA Astrophysics Data System (ADS)
Olmos, L.; Bouvard, D.; Martin, C. L.; Bellet, D.; Di Michiel, M.
2009-06-01
The sintering of both a powder with a wide particle size distribution (0-63 μm) and a powder with artificially created pores is investigated by coupling in situ X-ray microtomography observations with discrete element simulations. The microstructure evolution of the copper particles is observed by microtomography throughout a typical sintering cycle at 1050 °C at the European Synchrotron Radiation Facility (ESRF, Grenoble, France). A quantitative analysis of the 3D images provides original data on interparticle indentation, coordination and particle displacements throughout sintering. In parallel, the sintering of similar powder systems has been simulated with a discrete element code which incorporates appropriate sintering contact laws from the literature. The initial numerical packing is generated either directly from the 3D microtomography images or from a random set of particles with the same size distribution. The comparison between the information drawn from the simulations and that obtained by tomography leads to the conclusion that the first method is not satisfactory, because real particles are not perfectly spherical, unlike the numerical ones. By contrast, the packings built with the second method show sintering behaviors close to those of real materials, although particle rearrangement is underestimated by the DEM simulations.
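One of the quantities extracted from such 3D images is the coordination number of each particle. A minimal sketch of that measurement from particle centres and radii, with a tolerance that loosely mimics counting finite sintering necks as contacts (the function and tolerance are illustrative assumptions):

```python
# Sketch: interparticle coordination numbers from centres and radii.
# A small tolerance can be used to count near-touching pairs (e.g. necks).

def coordination_numbers(centres, radii, tol=0.0):
    n = len(centres)
    z = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(centres[i], centres[j]))
            if d2 ** 0.5 <= radii[i] + radii[j] + tol:
                z[i] += 1
                z[j] += 1
    return z

# Three touching unit spheres in a row: the middle one has two contacts.
centres = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
z = coordination_numbers(centres, [1.0, 1.0, 1.0])
```

The O(n²) pair loop is fine for a sketch; real tomography analyses of thousands of particles would use a spatial grid or k-d tree for neighbour search.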
GEMPIC: geometric electromagnetic particle-in-cell methods
NASA Astrophysics Data System (ADS)
Kraus, Michael; Kormann, Katharina; Morrison, Philip J.; Sonnendrücker, Eric
2017-08-01
We present a novel framework for finite element particle-in-cell methods based on the discretization of the underlying Hamiltonian structure of the Vlasov-Maxwell system. We derive a semi-discrete Poisson bracket, which retains the defining properties of a bracket, anti-symmetry and the Jacobi identity, as well as conservation of its Casimir invariants, implying that the semi-discrete system is still a Hamiltonian system. In order to obtain a fully discrete Poisson integrator, the semi-discrete bracket is used in conjunction with Hamiltonian splitting methods for integration in time. Techniques from finite element exterior calculus ensure conservation of the divergence of the magnetic field and Gauss' law as well as stability of the field solver. The resulting methods are gauge invariant, feature exact charge conservation and show excellent long-time energy and momentum behaviour. Due to the generality of our framework, these conservation properties are guaranteed independently of a particular choice of the finite element basis, as long as the corresponding finite element spaces satisfy certain compatibility conditions.
A general spectral method for the numerical simulation of one-dimensional interacting fermions
NASA Astrophysics Data System (ADS)
Clason, Christian; von Winckel, Gregory
2012-08-01
This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach.
New version program summary
Program title: assembleFermiMatrix
Catalogue identifier: AEKO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 332
No. of bytes in distributed program, including test data, etc.: 5418
Distribution format: tar.gz
Programming language: MATLAB/GNU Octave, Python
Computer: Any architecture supported by MATLAB, GNU Octave or Python
Operating system: Any supported by MATLAB, GNU Octave or Python
RAM: Depends on the data
Classification: 4.3, 2.2
External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+
Catalogue identifier of previous version: AEKO_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405
Does the new version supersede the previous version?: Yes
Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles.
Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave function. The assembly of these matrices is performed efficiently by exploiting the combinatorial structure of the sparsity patterns.
Reasons for new version: A Python implementation is now included.
Summary of revisions: Added a Python implementation; small documentation fixes in the Matlab implementation. No change in the features of the package.
Restrictions: Only one-dimensional computational domains with homogeneous Dirichlet or periodic boundary conditions are supported.
Running time: Seconds to minutes.
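The combinatorial backbone of such a discretization can be shown in a few lines: an antisymmetric N-fermion basis is indexed by strictly increasing tuples of one-particle indices (Slater-determinant labels), so the multi-particle dimension is C(n, N) rather than n**N. A sketch, with illustrative names:

```python
# Sketch: labelling an antisymmetric N-fermion basis by strictly increasing
# index tuples, which is what makes the sparsity pattern combinatorial.
from itertools import combinations
from math import comb

def fermion_basis(n_modes, n_particles):
    return list(combinations(range(n_modes), n_particles))

basis = fermion_basis(6, 3)   # 3 fermions in 6 one-particle modes
dim = len(basis)              # C(6, 3) = 20 antisymmetric states
full = 6 ** 3                 # 216 states without antisymmetry
```

The gap between `dim` and `full` widens rapidly with particle number, which is exactly the exponential growth the program summary identifies as the core difficulty.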
Computation of Discrete Slanted Hole Film Cooling Flow Using the Navier-Stokes Equations.
1982-07-01
Report R82-910002-4 (AFOSR-TR-82-1004), Scientific Research Associates Inc., Glastonbury, CT. [Scanned report cover and documentation page; remaining text not recoverable.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Trask, Nathaniel; Pan, K.
2016-03-11
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian method based on a meshless discretization of partial differential equations. In this review, we present SPH discretization of the Navier-Stokes and Advection-Diffusion-Reaction equations, implementation of various boundary conditions, and time integration of the SPH equations, and we discuss applications of the SPH method for modeling pore-scale multiphase flows and reactive transport in porous and fractured media.
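The SPH discretization the review starts from is the kernel density summation ρ_i = Σ_j m_j W(x_i − x_j, h). A minimal 1D sketch with the standard cubic spline kernel (the setup values are illustrative):

```python
# Sketch: 1D SPH density summation with the standard M4 cubic spline kernel.

def cubic_spline_1d(r, h):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)   # 1D normalization so the kernel integrates to 1
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(x, m, h):
    return [sum(mj * cubic_spline_1d(xi - xj, h) for xj, mj in zip(x, m))
            for xi in x]

# Uniformly spaced particles carrying mass dx each represent a unit-density
# line; interior summed densities should come out close to 1.
dx = 0.1
x = [i * dx for i in range(41)]
rho = density(x, [dx] * len(x), h=1.2 * dx)
```

The deficit visible at the two endpoints (fewer neighbours inside the kernel support) is the classic SPH boundary problem that the review's boundary-condition treatments address.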
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
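The benefit of variational integration is easiest to see on the simplest example: discretizing the action of the harmonic oscillator (L = v²/2 − q²/2) with a midpoint/leapfrog rule yields the Störmer-Verlet scheme, whose energy error stays bounded over very long integrations instead of drifting. A sketch (this is the textbook example, not the guiding-center algorithms of the abstract):

```python
# Sketch: Stormer-Verlet, the simplest variational (symplectic) integrator,
# applied to the harmonic oscillator. Energy error stays bounded long-term.

def verlet(q, v, dt, steps):
    for _ in range(steps):
        v += -0.5 * dt * q   # half kick (force = -q)
        q += dt * v          # drift
        v += -0.5 * dt * q   # half kick
    return q, v

q0, v0 = 1.0, 0.0
q, v = verlet(q0, v0, dt=0.05, steps=20000)   # roughly 160 periods
energy = 0.5 * (q * q + v * v)                # exact value is 0.5
```

A non-symplectic scheme like explicit Euler would show secular energy growth over the same interval; this bounded-error behavior is the "correct qualitative long-time dynamics" the abstract refers to.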
Axion like particles and the inverse seesaw mechanism
Carvajal, C. D. R.; Dias, Alex G.; Nishi, C. C.; ...
2015-05-13
Light pseudoscalars known as axion like particles (ALPs) may be behind physical phenomena like the Universe's transparency to ultra-energetic photons, the soft X-ray excess from the Coma cluster, and the 3.5 keV line. We explore the connection of these particles with the inverse seesaw (ISS) mechanism for neutrino mass generation. We propose a very restrictive setting where the scalar field hosting the ALP is also responsible for generating the ISS mass scales through its vacuum expectation value on gravity induced nonrenormalizable operators. A discrete gauge symmetry protects the theory from the appearance of overly strong gravitational effects, and discrete anomaly cancellation imposes strong constraints on the order of the group. In conclusion, the anomalous U(1) symmetry leading to the ALP is an extended lepton number, and the protective discrete symmetry can always be chosen as a subgroup of a combination of the lepton number and the baryon number.
Estimation for general birth-death processes.
Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A
2014-04-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.
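The data-generating side of this inference problem is easy to sketch: a Gillespie simulation of a general BDP with user-supplied per-state birth and death rates, observed only at discrete times. This is the setting that makes the EM machinery necessary; the function names and rates below are illustrative.

```python
# Sketch: Gillespie simulation of a general birth-death process with
# arbitrary state-dependent rates, recorded only at discrete observation
# times (partial observation is what necessitates EM-style inference).
import random

def simulate_bdp(birth, death, x0, obs_times, rng):
    t, x, out = 0.0, x0, []
    for t_obs in obs_times:
        while True:
            total = birth(x) + death(x)
            if total == 0.0:
                break                       # absorbed (e.g. extinction)
            dt = rng.expovariate(total)
            if t + dt > t_obs:
                break                       # exponential is memoryless
            t += dt
            x += 1 if rng.random() < birth(x) / total else -1
        out.append(x)
        t = t_obs
    return out

rng = random.Random(1)
# Linear (per-particle constant rate) BDP: birth 0.05x, death 0.1x.
obs = simulate_bdp(lambda x: 0.05 * x, lambda x: 0.1 * x,
                   x0=50, obs_times=[1.0, 2.0, 5.0, 10.0], rng=rng)
```

Replacing the two lambdas with, say, logistic or saturating rate functions gives exactly the "general BDP with arbitrary rates" case that the Laplace convolution technique handles without simulation.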
All you need is shape: Predicting shear banding in sand with LS-DEM
NASA Astrophysics Data System (ADS)
Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.
2018-02-01
This paper presents discrete element method (DEM) simulations with experimental comparisons at multiple length scales, underscoring the crucial role of particle shape. The simulations build on technological advances in the DEM furnished by level sets (LS-DEM), which enable the mathematical representation of the surface of arbitrarily shaped particles such as grains of sand. We show that this ability to model shape enables unprecedented capture of the mechanics of granular materials across scales ranging from macroscopic behavior to local behavior to particle behavior. Specifically, the model is able to predict the onset and evolution of shear banding in sands, replicating the most advanced high-fidelity experiments in triaxial compression equipped with sequential X-ray tomography imaging. We present comparisons of the model and experiment at an unprecedented level of quantitative agreement, building a one-to-one model in which every particle in the more than 53,000-particle array has its own avatar or numerical twin. Furthermore, the boundary conditions of the experiment are faithfully captured by modeling the membrane effect as well as the platen displacement and tilting. The results demonstrate a computational tool that can give insight into the physics and mechanics of granular materials undergoing shear deformation and failure, with computational times comparable to those of the experiment. One quantitative measure extracted from the LS-DEM simulations that is currently not available experimentally is the evolution of three-dimensional force chains inside and outside of the shear band. We show that the rotations of the force chains are correlated with the rotations of the principal stress directions.
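The level-set idea behind LS-DEM can be illustrated in miniature: a particle surface is stored as a signed distance function φ (negative inside), so testing a surface node of one particle against another particle reduces to evaluating φ, and −φ directly gives the penetration depth. A sketch with a sphere standing in for an arbitrarily shaped grain:

```python
# Sketch of the level-set representation LS-DEM builds on: a signed distance
# function phi (negative inside the particle) makes contact detection and
# penetration depth a single function evaluation.

def sphere_sdf(center, radius):
    cx, cy, cz = center
    def phi(x, y, z):
        return ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - radius
    return phi

phi = sphere_sdf((0.0, 0.0, 0.0), 1.0)
penetration = -phi(0.5, 0.0, 0.0)   # node 0.5 inside the surface
outside = phi(2.0, 0.0, 0.0)        # node 1.0 outside the surface
```

For real grains the signed distance field is tabulated on a grid from tomography images and interpolated, which is what lets LS-DEM handle arbitrary sand-grain shapes at modest per-contact cost.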
SIGMA--A Graphical Approach to Teaching Simulation.
ERIC Educational Resources Information Center
Schruben, Lee W.
1992-01-01
SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…
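The event-scheduling logic such tools animate is compact enough to sketch directly: a future-event list kept in a heap, a simulation clock that jumps from event to event, and event routines that schedule further events. The names below are illustrative, not SIGMA's API.

```python
# Sketch: a minimal discrete-event engine with a heap-based future-event list.
import heapq

def run(events, horizon):
    """events: list of (time, name, action); action(now) returns new events."""
    log = []
    heap = list(events)
    heapq.heapify(heap)
    while heap:
        clock, name, action = heapq.heappop(heap)
        if clock > horizon:
            break
        log.append((clock, name))
        for ev in action(clock):
            heapq.heappush(heap, ev)
    return log

# A self-rescheduling 'arrival' event every 2 time units.
def arrival(now):
    return [(now + 2.0, "arrival", arrival)]

log = run([(0.0, "arrival", arrival)], horizon=7.0)
```

An event-graph model in the SIGMA sense is exactly this: vertices are event routines like `arrival`, and the edges are the scheduling relationships they encode.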
rpSPH: a novel smoothed particle hydrodynamics algorithm
NASA Astrophysics Data System (ADS)
Abel, Tom
2011-05-01
We suggest a novel discretization of the momentum equation for smoothed particle hydrodynamics (SPH) and show that it significantly improves the accuracy of the obtained solutions. Our new formulation which we refer to as relative pressure SPH, rpSPH, evaluates the pressure force with respect to the local pressure. It respects Newton's first law of motion and applies forces to particles only when there is a net force acting upon them. This is in contrast to standard SPH which explicitly uses Newton's third law of motion continuously applying equal but opposite forces between particles. rpSPH does not show the unphysical particle noise, the clumping or banding instability, unphysical surface tension and unphysical scattering of different mass particles found for standard SPH. At the same time, it uses fewer computational operations and only changes a single line in existing SPH codes. We demonstrate its performance on isobaric uniform density distributions, uniform density shearing flows, the Kelvin-Helmholtz and Rayleigh-Taylor instabilities, the Sod shock tube, the Sedov-Taylor blast wave and a cosmological integration of the Santa Barbara galaxy cluster formation test. rpSPH is an improvement in these cases. The improvements come at the cost of giving up exact momentum conservation of the scheme. Consequently, one can also obtain unphysical solutions particularly at low resolutions.
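The "single line" difference can be shown schematically: standard SPH applies a symmetric pairwise pressure force, while rpSPH drives particle i with the pressure *difference* relative to its own pressure, so a constant-pressure field exerts no force. The density weighting below is an illustrative assumption, not necessarily the paper's exact pairwise term.

```python
# Schematic of the standard-SPH vs rpSPH momentum terms for one pair (i, j).
# grad_w stands for the scalar kernel-gradient factor; the density weighting
# in accel_rpsph is illustrative (see the paper for the exact form).

def accel_standard_sph(m_j, p_i, p_j, rho_i, rho_j, grad_w):
    # Symmetric form: equal and opposite pair forces (Newton's third law).
    return -m_j * (p_i / rho_i ** 2 + p_j / rho_j ** 2) * grad_w

def accel_rpsph(m_j, p_i, p_j, rho_i, rho_j, grad_w):
    # Relative-pressure form: force only from the pressure difference.
    return -m_j * ((p_j - p_i) / (rho_i * rho_j)) * grad_w

# Isobaric configuration (p_i == p_j): rpSPH exerts no spurious force,
# while the standard symmetric form does not vanish pairwise.
zero = accel_rpsph(1.0, 1.0, 1.0, 1.0, 1.0, 0.7)
nonzero = accel_standard_sph(1.0, 1.0, 1.0, 1.0, 1.0, 0.7)
```

In a full summation the standard form's pairwise contributions cancel only imperfectly on disordered particle distributions, which is the source of the particle noise the abstract describes; the relative form removes it at the cost of exact momentum conservation.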
Discrete Ramanujan transform for distinguishing the protein coding regions from other regions.
Hua, Wei; Wang, Jiasong; Zhao, Jian
2014-01-01
Based on the study of the Ramanujan sum and Ramanujan coefficient, this paper introduces the concepts of the discrete Ramanujan transform and spectrum. Using the Voss numerical representation, one maps a symbolic DNA strand to a numerical DNA sequence and deduces the discrete Ramanujan spectrum of the numerical DNA sequence. It is well known that the discrete Fourier power spectrum of a protein coding sequence has an important feature of 3-base periodicity, which is widely used for DNA sequence analysis by the technique of the discrete Fourier transform; the analysis is performed by testing the signal-to-noise ratio at frequency N/3 as a criterion, where N is the length of the sequence. The results presented in this paper show that the property of 3-base periodicity is identified as a prominent spike of the discrete Ramanujan spectrum at period 3 only for the protein coding regions. A signal-to-noise ratio for the discrete Ramanujan spectrum is defined for numerical measurement. Therefore, the discrete Ramanujan spectrum and the signal-to-noise ratio of a DNA sequence can be used for distinguishing the protein coding regions from the noncoding regions. All the exon and intron sequences in whole chromosomes 1, 2, 3 and 4 of Caenorhabditis elegans have been tested, and the histograms and tables from the computational results illustrate the reliability of our method. In addition, we have shown theoretically that the algorithm for calculating the discrete Ramanujan spectrum has lower computational complexity and higher computational accuracy. The computational experiments show that classifying DNA sequences by the discrete Ramanujan spectrum is a fast and effective method. Copyright © 2014 Elsevier Ltd. All rights reserved.
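The building block of the transform is the Ramanujan sum c_q(n), which can be computed directly from its definition. The period-3 behavior of c_3 is what makes coding regions produce a spike at period 3:

```python
# Sketch: the Ramanujan sum c_q(n) = sum over k coprime to q of
# exp(2*pi*i*k*n/q); the imaginary parts cancel, so the cosine sum is exact
# and integer-valued (rounded here to absorb floating-point error).
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    return round(sum(cos(2.0 * pi * k * n / q)
                     for k in range(1, q + 1) if gcd(k, q) == 1))

# c_3(n) = 2 when 3 divides n, else -1: a period-3 comb. Correlating a DNA
# indicator sequence against it is what flags 3-base periodicity.
vals = [ramanujan_sum(3, n) for n in range(1, 7)]
```

Because c_q(n) is integer-valued, the resulting spectrum can be accumulated in exact integer arithmetic, which is one source of the accuracy advantage claimed over the complex-valued discrete Fourier transform.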
Phase computations and phase models for discrete molecular oscillators.
Suvak, Onder; Demir, Alper
2012-06-11
Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, upon the information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete, molecular oscillators for stochastic simulation algorithm generated sample paths. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in discrete-state does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations.
Phase computations and phase models for discrete molecular oscillators
2012-01-01
Background Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. Results In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, upon the information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete, molecular oscillators for stochastic simulation algorithm generated sample paths. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. Conclusions The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in discrete-state does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations. PMID:22687330
Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in Flash
NASA Technical Reports Server (NTRS)
Dubey, A.; Daley, C.; ZuHone, J.; Ricker, P. M.; Weide, K.; Graziani, C.
2012-01-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in FLASH
NASA Astrophysics Data System (ADS)
Dubey, A.; Daley, C.; ZuHone, J.; Ricker, P. M.; Weide, K.; Graziani, C.
2012-08-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
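The core Lagrangian-on-Eulerian operation shared by all these particle types is advancing a particle with a velocity interpolated from the mesh. A minimal 2D sketch with bilinear interpolation and forward-Euler advection (this is an illustrative toy, not FLASH's mapping routines):

```python
# Sketch: tracer-particle advection using velocities bilinearly interpolated
# from a uniform Eulerian grid (field[i][j] at x = i*dx, y = j*dy).

def bilinear(field, x, y, dx, dy):
    i, j = int(x / dx), int(y / dy)
    fx, fy = x / dx - i, y / dy - j
    return ((1 - fx) * (1 - fy) * field[i][j]
            + fx * (1 - fy) * field[i + 1][j]
            + (1 - fx) * fy * field[i][j + 1]
            + fx * fy * field[i + 1][j + 1])

def advect(px, py, u, v, dx, dy, dt, steps):
    for _ in range(steps):
        px += dt * bilinear(u, px, py, dx, dy)
        py += dt * bilinear(v, px, py, dx, dy)
    return px, py

# Uniform rightward flow (u = 1, v = 0) on a 5x5 grid with unit spacing:
# after time 1.0 the particle has moved one unit in x.
n = 5
u = [[1.0] * n for _ in range(n)]
v = [[0.0] * n for _ in range(n)]
px, py = advect(0.5, 0.5, u, v, 1.0, 1.0, dt=0.1, steps=10)
```

In an AMR setting the extra work is locating which block and refinement level owns the particle before interpolating, which is precisely what a framework like FLASH's has to abstract over.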
Simulating Soft Shadows with Graphics Hardware,
1997-01-15
This radiance texture is analogous to the mesh of radiosity values computed in a radiosity algorithm. Unlike a radiosity algorithm, however, our...discretely. Several researchers have explored continuous visibility methods for soft shadow computation and radiosity mesh generation. With this approach...times of several seconds [9]. Most radiosity methods discretize each surface into a mesh of elements and then use discrete methods such as ray
Center for Efficient Exascale Discretizations Software Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir
The CEED software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP).
Doubled lattice Chern-Simons-Yang-Mills theories with discrete gauge group
NASA Astrophysics Data System (ADS)
Caspar, S.; Mesterházy, D.; Olesen, T. Z.; Vlasii, N. D.; Wiese, U.-J.
2016-11-01
We construct doubled lattice Chern-Simons-Yang-Mills theories with discrete gauge group G in the Hamiltonian formulation. Here, these theories are considered on a square spatial lattice and the fundamental degrees of freedom are defined on pairs of links from the direct lattice and its dual, respectively. This provides a natural lattice construction for topologically-massive gauge theories, which are invariant under parity and time-reversal symmetry. After defining the building blocks of the doubled theories, paying special attention to the realization of gauge transformations on quantum states, we examine the dynamics in the group space of a single cross, which is spanned by a single link and its dual. The dynamics is governed by the single-cross electric Hamiltonian and admits a simple quantum mechanical analogy to the problem of a charged particle moving on a discrete space affected by an abstract electromagnetic potential. Such a particle might accumulate a phase shift equivalent to an Aharonov-Bohm phase, which is manifested in the doubled theory in terms of a nontrivial ground-state degeneracy on a single cross. We discuss several examples of these doubled theories with different gauge groups including the cyclic group Z(k) ⊂ U(1) , the symmetric group S3 ⊂ O(2) , the binary dihedral (or quaternion) group D¯2 ⊂ SU(2) , and the finite group Δ(27) ⊂ SU(3) . In each case the spectrum of the single-cross electric Hamiltonian is determined exactly. We examine the nature of the low-lying excited states in the full Hilbert space, and emphasize the role of the center symmetry for the confinement of charges. Whether the investigated doubled models admit a non-Abelian topological state which allows for fault-tolerant quantum computation will be addressed in a future publication.
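The quantum mechanical analogy for the simplest case can be made concrete: for gauge group Z(k), the single-cross electric Hamiltonian is analogous to a free particle hopping on the k sites of the cyclic group, i.e. a discrete Laplacian, whose spectrum is known in closed form. A sketch of that analogy (not the paper's full doubled-theory Hamiltonian):

```python
# Sketch: spectrum of the discrete Laplacian on the cyclic group Z(k),
# the quantum-mechanical analogue invoked for the single-cross electric
# Hamiltonian with gauge group Z(k): eigenvalues 2 - 2*cos(2*pi*m/k).
from math import cos, pi

def zk_spectrum(k):
    return sorted(2.0 - 2.0 * cos(2.0 * pi * m / k) for m in range(k))

spec = zk_spectrum(4)   # Z(4): eigenvalues 0, 2, 2, 4
```

An Aharonov-Bohm-like phase would enter this analogy as a complex hopping amplitude, shifting the eigenvalues to 2 − 2cos(2πm/k + θ) and splitting the degeneracies, which is the mechanism behind the ground-state degeneracy discussed in the abstract.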
NASA Astrophysics Data System (ADS)
Asinari, Pietro
2010-10-01
The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make possible to derive very accurate reference solutions for this test case. Moreover this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed in order to be minimal, not only with regards to the reduced number of lines (less than 1000), but also with regards to the coding style (as simple as possible). Program summaryProgram title: HOMISBOLTZ Catalogue identifier: AEGN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 23 340 No. 
of bytes in distributed program, including test data, etc.: 7 635 236 Distribution format: tar.gz Programming language: Tested with Matlab version ⩽6.5. However, in principle, any recent version of Matlab or Octave should work Computer: All supporting Matlab or Octave Operating system: All supporting Matlab or Octave RAM: 300 MBytes Classification: 23 Nature of problem: The problem consists in integrating the homogeneous Boltzmann equation for a generic collisional kernel in case of isotropic symmetry, by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy exactly the conservation laws at macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards the equilibrium). Both these corrections make possible to derive very accurate reference solutions for this test case. Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, for making possible the development of a minimal program (by a simple scripting language) and allowing the user to check the advantages of the proposed improvements beyond Aristov's (2001) method [1]. 
The initial conditions are assumed to be parameterized according to a fixed analytical expression, but this can be easily modified. Running time: From minutes to hours (depending on the adopted discretization of the kinetic energy space). For example, on a 64 bit workstation with an Intel Core i7-820Q Quad Core CPU at 1.73 GHz and 8 MBytes of RAM, the provided test run (with the corresponding binary data file storing the pre-computed relaxation rates) requires 154 seconds. References: V.V. Aristov, Direct Methods for Solving the Boltzmann Equation and Study of Nonequilibrium Flows, Kluwer Academic Publishers, 2001.
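The moment-correction idea behind the DVM-like step can be sketched in a few lines. The snippet below is an illustrative stand-in, not the actual HOMISBOLTZ correction: the hypothetical helper `enforce_moments` projects a discretized distribution back onto the manifold where the discrete particle number and energy match prescribed targets, via an additive correction a + b*E whose coefficients come from a 2x2 linear system.

```python
import numpy as np

def enforce_moments(f, E, n_target, e_target):
    """Project a discrete distribution f(E) onto the manifold where
    particle number sum(f) and energy sum(f*E) match target values,
    using an additive correction f + a + b*E. (Illustrative stand-in
    for the DVM-like correction; the actual HOMISBOLTZ scheme differs
    in detail.)"""
    # Moments of the basis functions 1 and E
    A = np.array([[len(E),      E.sum()],
                  [E.sum(), (E**2).sum()]])
    # Residuals of the two conservation constraints
    r = np.array([n_target - f.sum(), e_target - (E*f).sum()])
    a, b = np.linalg.solve(A, r)
    return f + a + b*E
```

After the projection, the discrete number and energy moments match the targets to rounding error, which is what allows the late relaxation dynamics to be captured without drift.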
Simulation of granular and gas-solid flows using discrete element method
NASA Astrophysics Data System (ADS)
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using combined computational fluid dynamics (CFD) and discrete element simulation (DES) methods. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase to be a continuum with averaged properties and treating the gas-solid flow as consisting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. 
Benchmark 2D fluidized bed simulations have been performed and the results have been shown to compare satisfactorily with those published in the literature. A comprehensive study of the effect of drag correlations on the simulation of fluidized beds has been performed. It has been found that nearly all the drag correlations studied make similar predictions of global quantities such as the time-dependent pressure drop, bubbling frequency and growth. In conclusion, discrete element simulation has been successfully coupled to a continuum gas-phase simulation. Though all the results presented in the thesis are two-dimensional, the present implementation is fully three-dimensional and can be used to study 3D fluidized beds to aid in better design and understanding. Other industrially important phenomena such as particle coating and coal gasification, and applications in emerging areas such as nano-particle/fluid mixtures, can also be studied through this type of simulation. (Abstract shortened by UMI.)
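On the solid-phase side, such a CFD-DES coupling reduces, per particle and per time step, to integrating Newton's equations with gravity plus a fluid force evaluated from the local gas velocity. A minimal sketch, assuming a simple linear drag law with an illustrative coefficient `beta` (real codes use empirical drag correlations and add collision forces):

```python
import numpy as np

def dem_step(x, v, u_gas, m, dt, g=9.81, beta=0.5):
    """One explicit time step for a single particle in the spirit of the
    gas-solid coupling described above: gravity plus linear drag toward
    the local gas velocity u_gas. beta is an illustrative linear drag
    coefficient, not a correlation from the thesis."""
    F = m*np.array([0.0, -g]) + beta*(u_gas - v)   # net force on particle
    v_new = v + dt*F/m                             # explicit velocity update
    x_new = x + dt*v_new                           # semi-implicit position update
    return x_new, v_new
```

Iterating this step with a quiescent gas drives the particle toward its terminal velocity, where drag balances gravity (v_y = -m g / beta for this toy drag law).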
Theory of relativistic Brownian motion: the (1+3) -dimensional case.
Dunkel, Jörn; Hänggi, Peter
2005-09-01
A theory for (1+3) -dimensional relativistic Brownian motion under the influence of external force fields is put forward. Starting out from a set of relativistically covariant, but multiplicative Langevin equations we describe the relativistic stochastic dynamics of a forced Brownian particle. The corresponding Fokker-Planck equations are studied in the laboratory frame coordinates. In particular, the stochastic integration prescription--i.e., the discretization rule dilemma--is elucidated (prepoint discretization rule versus midpoint discretization rule versus postpoint discretization rule). Remarkably, within our relativistic scheme we find that the postpoint rule (or the transport form) yields the only Fokker-Planck dynamics from which the relativistic Maxwell-Boltzmann statistics is recovered as the stationary solution. The relativistic velocity effects become distinctly more pronounced by going from one to three spatial dimensions. Moreover, we present numerical results for the asymptotic mean-square displacement of a free relativistic Brownian particle moving in 1+3 dimensions.
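The discretization-rule dilemma for multiplicative noise can be illustrated on a single Euler step of dx = b(x) dW. The sketch below is a generic one-dimensional illustration, not the relativistic scheme of the paper; it evaluates the noise amplitude at the pre-, mid-, or post-point using a simple Euler predictor for the evaluation point.

```python
def multiplicative_step(x0, b, dW, rule="pre"):
    """One step of dx = b(x) dW under the three discretization rules
    discussed above (prepoint = Ito, midpoint = Stratonovich,
    postpoint = transport form), using an Euler predictor to obtain the
    mid/post evaluation point. Illustrative sketch only."""
    x_pred = x0 + b(x0)*dW            # predictor (this IS the prepoint step)
    if rule == "pre":
        return x_pred
    if rule == "mid":
        return x0 + b(0.5*(x0 + x_pred))*dW
    if rule == "post":
        return x0 + b(x_pred)*dW
    raise ValueError(rule)
```

Even for a single step the three rules give different results whenever b varies with x, which is why only one of them can reproduce a prescribed stationary distribution such as the relativistic Maxwell-Boltzmann statistics.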
Thermodynamics of phase-separating nanoalloys: Single particles and particle assemblies
NASA Astrophysics Data System (ADS)
Fèvre, Mathieu; Le Bouar, Yann; Finel, Alphonse
2018-05-01
The aim of this paper is to investigate the consequences of finite-size effects on the thermodynamics of nanoparticle assemblies and isolated particles. We consider a binary phase-separating alloy with a negligible atomic size mismatch, and equilibrium states are computed using off-lattice Monte Carlo simulations in several thermodynamic ensembles. First, a semi-grand-canonical ensemble is used to describe infinite assemblies of particles with the same size. When decreasing the particle size, we obtain a significant decrease of the solid/liquid transition temperatures as well as a growing asymmetry of the solid-state miscibility gap related to surface segregation effects. Second, a canonical ensemble is used to analyze the thermodynamic equilibrium of finite monodisperse particle assemblies. Using a general thermodynamic formulation, we show that a particle assembly may split into two subassemblies of identical particles. Moreover, if the overall average canonical concentration belongs to a discrete spectrum, the subassembly concentrations are equal to the semi-grand-canonical equilibrium ones. We also show that the equilibrium of a particle assembly with a prescribed size distribution combines a size effect and the fact that a given particle size assembly can adopt two configurations. Finally, we have considered the thermodynamics of an isolated particle to analyze whether a phase separation can be defined within a particle. When studying rather large nanoparticles, we found that the region in which a two-phase domain can be identified inside a particle is well below the bulk phase diagram, but the concentration of the homogeneous core remains very close to the bulk solubility limit.
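A semi-grand-canonical Monte Carlo step works at a fixed chemical-potential difference rather than fixed composition: an identity flip A to B (or back) is accepted with a Metropolis probability involving both the energy change and the chemical-potential term. A minimal sketch; the sign convention, the `rng` hook, and the absence of any real energy model are illustrative assumptions, not the off-lattice scheme of the paper.

```python
import math, random

def sgc_flip(species, site, dE, dmu, kT, rng=random.random):
    """Metropolis acceptance for a semi-grand-canonical identity flip at
    one site. dE is the energy change of the flip, dmu the chemical
    potential difference mu_B - mu_A (illustrative sign convention).
    Returns the possibly updated species list."""
    dN_B = +1 if species[site] == "A" else -1     # change in B count
    if rng() < min(1.0, math.exp(-(dE - dmu*dN_B)/kT)):
        species = species.copy()
        species[site] = "B" if species[site] == "A" else "A"
    return species
```

Running many such flips equilibrates the concentration of each particle as a function of (dmu, kT), which is how the semi-grand-canonical phase boundaries of the nanoparticle assemblies are traced out.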
NASA Astrophysics Data System (ADS)
Soobiah, Y. I. J.; Espley, J. R.; Connerney, J. E. P.; Gruesbeck, J.; DiBraccio, G. A.; Schneider, N. M.; Jain, S.; Mitchell, D. L.; Mazelle, C. X.; Halekas, J. S.; Andersson, L.; Brain, D.; Lillis, R. J.; McFadden, J. P.; Deighan, J.; McClintock, B.; Jakosky, B. M.; Frahm, R.; Winningham, D.; Coates, A. J.; Holmstrom, M.
2017-12-01
NASA's Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft has observed a variety of distinct auroral types at Mars and related processes relevant to the escape of the Martian atmosphere. MAVEN's Imaging Ultraviolet Spectrograph (IUVS) instrument has measured 1) diffuse aurora over widespread regions of Mars' northern hemisphere, 2) discrete aurora spatially confined to localized patches around regions of strong crustal magnetic field and 3) proton aurora from limb brightening of Lyman-α emission. The processes involved in the occurrence of discrete aurora at Mars are not yet well understood. This study presents MAVEN IUVS and Particle and Fields Package (PFP) observations of contemporaneous particle and field signatures and discrete aurora at Mars. Discrete aurora observed in limb scans occur in association with patches of electrons in the optical shadow of Mars. The electron signatures display a range of field aligned (toward Mars) electron energy spectra, from electrons that are not accelerated (sometimes including photoelectron peaks) to accelerated electrons. These are observed in association with a range of magnetic field orientations, from horizontal to radial magnetic field directions. Observations obtained at low altitude over the nightside by MAVEN and the more distant Mars Express' (MEX) Analyzer of Space Plasma and Energetic Atoms (ASPERA-3) are compared to investigate transport of electrons from plasma sheet and `inverted-V' electron signatures from the magnetotail to low altitudes.
Renormalization of Supersymmetric QCD on the Lattice
NASA Astrophysics Data System (ADS)
Costa, Marios; Panagopoulos, Haralambos
2018-03-01
We perform a pilot study of the perturbative renormalization of a Supersymmetric gauge theory with matter fields on the lattice. As a specific example, we consider Supersymmetric N=1 QCD (SQCD). We study the self-energies of all particles which appear in this theory, as well as the renormalization of the coupling constant. To this end we compute, perturbatively to one-loop, the relevant two-point and three-point Green's functions using both dimensional and lattice regularizations. Our lattice formulation involves the Wilson discretization for the gluino and quark fields; for gluons we employ the Wilson gauge action; for scalar fields (squarks) we use naive discretization. The gauge group that we consider is SU(Nc), while the number of colors, Nc, the number of flavors, Nf, and the gauge parameter, α, are left unspecified. We obtain analytic expressions for the renormalization factors of the coupling constant (Zg) and of the quark (ZΨ), gluon (Zu), gluino (Zλ), squark (ZA±), and ghost (Zc) fields on the lattice. We also compute the critical values of the gluino, quark and squark masses. Finally, we address the mixing which occurs among squark degrees of freedom beyond tree level: we calculate the corresponding mixing matrix which is necessary in order to disentangle the components of the squark field via an additional finite renormalization.
Supersymmetric QCD on the lattice: An exploratory study
NASA Astrophysics Data System (ADS)
Costa, M.; Panagopoulos, H.
2017-08-01
We perform a pilot study of the perturbative renormalization of a supersymmetric gauge theory with matter fields on the lattice. As a specific example, we consider supersymmetric N=1 QCD (SQCD). We study the self-energies of all particles which appear in this theory, as well as the renormalization of the coupling constant. To this end we compute, perturbatively to one-loop, the relevant two-point and three-point Green's functions using both dimensional and lattice regularizations. Our lattice formulation involves the Wilson discretization for the gluino and quark fields; for gluons we employ the Wilson gauge action; for scalar fields (squarks) we use naïve discretization. The gauge group that we consider is SU(Nc), while the number of colors, Nc, the number of flavors, Nf, and the gauge parameter, α, are left unspecified. We obtain analytic expressions for the renormalization factors of the coupling constant (Zg) and of the quark (Zψ), gluon (Zu), gluino (Zλ), squark (ZA±), and ghost (Zc) fields on the lattice. We also compute the critical values of the gluino, quark and squark masses. Finally, we address the mixing which occurs among squark degrees of freedom beyond tree level: we calculate the corresponding mixing matrix which is necessary in order to disentangle the components of the squark field via an additional finite renormalization.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
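The effect of a non-overlap constraint can be illustrated with a simple position projection: repeatedly push apart any pair of particles violating |x_i - x_j| >= r_i + r_j. This is only a geometric illustration of the constraint itself; the paper enforces contact constraints within a boundary-integral time-stepper, not by this projection.

```python
import numpy as np

def project_contacts(x, r, iters=50):
    """Gauss-Seidel-style projection enforcing the non-overlap
    constraints |x_i - x_j| >= r_i + r_j between circular particles.
    Purely geometric illustration of an explicit contact constraint."""
    x = x.copy()
    n = len(r)
    for _ in range(iters):
        for i in range(n):
            for j in range(i + 1, n):
                d = x[j] - x[i]
                dist = np.linalg.norm(d)
                overlap = r[i] + r[j] - dist
                if overlap > 0:
                    shift = 0.5*overlap*d/dist   # split the correction
                    x[i] -= shift
                    x[j] += shift
    return x
```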
Daniels, F.
1957-10-15
Gas-cooled solid-moderator type reactors wherein the fissionable fuel and moderator materials are each in the form of solid pebbles, or discrete particles, and are substantially homogeneously mixed in the proper proportion and placed within the core of the reactor are described. The shape of these discrete particles must be such that voids are present between them when mixed together. Helium enters the bottom of the core and passes through the voids between the fuel and moderator particles to absorb the heat generated by the chain reaction. The hot helium gas is drawn off the top of the core and may be passed through a heat exchanger to produce steam.
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.
NASA Astrophysics Data System (ADS)
Bertin, N.; Upadhyay, M. V.; Pradalier, C.; Capolungo, L.
2015-09-01
In this paper, we propose a novel full-field approach based on the fast Fourier transform (FFT) technique to compute mechanical fields in periodic discrete dislocation dynamics (DDD) simulations for anisotropic materials: the DDD-FFT approach. By coupling the FFT-based approach to the discrete continuous model, the present approach benefits from the high computational efficiency of the FFT algorithm, while allowing for a discrete representation of dislocation lines. It is demonstrated that the computational time associated with the new DDD-FFT approach is significantly lower than that of current DDD approaches when a large number of dislocation segments is involved, for both isotropic and anisotropic elasticity. Furthermore, for fine Fourier grids, the treatment of anisotropic elasticity comes at a computational cost similar to that of an isotropic simulation. Thus, the proposed approach paves the way towards achieving scale transition from DDD to mesoscale plasticity, especially due to the method's ability to incorporate inhomogeneous elasticity.
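The computational core of any FFT-based full-field solver is a per-wavenumber algebraic solve. As a minimal analogue (a periodic 1-D Poisson problem rather than the anisotropic elasticity system actually solved by the DDD-FFT approach):

```python
import numpy as np

def periodic_poisson_fft(f, L=2*np.pi):
    """Solve -u'' = f on a periodic 1-D domain with an FFT: transform,
    divide by the symbol k^2 of -d^2/dx^2 mode by mode, transform back.
    Minimal analogue of an FFT-based full-field mechanical solve."""
    n = len(f)
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)   # wavenumbers
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nz = k != 0
    uh[nz] = fh[nz]/k[nz]**2               # spectral inverse (zero-mean solution)
    return np.fft.ifft(uh).real
```

The cost is dominated by the two FFTs, O(n log n), which is the efficiency the DDD-FFT approach inherits for its much larger elasticity problem.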
Li, Chuan; Peng, Juan; Liang, Ming
2014-01-01
Oil debris sensors are effective tools to monitor wear particles in lubricants. For in situ applications, surrounding noise and vibration interferences often distort the oil debris signature of the sensor. Hence extracting oil debris signatures from sensor signals is a challenging task for wear particle monitoring. In this paper we employ the maximal overlap discrete wavelet transform (MODWT) with optimal decomposition depth to enhance the wear particle monitoring capability. The sensor signal is decomposed by the MODWT into different depths for detecting the wear particle existence. To extract the authentic particle signature with minimal distortion, the root mean square deviation of kurtosis value of the segmented signal residue is adopted as a criterion to obtain the optimal decomposition depth for the MODWT. The proposed approach is evaluated using both simulated and experimental wear particles. The results show that the present method can improve the oil debris monitoring capability without structural upgrade requirements. PMID:24686730
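The ingredients of the approach, an undecimated wavelet detail at a chosen depth scored by a kurtosis-type statistic, can be sketched as follows. This uses a plain à trous Haar filter as a stand-in for the MODWT, and a simple kurtosis rather than the paper's root-mean-square-deviation-of-kurtosis criterion; both substitutions are illustrative assumptions.

```python
import numpy as np

def haar_modwt_detail(x, level):
    """Detail coefficients of an undecimated (a trous) Haar transform
    at the given level, with periodic wrap-around via np.roll. A
    minimal stand-in for the MODWT used in the paper."""
    a = x.astype(float)
    for j in range(1, level + 1):
        step = 2**(j - 1)
        smooth = 0.5*(a + np.roll(a, -step))
        detail = 0.5*(a - np.roll(a, -step))
        a = smooth
    return detail

def kurtosis(d):
    """Impulsiveness score: transient spikes (wear particles) inflate it."""
    d = d - d.mean()
    return (d**4).mean() / (d**2).mean()**2
```

An isolated spike in an otherwise flat signal yields a very large kurtosis at shallow decomposition levels, which is the property the depth-selection criterion exploits.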
General advancing front packing algorithm for the discrete element method
NASA Astrophysics Data System (ADS)
Morfa, Carlos A. Recarey; Pérez Morales, Irvin Pablo; de Farias, Márcio Muniz; de Navarra, Eugenio Oñate Ibañez; Valera, Roberto Roselló; Casañas, Harold Díaz-Guzmán
2018-01-01
A generic formulation of a new method for packing particles is presented. It is based on a constructive advancing front method, and uses Monte Carlo techniques for the generation of particle dimensions. The method can be used to obtain virtual dense packings of particles with several geometrical shapes. It employs continuous, discrete, and empirical statistical distributions in order to generate the dimensions of particles. The packing algorithm is very flexible and allows alternatives for: 1—the direction of the advancing front (inwards or outwards), 2—the selection of the local advancing front, 3—the method for placing a mobile particle in contact with others, and 4—the overlap checks. A slight modification of the algorithm also allows highly porous media to be obtained. The use of the algorithm to generate real particle packings from grain size distribution curves, in order to carry out engineering applications, is illustrated. Finally, basic applications of the algorithm, which prove its effectiveness in the generation of a large number of particles, are carried out.
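The core geometric step of advancing-front packing is placing a new particle in contact with two particles on the current front. For disks this is a circle-circle intersection; a minimal sketch (returning one of the two mirror solutions, with no overlap check against the rest of the packing):

```python
import numpy as np

def tangent_center(c1, r1, c2, r2, r):
    """Center of a new disk of radius r in contact with disks (c1, r1)
    and (c2, r2): intersect circles of radii r1+r and r2+r about the
    two centers. Returns None if no contact position exists."""
    d = np.linalg.norm(c2 - c1)
    R1, R2 = r1 + r, r2 + r              # required center distances
    a = (d**2 + R1**2 - R2**2)/(2*d)     # along-axis offset from c1
    h2 = R1**2 - a**2
    if h2 < 0:
        return None                      # existing disks too far apart
    e = (c2 - c1)/d
    n = np.array([-e[1], e[0]])          # unit normal to the c1-c2 axis
    return c1 + a*e + np.sqrt(h2)*n
```

In the full algorithm, the radius r is drawn from the prescribed statistical distribution and candidate positions are rejected if they overlap other front particles, which is how the front advances inwards or outwards.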
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov
2016-06-15
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.
NASA Astrophysics Data System (ADS)
Fishkova, T. Ya.
2018-01-01
An optimal set of geometric and electrical parameters of a high-aperture electrostatic charged-particle spectrograph with a range of simultaneously recorded energies of E/E_min = 1–50 has been found by computer simulation, which is especially important for the energy analysis of charged particles during fast processes in various materials. The spectrograph consists of two coaxial electrodes with end faces closed by flat electrodes. The external electrode, of conical-cylindrical form, is cut into parts with potentials that increase linearly, except for the last cylindrical part, which is electrically connected to the rear end electrode. The internal cylindrical electrode and the front end electrode are grounded. In the entire energy range, the system is sharply focused on the internal cylindrical electrode, which provides an energy resolution of no worse than 3 × 10^-3.
Pastor, José M; Garcimartín, Angel; Gago, Paula A; Peralta, Juan P; Martín-Gómez, César; Ferrer, Luis M; Maza, Diego; Parisi, Daniel R; Pugnaloni, Luis A; Zuriguel, Iker
2015-12-01
The "faster-is-slower" (FIS) effect was first predicted by computer simulations of the egress of pedestrians through a narrow exit [D. Helbing, I. J. Farkas, and T. Vicsek, Nature (London) 407, 487 (2000)]. FIS refers to the finding that, under certain conditions, an excess of the individuals' vigor in the attempt to exit causes a decrease in the flow rate. In general, this effect is identified by the appearance of a minimum when plotting the total evacuation time of a crowd as a function of the pedestrian desired velocity. Here, we experimentally show that the FIS effect indeed occurs in three different systems of discrete particles flowing through a constriction: (a) humans evacuating a room, (b) a herd of sheep entering a barn, and (c) grains flowing out of a 2D hopper over a vibrated incline. This finding suggests that FIS is a universal phenomenon for active matter passing through a narrowing.
Geometry of discrete quantum computing
NASA Astrophysics Data System (ADS)
Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung
2013-05-01
Conventional quantum computing entails a geometry based on the description of an n-qubit state using 2^n infinite-precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we have the idea of physical law as some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields F_p and finite complexified fields F_{p^2} (based on primes p congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (which is the complex projective space CP^(2^n - 1)) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of p + 1 points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to DCP^(2^n - 1), the discrete analogue of the complex projective space, which has p^(2^n - 1) (p - 1) ∏_{k=1}^{n-1} (p^(2^k) + 1) irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field F_{p^2} have p^n (p - 1)^n unentangled states (the product of the tally for a single qubit) with purity 1, and they have p^(n+1) (p - 1) (p + 1)^(n-1) maximally entangled states with purity zero.
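The closed-form tallies quoted above are easy to evaluate. The helper below computes the total number of irreducible states in DCP^(2^n - 1) together with the unentangled and maximally entangled counts, directly from the formulas in the abstract; for n = 1 the total and unentangled tallies coincide, as they must, since a single qubit cannot be entangled.

```python
def dcp_counts(p, n):
    """Tallies from the stated formulas for discrete n-qubit states over
    F_{p^2}: (total irreducible states in DCP^(2^n - 1), unentangled
    states, maximally entangled states)."""
    total = p**(2**n - 1) * (p - 1)
    for k in range(1, n):                 # product over k = 1 .. n-1
        total *= p**(2**k) + 1
    unentangled = p**n * (p - 1)**n
    max_entangled = p**(n + 1) * (p - 1) * (p + 1)**(n - 1)
    return total, unentangled, max_entangled
```

For example, with p = 3 and n = 2 the formulas give 540 irreducible two-qubit states, of which 36 are unentangled and 216 are maximally entangled.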
NASA Astrophysics Data System (ADS)
Liu, D.; Fu, X.; Liu, X.
2016-12-01
In nature, granular materials exist widely in water bodies. Understanding the fundamentals of solid-liquid two-phase flow, such as turbulent sediment-laden flow, is of importance for a wide range of applications. A coupling method combining computational fluid dynamics (CFD) and the discrete element method (DEM) is now widely used for modeling such flows. In this method, when particles are significantly larger than the CFD cells, the fluid field around each particle should be fully resolved. On the other hand, the "unresolved" model is designed for the situation where particles are significantly smaller than the mesh cells. Using the "unresolved" model, large numbers of particles can be simulated simultaneously. However, there is a gap between these two situations when the DEM particle size and the CFD cell size are of the same order of magnitude. In this work, the most commonly used void fraction models are tested with numerical sedimentation experiments. The range of applicability for each model is presented. Based on this, a new void fraction model, i.e., a modified version of the "tri-linear" model, is proposed. Particular attention is paid to the smoothness of the void fraction in order to avoid numerical instability. The results show good agreement with the experimental data and analytical solutions for both single-particle and group-particle motion, indicating great potential of the new void fraction model.
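The role of a smooth void-fraction mapping can be illustrated in one dimension: each particle's volume is spread over neighboring cells with hat-function (linear) weights, so the void fraction varies continuously as a particle crosses cell boundaries. This is a 1-D illustration of the general idea, not the modified tri-linear model proposed in the abstract.

```python
import numpy as np

def void_fraction(cell_centers, dx, xp, vol_p):
    """Void fraction on a uniform 1-D grid with hat-function weights
    spreading each particle's volume over neighboring cells: the smooth
    counterpart of cell-wise counting, which jumps discontinuously and
    can destabilize the coupled solve."""
    eps = np.ones_like(cell_centers)           # start fully void
    cell_vol = dx
    for x, v in zip(xp, vol_p):
        w = np.maximum(0.0, 1.0 - np.abs(cell_centers - x)/dx)
        w = w / w.sum()                        # conservative partition of unity
        eps -= w*v/cell_vol
    return eps
```

Because the weights form a partition of unity, the total solid volume seen by the grid equals the sum of particle volumes exactly, regardless of where particles sit relative to cell faces.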
The discrete adjoint method for parameter identification in multibody system dynamics.
Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin
2018-01-01
The adjoint method is an elegant approach for the computation of the gradient of a cost function to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are further used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. To this end, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
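A toy instance makes the discrete adjoint construction concrete: for the forward-Euler discretization of dx/dt = -p x with cost J = x_N^2, the adjoint recursion is read off from the time-stepping scheme itself, so the resulting gradient is exact for the discretized problem. The multibody setting of the paper is far richer, but the structure is the same.

```python
def discrete_adjoint_grad(p, x0=1.0, dt=0.01, N=100):
    """dJ/dp for J = x_N^2 under the discrete dynamics
    x_{k+1} = x_k + dt*(-p*x_k), via the discrete adjoint method:
    forward sweep stores the trajectory, backward sweep propagates the
    adjoint with the transpose of the step's Jacobian and accumulates
    the explicit p-dependence of each step."""
    xs = [x0]
    for _ in range(N):                   # forward sweep
        xs.append(xs[-1]*(1.0 - p*dt))
    lam = 2.0*xs[-1]                     # terminal adjoint dJ/dx_N
    grad = 0.0
    for k in reversed(range(N)):         # backward sweep
        grad += lam*(-dt*xs[k])          # d(step k)/dp contribution
        lam *= (1.0 - p*dt)              # adjoint of x_{k+1} w.r.t. x_k
    return grad
```

For this linear toy the result can be checked in closed form: dJ/dp = -2 N dt x0^2 (1 - p dt)^{2N-1}, and the adjoint sweep reproduces it to rounding error.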
Dynamical clustering of red blood cells in capillary vessels.
Boryczko, Krzysztof; Dzwinel, Witold; Yuen, David A
2003-02-01
We have modeled the dynamics of a 3-D system consisting of red blood cells (RBCs), plasma and capillary walls using a discrete-particle approach. The blood cells and capillary walls are composed of a mesh of particles interacting with harmonic forces between nearest neighbors. We employ classical mechanics to mimic the elastic properties of RBCs with a biconcave disk composed of a mesh of spring-like particles. The fluid particle method allows for modeling the plasma as a particle ensemble, where each particle represents a collective unit of fluid, which is defined by its mass, moment of inertia, and translational and angular momenta. Realistic behavior of blood cells is modeled by considering RBCs and plasma flowing through capillaries of various shapes. Three types of vessels are employed: a pipe with a choking point, a curved vessel and bifurcating capillaries. There is a strong tendency to produce RBC clusters in capillaries. The choking points and other irregularities in geometry influence both the flow and RBC shapes, considerably increasing the clotting effect. We also discuss other clotting factors coming from the physical properties of blood, such as the viscosity of the plasma and the elasticity of the RBCs. Modeling has been carried out with adequate resolution by using 1 to 10 million particles. Discrete particle simulations open a new pathway for modeling the dynamics of complex, viscoelastic fluids at the microscale, where both liquid and solid phases are treated with discrete particles. Figure: a snapshot from a fluid particle simulation of RBCs flowing along a curved capillary; the red color corresponds to the highest velocity. We can observe aggregation of RBCs at places with the most stagnant plasma flow.
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
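The discretize-and-solve-all-roots strategy can be miniaturized to a two-bus toy: sweep a discretized injection range, find every solution of the scalar power flow equation P = (V^2/X) sin(theta) at each grid point, and keep those satisfying an angle inequality. The full algorithm uses homotopy continuation on the complete AC equations plus relaxation-based pruning; this sketch (with illustrative parameter values) only mirrors the outer structure.

```python
import numpy as np

def feasible_space(P_values, X=0.5, V=1.0, theta_max=1.0):
    """Collect all (P, theta) power-flow solutions of the two-bus toy
    model P = (V**2/X)*sin(theta) over a discretized injection range,
    keeping those satisfying |theta| <= theta_max. Miniature analogue
    of the discretize-and-solve strategy in the abstract."""
    pts = []
    for P in P_values:
        s = P*X/V**2
        if abs(s) > 1.0:
            continue                     # no real solution: infeasible point
        # both power-flow branches: theta and pi - theta
        for theta in (np.arcsin(s), np.pi - np.arcsin(s)):
            if abs(theta) <= theta_max:
                pts.append((P, theta))
    return pts
```

Even in this toy, some grid points contribute two solutions, one, or none, which is why all roots must be tracked before the inequality constraints prune the set.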
NASA Astrophysics Data System (ADS)
Vogman, Genia
Plasmas are made up of charged particles whose short-range and long-range interactions give rise to complex behavior that can be difficult to fully characterize experimentally. One of the most complete theoretical descriptions of a plasma is that of kinetic theory, which treats each particle species as a probability distribution function in a six-dimensional position-velocity phase space. Drawing on statistical mechanics, these distribution functions mathematically represent a system of interacting particles without tracking individual ions and electrons. The evolution of the distribution function(s) is governed by the Boltzmann equation coupled to Maxwell's equations, which together describe the dynamics of the plasma and the associated electromagnetic fields. When collisions can be neglected, the Boltzmann equation is reduced to the Vlasov equation. High-fidelity simulation of the rich physics in even a subset of the full six-dimensional phase space calls for low-noise high-accuracy numerical methods. To that end, this dissertation investigates a fourth-order finite-volume discretization of the Vlasov-Maxwell equation system, and addresses some of the fundamental challenges associated with applying these types of computationally intensive enhanced-accuracy numerical methods to phase space simulations. The governing equations of kinetic theory are described in detail, and their conservation-law weak form is derived for Cartesian and cylindrical phase space coordinates. This formulation is well known when it comes to Cartesian geometries, as it is used in finite-volume and finite-element discretizations to guarantee local conservation for numerical solutions. By contrast, the conservation-law weak form of the Vlasov equation in cylindrical phase space coordinates is largely unexplored, and to the author's knowledge has never previously been solved numerically. 
The methods described in this dissertation for simulating plasmas in cylindrical phase space coordinates thus represent a new development in the field of computational plasma physics. A fourth-order finite-volume method for solving the Vlasov-Maxwell equation system is presented first for Cartesian and then for cylindrical phase space coordinates. Special attention is given to the treatment of the discrete primary variables and to the quadrature rule for evaluating the surface and line integrals that appear in the governing equations. The finite-volume treatment of conducting wall and axis boundaries is particularly nuanced when it comes to phase space coordinates, and is described in detail. In addition to the mechanics of each part of the finite-volume discretization in the two different coordinate systems, the complete algorithm is also presented. The Cartesian coordinate discretization is applied to several well-known test problems. Since even linear analysis of kinetic theory governing equations is complicated on account of velocity being an independent coordinate, few analytic or semi-analytic predictions exist. Benchmarks are particularly scarce for configurations that have magnetic fields and involve more than two phase space dimensions. Ensuring that simulations are true to the physics thus presents a difficulty in the development of robust numerical methods. The research described in this dissertation addresses this challenge through the development of more complete physics-based benchmarks based on the Dory-Guest-Harris instability. The instability is a special case of perpendicularly-propagating kinetic electrostatic waves in a warm uniformly magnetized plasma. A complete derivation of the closed-form linear theory dispersion relation for the instability is presented. The electric field growth rates and oscillation frequencies specified by the dispersion relation provide concrete measures against which simulation results can be quantitatively compared.
Furthermore, a specialized form of perturbation is shown to strongly excite the fastest growing mode. The fourth-order finite-volume algorithm is benchmarked against the instability, and is demonstrated to have good convergence properties and close agreement with theoretical growth rate and oscillation frequency predictions. The Dory-Guest-Harris instability benchmark extends the scope of standard test problems by providing a substantive means of validating continuum kinetic simulations of warm magnetized plasmas in higher-dimensional 3D (x, vx, vy) phase space. The linear theory analysis, initial conditions, algorithm description, and comparisons between theoretical predictions and simulation results are presented. The cylindrical coordinate finite-volume discretization is applied to model axisymmetric systems. Since mitigating the prohibitive computational cost of simulating six dimensions is another challenge in phase space simulations, the development of a robust means of exploiting symmetry is a major advance when it comes to numerically solving the Vlasov-Maxwell equation system. The discretization is applied to a uniform distribution function to assess the nature of the singularity at the axis, and is demonstrated to converge at fourth-order accuracy. The numerical method is then applied to simulate electrostatic ion confinement in an axisymmetric Z-pinch configuration. To the author's knowledge this presents the first instance of a conservative finite-volume discretization of the cylindrical coordinate Vlasov equation. The computational framework for the Vlasov-Maxwell solver is described, and an outlook for future research is presented.
Computing anticipatory systems with incursion and hyperincursion
NASA Astrophysics Data System (ADS)
Dubois, Daniel M.
1998-07-01
An anticipatory system is a system which contains a model of itself and/or of its environment in view of computing its present state as a function of the prediction of the model. With the concepts of incursion and hyperincursion, anticipatory discrete systems can be modelled, simulated and controlled. By definition an incursion, an inclusive or implicit recursion, can be written as: x(t+1)=F[…,x(t-1),x(t),x(t+1),…] where the value of a variable x(t+1) at time t+1 is a function of this variable at past, present and future times. This is an extension of recursion. Hyperincursion is an incursion with multiple solutions. For example, chaos in the Pearl-Verhulst map model: x(t+1)=a.x(t).[1-x(t)] is controlled by the following anticipatory incursive model: x(t+1)=a.x(t).[1-x(t+1)], which corresponds to the differential anticipatory equation: dx(t)/dt=a.x(t).[1-x(t+1)]-x(t). The main part of this paper deals with the discretisation of differential equation systems of linear and non-linear oscillators. The non-linear oscillator is based on the Lotka-Volterra equations model. The discretisation is made by incursion. The incursive discrete equation system gives the same stability condition as the original differential equations, without numerical instabilities. The linearisation of the incursive discrete non-linear Lotka-Volterra equation system gives rise to the classical harmonic oscillator. The incursive discretisation of the linear oscillator amounts to defining backward and forward discrete derivatives. A generalized complex derivative is then considered and applied to the harmonic oscillator. Non-locality seems to be a property of anticipatory systems. With some mathematical assumptions, the Schrödinger quantum equation is derived for a particle in a uniform potential. Finally a hyperincursive system is given in the case of a neural stack memory.
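The incursive control of the Pearl-Verhulst map can be made concrete in a few lines. The sketch below is our own illustration (variable names are ours): solving x(t+1)=a.x(t).[1-x(t+1)] algebraically for x(t+1) gives x(t+1)=a.x(t)/(1+a.x(t)), which converges to a fixed point even at a=4, where the ordinary recursion is chaotic.

```python
# Incursive control of the Pearl-Verhulst (logistic) map, after Dubois.
# Classic map:  x(t+1) = a*x(t)*(1 - x(t))    -> chaotic for a = 4
# Incursive:    x(t+1) = a*x(t)*(1 - x(t+1))  -> solving for x(t+1):
#               x(t+1) = a*x(t) / (1 + a*x(t))

def logistic(x, a):
    return a * x * (1.0 - x)

def incursive(x, a):
    # x(t+1) appears on both sides; solving for it removes the chaos
    return a * x / (1.0 + a * x)

a, x_rec, x_inc = 4.0, 0.3, 0.3
for _ in range(100):
    x_rec = logistic(x_rec, a)
    x_inc = incursive(x_inc, a)

# The incursive iteration settles on the fixed point x* = (a-1)/a = 0.75,
# while the ordinary recursion wanders chaotically in (0, 1).
print(round(x_inc, 6))
```

The fixed point x* = (a-1)/a is the same equilibrium as that of the differential logistic equation, which is the sense in which the incursive discretisation preserves the stability of the continuous system.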
Particle Filter Based Tracking in a Detection Sparse Discrete Event Simulation Environment
2007-03-01
Wavepacket dynamics in a family of nonlinear Fibonacci lattices
NASA Astrophysics Data System (ADS)
Pandey, Mohit; Campbell, David
We examine the dynamics of a quantum particle in a variety of one-dimensional Fibonacci lattices (which are shifted from each other) in the presence of interactions. To describe the nonlinear interactions we employ the discrete nonlinear Schrödinger (DNLS) equation. Using a single-site localized state in the lattice as our initial condition, we evolve the wavepacket numerically using the DNLS equation. We compute the root-mean-square width of the wavepacket as it evolves in time and show how the ``global location'' of the initial wavepacket affects the dynamics. We compare and contrast our results with earlier studies of related but distinct models.
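The evolution described above can be sketched numerically. The following is an illustrative integration (our own, not the authors' code) of the DNLS equation on a uniform lattice with a single-site initial condition; a Fibonacci lattice would additionally modulate the on-site energies, which is omitted here.

```python
import numpy as np

# Sketch: wavepacket spreading under the discrete nonlinear Schroedinger
# (DNLS) equation
#   i dpsi_n/dt = -(psi_{n+1} + psi_{n-1}) + chi*|psi_n|^2 * psi_n
# on a 1-D lattice, starting from a single-site excitation.

N, chi, dt, steps = 101, 1.0, 0.01, 500
psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0                      # single-site initial condition
sites = np.arange(N)

def rhs(psi):
    hop = np.roll(psi, 1) + np.roll(psi, -1)   # nearest-neighbour hopping
    return -1j * (-hop + chi * np.abs(psi) ** 2 * psi)

for _ in range(steps):                 # classical 4th-order Runge-Kutta
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    psi = psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# root-mean-square width of the probability distribution |psi_n|^2
prob = np.abs(psi) ** 2
center = np.sum(sites * prob) / np.sum(prob)
rms_width = np.sqrt(np.sum((sites - center) ** 2 * prob) / np.sum(prob))
```

A useful sanity check on any such integration is that the norm, a conserved quantity of the DNLS equation, stays close to 1 over the run.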
NASA Astrophysics Data System (ADS)
Philpott, Lydia
2010-09-01
Central to the development of any new theory is the investigation of the observable consequences of the theory. In the search for quantum gravity, research in phenomenology has been dominated by models violating Lorentz invariance (LI) -- despite there being, at present, no evidence that LI is violated. Causal set theory is an LI candidate theory of QG that seeks not to quantise gravity as such, but rather to develop a new understanding of the universe from which both GR and QM could arise separately. The key hypothesis is that spacetime is a discrete partial order: a set of events where the partial ordering is the physical causal ordering between the events. This thesis investigates Lorentz invariant QG phenomenology motivated by the causal set approach. Massive particles propagating in a discrete spacetime will experience diffusion in both position and momentum in proper time. This thesis considers this idea in more depth, providing a rigorous derivation of the diffusion equation in terms of observable cosmic time. The diffusion behaviour does not depend on any particular underlying particle model. Simulations of three different models are conducted, revealing behaviour that matches the diffusion equation despite limitations on the size of causal set simulated. The effect of spacetime discreteness on the behaviour of massless particles is also investigated. Diffusion equations in both affine time and cosmic time are derived, and it is found that massless particles undergo diffusion and drift in energy. Constraints are placed on the magnitudes of the drift and diffusion parameters by considering the blackbody nature of the CMB. Spacetime discreteness also has a potentially observable effect on photon polarisation. For linearly polarised photons, underlying discreteness is found to cause a rotation in polarisation angle and a suppression in overall polarisation.
On discrete control of nonlinear systems with applications to robotics
NASA Technical Reports Server (NTRS)
Eslami, Mansour
1989-01-01
Much progress has been reported in the areas of modeling and control of nonlinear dynamic systems in a continuous-time framework. From an implementation point of view, however, it is essential to study these nonlinear systems directly in a discrete setting that is amenable to interfacing with digital computers. But developing discrete models and discrete controllers for a nonlinear system such as a robot is a nontrivial task. A robot is also inherently a variable-inertia dynamic system involving additional complications. Not only must the computer-oriented models of these systems satisfy the usual requirements for such models, but they must also be compatible with the inherent capabilities of computers and must preserve the fundamental physical characteristics of continuous-time systems, such as the conservation of energy and/or momentum. Preliminary issues regarding discrete systems in general and discrete models of a typical industrial robot that is developed with full consideration of the principle of conservation of energy are presented. Some research on the pertinent tactile information processing is reviewed. Finally, system control methods and how to integrate these issues in order to complete the task of discrete control of a robot manipulator are also reviewed.
Zhang, Qing; Beard, Daniel A; Schlick, Tamar
2003-12-01
Salt-mediated electrostatic interactions play an essential role in biomolecular structures and dynamics. Because macromolecular systems modeled at atomic resolution contain thousands of solute atoms, the electrostatic computations constitute an expensive part of the force and energy calculations. Implicit solvent models are one way to simplify the model and associated calculations, but they are generally used in combination with standard atomic models for the solute. To approximate electrostatic interactions in models on the polymer level (e.g., supercoiled DNA) that are simulated over long times (e.g., milliseconds) using Brownian dynamics, Beard and Schlick have developed the DiSCO (Discrete Surface Charge Optimization) algorithm. DiSCO represents a macromolecular complex by a few hundred discrete charges on a surface enclosing the system, modeled by the Debye-Hückel (screened Coulombic) approximation to the Poisson-Boltzmann equation, and treats the salt solution as a continuum. DiSCO can represent the nucleosome core particle (>12,000 atoms), for example, by 353 discrete surface charges distributed on the surfaces of a large disk for the nucleosome core particle and a slender cylinder for the histone tail; the charges are optimized with respect to the Poisson-Boltzmann solution for the electric field, yielding a residual of approximately 5.5%. Because regular surfaces enclosing macromolecules are not sufficiently general and may be suboptimal for certain systems, we develop a general method to construct irregular models tailored to the geometry of macromolecules. We also compare charge optimization based on both the electric field and electrostatic potential refinement. Results indicate that irregular surfaces can lead to a more accurate approximation (lower residuals), and the refinement in terms of the electric field is more robust.
We also show that surface smoothing for irregular models is important, that the charge optimization (by the TNPACK minimizer) is efficient and does not depend on the initial assigned values, and that the residual is acceptable when the distance to the model surface is close to, or larger than, the Debye length. We illustrate applications of DiSCO's model-building procedure to chromatin folding and supercoiled DNA bound to Hin and Fis proteins. DiSCO is generally applicable to other interesting macromolecular systems for which mesoscale models are appropriate, to yield a resolution between the all-atom representation and the polymer level. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 2063-2074, 2003
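The screened Coulombic approximation underlying DiSCO is simple to illustrate. The sketch below is not DiSCO itself (names, units, and charge positions are ours): it evaluates the Debye-Hückel potential of a handful of discrete charges and checks that screening reduces the potential relative to the bare Coulomb case (kappa = 0).

```python
import numpy as np

# Debye-Hueckel (screened Coulomb) potential of discrete point charges:
#   phi(r) = prefac * sum_i q_i * exp(-kappa * |r - r_i|) / |r - r_i|
# where kappa is the inverse Debye length. The dielectric prefactor is
# folded into a single constant for this illustration.

def debye_huckel_potential(field_pts, charge_pts, charges, kappa, prefac=1.0):
    """Potential at each field point due to discrete screened charges."""
    d = np.linalg.norm(field_pts[:, None, :] - charge_pts[None, :, :], axis=2)
    return prefac * np.sum(charges * np.exp(-kappa * d) / d, axis=1)

# two unit charges standing in for surface charges, one probe point
charge_pts = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
charges = np.array([1.0, 1.0])
probe = np.array([[0.0, 3.0, 0.0]])

phi_screened = debye_huckel_potential(probe, charge_pts, charges, kappa=1.0)
phi_coulomb = debye_huckel_potential(probe, charge_pts, charges, kappa=0.0)
print(phi_screened[0] < phi_coulomb[0])
```

In DiSCO the charge magnitudes (here fixed at 1.0) are the optimization variables, adjusted so that the summed screened potentials reproduce the Poisson-Boltzmann electric field outside the surface.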
NASA Astrophysics Data System (ADS)
Ovaysi, S.; Piri, M.
2009-12-01
We present a three-dimensional fully dynamic parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input, where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique. We modify this technique in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account viscous pressure drop in addition to the gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in the neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors.
Excellent linear scalability is obtained for sample B, which has a more uniform distribution of solid particles, leading to superior load balancing. The model is then used to simulate fluid flow directly in REV-size three-dimensional x-ray images of a naturally occurring sandstone. We analyze the quality and consistency of the predicted flow behavior and calculate absolute permeability, which compares well with the network modeling and Lattice-Boltzmann permeabilities reported in the literature for the same sandstone. We show that the model conserves mass very well and is computationally stable even in very narrow fluid conduits. The transient- and steady-state fluid flow patterns are presented, as well as the steady-state flow rates used to compute absolute permeability. Furthermore, we discuss the vital role of our adaptive particle resolution scheme in preserving the original pore connectivity of the samples and their narrow channels through splitting and merging of fluid particles.
Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M
2016-05-01
The different discrete transform techniques such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and the MFCC technique. A linear support vector machine has been used as the classifier in this article. Experimental results conclude that the proposed CAD system using the MFCC technique for AD recognition greatly improves system performance with a small number of significant extracted features, as compared with the CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
Investigation into discretization methods of the six-parameter Iwan model
NASA Astrophysics Data System (ADS)
Li, Yikun; Hao, Zhiming; Feng, Jiaquan; Zhang, Dingguo
2017-02-01
The Iwan model is widely applied for the purpose of describing nonlinear mechanisms of jointed structures. In this paper, parameter identification procedures for the six-parameter Iwan model, based on joint experiments with different preload techniques, are performed. Four kinds of discretization methods deduced from the stiffness equation of the six-parameter Iwan model are provided, which can be used to discretize the integral-form Iwan model into a sum of finite Jenkins elements. In finite element simulation, the influences of the discretization methods and the number of Jenkins elements on computing accuracy are discussed. Simulation results indicate that a higher accuracy can be obtained with larger numbers of Jenkins elements. It is also shown that, compared with the other three kinds of discretization methods, the geometric series discretization based on stiffness provides the highest computing accuracy.
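The discretization into Jenkins elements can be sketched generically. The code below is illustrative only: the paper's six-parameter model prescribes specific stiffnesses and slip forces from its distribution density, which are replaced here with arbitrary values. Each Jenkins element is a spring in series with a Coulomb slider, and the discretized Iwan force is the sum over N such elements in parallel.

```python
# Generic sketch: an Iwan model discretized into N parallel Jenkins
# elements (spring k_i in series with a Coulomb slider of strength f_i),
# evaluated quasi-statically over a displacement history.

def jenkins_force(x_history, stiffness, slip_forces):
    """Total hysteretic force after applying a displacement history."""
    n = len(stiffness)
    slider = [0.0] * n                 # current slider positions
    force = 0.0
    for x in x_history:
        force = 0.0
        for i in range(n):
            f = stiffness[i] * (x - slider[i])
            if abs(f) > slip_forces[i]:        # element i slips
                f = slip_forces[i] * (1.0 if f > 0 else -1.0)
                slider[i] = x - f / stiffness[i]
            force += f
    return force

stiffness = [1.0, 1.0, 1.0, 1.0]       # illustrative k_i
slip_forces = [0.1, 0.2, 0.3, 0.4]     # illustrative f_i (e.g. geometric
                                       # or linear spacing in practice)
# below the smallest slip force the joint responds linearly: F = (sum k_i)*x
f_small = jenkins_force([0.02], stiffness, slip_forces)
print(f_small)
```

Micro-slip appears as successive elements exceed their slip forces; once every element slips (macro-slip), the force saturates at the sum of the f_i, which is the qualitative behaviour the Iwan model is designed to capture.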
Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Jinxing; Pu Zuyin; Xie Lun
Charged particle dynamics in the magnetosphere is multiscale in both time and space; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time, compared with standard integrators, such as the standard and adaptive fourth order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate the particles' orbits for an arbitrarily long simulation time with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method can give the accurate single particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.
Parallel and Distributed Computing Combinatorial Algorithms
1993-10-01
Chen, Shaojiang; Sorge, Lukas P; Seo, Dong-Kyun
2017-12-07
We report the synthesis and characterization of hydroxycancrinite zeolite nanorods by a simple hydrothermal treatment of aluminosilicate hydrogels at high concentrations of precursors without the use of structure-directing agents. Transmission electron microscopy (TEM) analysis reveals that cancrinite nanorods, with lengths of 200-800 nm and diameters of 30-50 nm, exhibit a hexagonal morphology and are elongated along the crystallographic c direction. The powder X-ray diffraction (PXRD), Fourier transform infrared (FT-IR) and TEM studies revealed sequential events of hydrogel formation, the formation of aggregated sodalite nuclei, the conversion of sodalite to cancrinite and finally the growth of cancrinite nanorods into discrete particles. The aqueous dispersion of the discrete nanorods displays a good stability between pH 6-12 with the zeta potential no greater than -30 mV. The synthesis is unique in that the initial aggregated nanocrystals do not grow into microsized particles (aggregative growth) but into discrete nanorods. Our findings demonstrate an unconventional possibility that discrete zeolite nanocrystals could be produced from a concentrated hydrogel.
The discrete hungry Lotka Volterra system and a new algorithm for computing matrix eigenvalues
NASA Astrophysics Data System (ADS)
Fukuda, Akiko; Ishiwata, Emiko; Iwasaki, Masashi; Nakamura, Yoshimasa
2009-01-01
The discrete hungry Lotka-Volterra (dhLV) system is a generalization of the discrete Lotka-Volterra (dLV) system, which stands for a prey-predator model in mathematical biology. In this paper, we show that (1) some invariants exist which are expressed in terms of the dhLV variables and are independent of the discrete time and (2) a dhLV variable converges to some positive constant or zero as the discrete time becomes sufficiently large. A certain characteristic polynomial is then factorized with the help of the dhLV system. The asymptotic behaviour of the dhLV system enables us to design an algorithm for computing complex eigenvalues of a certain band matrix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.
Hardman, David; Doyle, Barry J; Semple, Scott I K; Richards, Jennifer M J; Newby, David E; Easson, William J; Hoskins, Peter R
2013-10-01
In abdominal aortic aneurysm disease, the aortic wall is exposed to intense biological activity involving inflammation and matrix metalloproteinase-mediated degradation of the extracellular matrix. These processes are orchestrated by monocytes and rather than affecting the aorta uniformly, damage and weaken focal areas of the wall leaving it vulnerable to rupture. This study attempts to model numerically the deposition of monocytes using large eddy simulation, discrete phase modelling and near-wall particle residence time. The model was first applied to idealised aneurysms and then to three patient-specific lumen geometries using three-component inlet velocities derived from phase-contrast magnetic resonance imaging. The use of a novel, variable wall shear stress-limiter based on previous experimental data significantly improved the results. Simulations identified a critical diameter (1.8 times the inlet diameter) beyond which significant monocyte deposition is expected to occur. Monocyte adhesion occurred proximally in smaller abdominal aortic aneurysms and distally as the sac expands. The near-wall particle residence time observed in each of the patient-specific models was markedly different. Discrete hotspots of monocyte residence time were detected, suggesting that the monocyte infiltration responsible for the breakdown of the abdominal aortic aneurysm wall occurs heterogeneously. Peak monocyte residence time was found to increase with aneurysm sac size. Further work addressing certain limitations is needed in a larger cohort to determine clinical significance.
The Relationship between Self-Assembly and Conformal Mappings
NASA Astrophysics Data System (ADS)
Duque, Carlos; Santangelo, Christian
The isotropic growth of a thin sheet has been used as a way to generate programmed shapes through controlled buckling. We discuss how conformal mappings, which are transformations that locally preserve angles, provide a way to quantify the area growth needed to produce a particular shape. A discrete version of the conformal map can be constructed from circle packings, which are maps between packings of circles whose contact network is preserved. This provides a link to the self-assembly of particles on curved surfaces. We performed simulations of attractive particles on a curved surface using molecular dynamics. The resulting particle configurations were used to generate the corresponding discrete conformal map, allowing us to quantify the degree of area distortion required to produce a particular shape by finding particle configurations that minimize the area distortion.
Two-walker discrete-time quantum walks on the line with percolation
NASA Astrophysics Data System (ADS)
Rigovacca, L.; di Franco, C.
2016-02-01
One goal in quantum-walk research is the exploitation of the intrinsic quantum nature of multiple walkers, in order to achieve the full computational power of the model. Here we study the behaviour of two non-interacting particles performing a quantum walk on the line when the possibility of lattice imperfections, in the form of missing links, is considered. We investigate two regimes, static and dynamic percolation, that correspond to different time scales for the evolution of the imperfections with respect to the quantum-walk one. By studying the qualitative behaviour of three two-particle quantities for different probabilities of having missing bonds, we argue that the chosen symmetry under particle exchange of the input state strongly affects the output of the walk, even in noisy and highly non-ideal regimes. We provide evidence against the possibility of gathering information about the walkers' indistinguishability from the observation of bunching phenomena in the output distribution, in all those situations that require a comparison between averaged quantities. Although the spread of the walk is not substantially changed by the addition of a second particle, we show that the presence of multiple walkers can be beneficial for a procedure to estimate the probability of having a broken link.
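The building block of such studies, a single-walker discrete-time quantum walk on the line, can be sketched as follows. This is our own illustration of the ideal (no missing links, one walker) case; the paper's setting adds a second walker and randomly broken bonds on top of it.

```python
import numpy as np

# Discrete-time quantum walk on the line with a Hadamard coin.
# State psi[x, c] carries a position index x and a two-level coin index c.
# Each step: apply the coin on every site, then shift coin-0 amplitudes
# right and coin-1 amplitudes left.

steps = 50
size = 2 * steps + 1
psi = np.zeros((size, 2), dtype=complex)
psi[steps, 0] = 1.0 / np.sqrt(2)       # symmetric initial coin state
psi[steps, 1] = 1j / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

for _ in range(steps):
    psi = psi @ H.T                    # coin toss on every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]       # coin 0 moves right
    shifted[:-1, 1] = psi[1:, 1]       # coin 1 moves left
    psi = shifted

prob = np.sum(np.abs(psi) ** 2, axis=1)
x = np.arange(size) - steps
std = np.sqrt(np.sum(prob * x ** 2) - np.sum(prob * x) ** 2)
```

Unlike a classical random walk, whose standard deviation grows as the square root of the number of steps, the quantum walk spreads ballistically (std proportional to the step count), which is the behaviour the percolation noise in the paper degrades.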
Incompressible SPH method for simulating Newtonian and non-Newtonian flows with a free surface
NASA Astrophysics Data System (ADS)
Shao, Songdong; Lo, Edmond Y. M.
An incompressible smoothed particle hydrodynamics (SPH) method is presented to simulate Newtonian and non-Newtonian flows with free surfaces. The basic equations solved are the incompressible mass conservation and Navier-Stokes equations. The method uses prediction-correction fractional steps with the temporal velocity field integrated forward in time without enforcing incompressibility in the prediction step. The resulting deviation of particle density is then implicitly projected onto a divergence-free space to satisfy incompressibility through a pressure Poisson equation derived from an approximate pressure projection. Various SPH formulations are employed in the discretization of the relevant gradient, divergence and Laplacian terms. Free surfaces are identified by the particles whose density is below a set point. Wall boundaries are represented by particles whose positions are fixed. The SPH formulation is also extended to non-Newtonian flows and demonstrated using the Cross rheological model. The incompressible SPH method is tested by typical 2-D dam-break problems in which both water and fluid mud are considered. The computations are in good agreement with available experimental data. The different flow features between Newtonian and non-Newtonian flows after the dam-break are discussed.
METHODOLOGY FOR MEASURING PM 2.5 SEPARATOR CHARACTERISTICS USING AN AEROSIZER
A method is presented that enables the measurement of the particle size separation characteristics of an inertial separator in a rapid fashion. Overall penetration is determined for discrete particle sizes using an Aerosizer (Model LD, TSI, Incorporated, Particle Instruments/Am...
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, the benchmarked test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
NASA Astrophysics Data System (ADS)
Diggs, Angela; Balachandar, Sivaramakrishnan
2015-06-01
The present work addresses the numerical methods required for particle-gas and particle-particle interactions in Eulerian-Lagrangian simulations of multiphase flow. Local volume fraction as seen by each particle is the quantity of foremost importance in modeling and evaluating such interactions. We consider a general multiphase flow with a distribution of particles inside a fluid flow discretized on an Eulerian grid. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the flow. In Eulerian Projection (EP) methods, the volume fraction is first obtained within each cell as an Eulerian quantity and then interpolated to each particle. In Lagrangian Projection (LP) methods, the particle volume fraction is obtained at each particle and then projected onto the Eulerian grid. Traditionally, EP methods are used in multiphase flow, but sub-grid resolution can be obtained through use of LP methods. By evaluating the total error and its components we compare the performance of EP and LP methods. The standard von Neumann error analysis technique has been adapted for rigorous evaluation of rate of convergence. The methods presented can be extended to obtain accurate field representations of other Lagrangian quantities. Most importantly, we will show that such careful attention to numerical methodologies is needed in order to capture complex shock interaction with a bed of particles. Supported by U.S. Department of Defense SMART Program and the U.S. Department of Energy PSAAP-II program under Contract No. DE-NA0002378.
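The EP route can be sketched in one dimension (illustrative code under our own assumptions: a periodic, cell-centered grid with linear weights; the LP route would reverse the project-then-interpolate order):

```python
# Sketch (illustrative, not the authors' code) of the Eulerian-Projection (EP)
# route to the particle volume fraction on a 1-D periodic grid: deposit the
# particle volumes onto cell centers first, then interpolate back to particles.

import numpy as np

np.random.seed(0)
L, ncell = 1.0, 10
dx = L / ncell
xp = np.random.rand(50) * L          # particle positions
vp = np.full(50, 1e-3)               # particle volumes

def deposit(xp, vp):
    """Project particle volumes onto cell centers with linear (CIC) weights."""
    phi = np.zeros(ncell)
    s = xp / dx - 0.5                # position in cell-center coordinates
    i0 = np.floor(s).astype(int)
    w1 = s - i0
    for i, w, v in zip(i0, w1, vp):
        phi[i % ncell] += (1 - w) * v / dx
        phi[(i + 1) % ncell] += w * v / dx
    return phi

def interp(phi, xp):
    """Interpolate a cell-centered field back to particle positions."""
    s = xp / dx - 0.5
    i0 = np.floor(s).astype(int)
    w1 = s - i0
    return (1 - w1) * phi[i0 % ncell] + w1 * phi[(i0 + 1) % ncell]

phi_cells = deposit(xp, vp)                 # Eulerian volume fraction
phi_at_particles_EP = interp(phi_cells, xp) # same quantity seen by particles
```

Either ordering conserves the total particle volume; the two differ in where sub-grid information is retained, which is the comparison the abstract describes.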
Comparison of algorithms for computing the two-dimensional discrete Hartley transform
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Burton, John C.; Miller, Keith W.
1989-01-01
Three methods have been described for computing the two-dimensional discrete Hartley transform. Two of these employ a separable transform; the third, the vector-radix algorithm, does not require separability. In-place computation of the vector-radix method is described. Operation counts and execution times indicate that the vector-radix method is the fastest.
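For real input the 2-D DHT can also be obtained from a complex FFT, since Re(F) − Im(F) reproduces the cas = cos + sin kernel; the following sketch (ours, not the paper's algorithms) checks this against the brute-force definition:

```python
# Sketch (assumption: not the paper's code) of the 2-D discrete Hartley
# transform. For real input,  H(u,v) = Re F(u,v) - Im F(u,v),  where F is the
# 2-D DFT; we verify this against the O(N^4) cas-kernel definition.

import numpy as np

def dht2_fft(x):
    """2-D DHT via a complex FFT."""
    F = np.fft.fft2(x)
    return F.real - F.imag

def dht2_direct(x):
    """Brute-force 2-D DHT from the cas(t) = cos(t) + sin(t) kernel."""
    M, N = x.shape
    H = np.zeros((M, N))
    for u in range(M):
        for v in range(N):
            for m in range(M):
                for n in range(N):
                    t = 2 * np.pi * (u * m / M + v * n / N)
                    H[u, v] += x[m, n] * (np.cos(t) + np.sin(t))
    return H

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5))
```

The vector-radix algorithm the paper favors works directly on this non-separable cas kernel rather than going through a complex FFT.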
Hodge numbers for all CICY quotients
NASA Astrophysics Data System (ADS)
Constantin, Andrei; Gray, James; Lukas, Andre
2017-01-01
We present a general method for computing Hodge numbers for Calabi-Yau manifolds realised as discrete quotients of complete intersections in products of projective spaces. The method relies on the computation of equivariant cohomologies and is illustrated for several explicit examples. In this way, we compute the Hodge numbers for all discrete quotients obtained in Braun's classification [1].
PLUME-MoM 1.0: a new 1-D model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-05-01
In this paper a new mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state 1-D dynamics of the plume in a 3-D coordinate system, accounting for continuous variability in the particle distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes, ranging from a few microns up to several centimeters or more. Proper description of such a multiparticle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of the source conditions of ash dispersal models. The new model is based on the method of moments, which allows description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of properties of the continuous size distribution of the particles. This is achieved by formulating the fundamental transport equations for the multiparticle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture into N discrete phases, shows that the new model obtains the same results at a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables investigation of the response of four key output variables (mean and standard deviation (SD) of the grain-size distribution at the top of the plume, plume height, and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and SD) characterizing the pyroclastic mixture at the base of the plume.
Results show that, for the range of parameters investigated, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution.
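The moments-based idea can be illustrated with a toy example (our own assumptions: a Gaussian number density over the Krumbein phi scale, with midpoint quadrature standing in for the moment closure):

```python
# Sketch (illustrative, assumptions ours): working with moments of a
# continuous grain-size distribution instead of tracking N discrete phases.
# The number density over the Krumbein scale phi is Gaussian with mean mu and
# standard deviation sd; the k-th moment is  mu_k = ∫ phi^k n(phi) dphi.

import math

mu, sd = 1.0, 0.8       # hypothetical grain-size parameters (phi units)

def n(phi):
    return math.exp(-0.5 * ((phi - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def moment(k, lo=-6.0, hi=8.0, nq=4000):
    """k-th moment of n(phi) by the midpoint rule."""
    h = (hi - lo) / nq
    return sum(((lo + (i + 0.5) * h) ** k) * n(lo + (i + 0.5) * h)
               for i in range(nq)) * h

m0, m1, m2 = moment(0), moment(1), moment(2)
mean = m1 / m0              # recovers mu
var = m2 / m0 - mean ** 2   # recovers sd**2
```

Transporting a handful of such moments along the plume replaces the N transport equations of the discrete-phase approach, which is the source of the computational saving reported in the paper.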
RNA packaging of MRFV virus-like particles: The interplay between RNA pools and capsid coat protein
USDA-ARS?s Scientific Manuscript database
Virus-like particles (VLPs) can be produced through self-assembly of capsid protein (CP) into particles with discrete shapes and sizes and containing different types of RNA molecules. The general principle that governs particle assembly and RNA packaging is determined by unique interactions between ...
Blocking Mechanism Study of Self-Compacting Concrete Based on Discrete Element Method
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Li, Zhida; Zhang, Zhihua
2017-11-01
In order to study the factors influencing the blocking mechanism of Self-Compacting Concrete (SCC), Roussel's granular blocking model was verified and extended by establishing a discrete element model of SCC. The influence of different parameters on the filling capacity and blocking mechanism of SCC was also investigated. The results showed that it is feasible to simulate the blocking mechanism of SCC using the Discrete Element Method (DEM). The passing ability of pebble aggregate was superior to that of gravel aggregate, and the passing ability of hexahedral particles was greater than that of tetrahedral particles, while the tetrahedral particle simulations were closer to the actual situation. The flowability of SCC was another significant factor affecting the passing ability: as the flow increased, so did the passing ability. A correction coefficient λ for the steel arrangement (channel section shape) and the flow rate γ were introduced into the blocking model; the value of λ was 0.90-0.95 and the maximum casting rate was 7.8 L/min.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tingwen; Rabha, Swapna; Verma, Vikrant
Geldart Group A particles are of great importance in various chemical processes because of advantages such as ease of fluidization, large surface area, and many other unique properties. As widely reported in the literature, it is very challenging to model the fluidization behavior of such particles. In this study, a pseudo-2D experimental column with a width of 5 cm, a height of 45 cm, and a depth of 0.32 cm was developed for detailed measurements of fluidized bed hydrodynamics of fine particles to facilitate the validation of computational fluid dynamics (CFD) modeling. The hydrodynamics of sieved FCC particles (Sauter mean diameter of 148 µm and density of 1300 kg/m³) and NETL-32D sorbents (Sauter mean diameter of 100 µm and density of 480 kg/m³) were investigated, mainly through visualization with a high-speed camera. Numerical simulations were then conducted using NETL's open source code MFIX-DEM. Both qualitative and quantitative information, including bed expansion, bubble characteristics, and solid movement, was compared between the numerical simulations and the experimental measurements. Furthermore, the cohesive van der Waals force was incorporated in the MFIX-DEM simulations and its influence on the flow hydrodynamics was studied.
Li, Tingwen; Rabha, Swapna; Verma, Vikrant; ...
2017-09-19
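The Hamaker sphere-sphere form is one common way to add such a cohesive van der Waals force in DEM; below is a sketch with illustrative parameter values (not those used in the study):

```python
# Sketch of a cohesive van der Waals sphere-sphere force commonly added to
# DEM contact models (Hamaker form); parameter values are illustrative only.

def vdw_force(r1, r2, h, hamaker=1e-19, h_min=1e-9):
    """Attractive van der Waals force (N) between spheres of radii r1, r2 (m)
    at surface gap h (m):  F = A * R_eff / (12 h^2),  with the gap floored at
    h_min to avoid the contact singularity."""
    r_eff = r1 * r2 / (r1 + r2)
    h = max(h, h_min)
    return hamaker * r_eff / (12.0 * h ** 2)

# 100-micron particles almost in contact vs. well separated:
f_near = vdw_force(50e-6, 50e-6, 1e-9)
f_far = vdw_force(50e-6, 50e-6, 5e-7)
```

The strong growth of the force at small gaps is what drives the agglomeration behavior characteristic of Geldart Group A powders.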
A statistical study of gyro-averaging effects in a reduced model of drift-wave transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.
2016-08-25
Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K0, becomes K0 J0(p̂), where J0 is the zeroth-order Bessel function and p̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for p̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K0 J0(p̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), Pc, and the probability of trapping, Pt. It is shown that Pc provides an upper bound for the escape rate, and that Pt provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte Carlo simulations of particle transport.
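A minimal sketch of the GSM iteration (our reading of the abstract, not the authors' code), with J0 evaluated from its integral representation so that only the standard library is needed:

```python
# Sketch of the gyro-averaged standard map: the kick amplitude K0 of the
# standard map is replaced by K0 * J0(p_hat), where p_hat is the (fixed)
# Larmor radius of the particle being advanced.

import math

def j0(x, n=2000):
    """Zeroth-order Bessel function via J0(x) = (1/pi) ∫_0^pi cos(x sin t) dt."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

def gsm_step(theta, p, k0, p_hat):
    """One iteration of the gyro-averaged standard map (GSM)."""
    k_eff = k0 * j0(p_hat)          # FLR-reduced perturbation amplitude
    p_new = p + k_eff * math.sin(theta)
    theta_new = (theta + p_new) % (2 * math.pi)
    return theta_new, p_new

theta, p = 1.0, 0.5
for _ in range(100):
    theta, p = gsm_step(theta, p, k0=0.9, p_hat=2.0)
```

Because |J0| ≤ 1, gyro-averaging can only reduce the effective stochasticity parameter, which is why large Larmor radii suppress transport in this model.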
Discrete element modelling of bedload transport
NASA Astrophysics Data System (ADS)
Loyer, A.; Frey, P.
2011-12-01
Discrete element modelling (DEM) has been widely used in solid mechanics and in granular physics. In this type of modelling, each individual particle is taken into account and intergranular interactions are modelled with simple laws (e.g. Coulomb friction). Gravity and contact forces then determine the dynamical behaviour of the system. DEM makes it possible to model configurations and to access parameters not directly available in laboratory experimentation, hence the term "numerical experimentation" sometimes used to describe DEM. DEM was used to model bedload transport experiments performed at the particle scale with spherical glass beads in a steep and narrow flume. Bedload is the coarser material transported along the bed of stream channels; it has a great geomorphic impact. The physical processes ruling bedload transport, and more generally coarse-particle/fluid systems, are poorly known, arguably because granular interactions have been somewhat neglected. An existing DEM code (PFC3D) already computing granular interactions was used. We implemented basic hydrodynamic forces to model the fluid interactions (buoyancy, drag, lift). The idea was to use the minimum number of ingredients to match the experimental results. Experiments were performed with one-size and two-size mixtures of coarse spherical glass beads entrained by a shallow turbulent and supercritical water flow down a steep channel with a mobile bed. The particle diameters were 4 and 6 mm, the channel width 6.5 mm (about the same width as the coarser particles) and the channel inclination was typically 10%. The water flow rate and the particle rate were kept constant at the upstream entrance and adjusted to obtain bedload transport equilibrium. Flows were filmed from the side by a high-speed camera. Image processing algorithms made it possible to determine the position, velocity and trajectory of both smaller and coarser particles.
Modelled and experimental particle velocity and concentration depth profiles were compared in the case of the one-size mixture. The turbulent fluid velocity profile was prescribed and attached to the variable upper bedline. Provided the upper bedline was calculated with a refined space and time resolution, a fair agreement between DEM and experiments was reached. Experiments with two-size mixtures were designed to study vertical grain size sorting or segregation patterns. Sorting is arguably the reason why the predictive capacity of bedload formulations remains so poor. Modelling of the two-size mixture was also performed and gave promising qualitative results.
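The "minimum number of ingredients" forcing can be sketched for a single bead (illustrative parameters for water and glass; the Schiller-Naumann drag correction is our own assumption, not necessarily the study's choice):

```python
# Sketch (illustrative, not the study's calibration) of minimal hydrodynamic
# forcing on a DEM bead: submerged weight plus a Schiller-Naumann drag law.

import math

RHO_F, RHO_P = 1000.0, 2500.0     # water / glass densities (kg/m^3)
G, MU = 9.81, 1e-3                # gravity (m/s^2), dynamic viscosity (Pa s)

def forces_z(d, v_rel_z):
    """Vertical submerged weight and drag (N) on a sphere of diameter d (m)
    moving at vertical velocity v_rel_z (m/s) relative to the fluid."""
    vol = math.pi * d ** 3 / 6.0
    f_grav = -(RHO_P - RHO_F) * vol * G            # weight minus buoyancy
    re = RHO_F * abs(v_rel_z) * d / MU             # particle Reynolds number
    cd_corr = 1.0 + 0.15 * re ** 0.687 if re > 0 else 1.0
    f_drag = -3.0 * math.pi * MU * d * cd_corr * v_rel_z   # opposes motion
    return f_grav, f_drag

f_g, f_d = forces_z(4e-3, -0.1)   # 4 mm bead settling at 0.1 m/s
```

In the DEM runs such per-particle forces are simply added to the contact forces at every timestep, with the fluid velocity taken from the prescribed turbulent profile attached to the bedline.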
Schutyser, M A I; Briels, W J; Boom, R M; Rinzema, A
2004-05-20
The development of mathematical models facilitates industrial (large-scale) application of solid-state fermentation (SSF). In this study, a two-phase model of a drum fermentor is developed that consists of a discrete particle model (solid phase) and a continuum model (gas phase). The continuum model describes the distribution of air in the bed injected via an aeration pipe. The discrete particle model describes the solid phase. In previous work, mixing during SSF was predicted with the discrete particle model, although mixing simulations were not carried out in the current work. Heat and mass transfer between the two phases and biomass growth were implemented in the two-phase model. Validation experiments were conducted in a 28-dm³ drum fermentor. In this fermentor, sufficient aeration was provided to control the temperatures near the optimum value for growth during the first 45-50 hours. Several simulations were also conducted for different fermentor scales. Forced aeration via a single pipe in the drum fermentors did not provide homogeneous cooling in the substrate bed. Due to large temperature gradients, biomass yield decreased severely with increasing size of the fermentor. Improvement of air distribution would be required to avoid the need for frequent mixing events, during which growth is hampered. From these results, it was concluded that the two-phase model developed is a powerful tool to investigate design and scale-up of aerated (mixed) SSF fermentors. Copyright 2004 Wiley Periodicals, Inc.
Modeling of magnetic hystereses in soft MREs filled with NdFeB particles
NASA Astrophysics Data System (ADS)
Kalina, K. A.; Brummund, J.; Metsch, P.; Kästner, M.; Borin, D. Yu; Linke, J. M.; Odenbach, S.
2017-10-01
Herein, we investigate the structure-property relationships of soft magnetorheological elastomers (MREs) filled with remanently magnetizable particles. The study is motivated from experimental results which indicate a large difference between the magnetization loops of soft MREs filled with NdFeB particles and the loops of such particles embedded in a comparatively stiff matrix, e.g. an epoxy resin. We present a microscale model for MREs based on a general continuum formulation of the magnetomechanical boundary value problem which is valid for finite strains. In particular, we develop an energetically consistent constitutive model for the hysteretic magnetization behavior of the magnetically hard particles. The microstructure is discretized and the problem is solved numerically in terms of a coupled nonlinear finite element approach. Since the local magnetic and mechanical fields are resolved explicitly inside the heterogeneous microstructure of the MRE, our model also accounts for interactions of particles close to each other. In order to connect the microscopic fields to effective macroscopic quantities of the MRE, a suitable computational homogenization scheme is used. Based on this modeling approach, it is demonstrated that the observable macroscopic behavior of the considered MREs results from the rotation of the embedded particles. Furthermore, the performed numerical simulations indicate that the reversion of the sample’s magnetization occurs due to a combination of particle rotations and internal domain conversion processes. All of our simulation results obtained for such materials are in a good qualitative agreement with the experiments.
Airglow and star photographs in the daytime from a rocket.
Evans, D C; Dunkelman, L
1969-06-20
Photographs of the constellation Cygnus taken in the daytime from altitudes above 100 kilometers indicate that the day sky brightness in the wave-length region from 3600 to 7000 angstroms is only slightly brighter than the night sky viewed from the ground. No diffuse cloud of particles was apparent in the vicinity of the rocket payload, but discrete particles must be considered in the design of instruments for rockets and satellites. The resultant data and reports of star sightings from manned spacecraft indicate similar optical environments for both types of vehicles, that is, discrete particles and relatively low levels of background brightness, only slightly brighter than the night sky as an upper limit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Edward, E-mail: etjr@auburn.edu; Konopka, Uwe; Lynch, Brian
Dusty plasmas have been studied in argon, radio frequency (rf) glow discharge plasmas at magnetic fields up to 2.5 T where the electrons and ions are strongly magnetized. Plasmas are generated between two parallel plate electrodes where the lower, powered electrode is solid and the upper electrode supports a dual mesh consisting of #24 brass and #30 aluminum wire cloth. In this experiment, we study the formation of imposed ordered structures and particle dynamics as a function of magnetic field. Through observations of trapped particles and the quasi-discrete (i.e., “hopping”) motion of particles between the trapping locations, it is possible to make a preliminary estimate of the potential structure that confines the particles to a grid structure in the plasma. This information is used to gain insight into the formation of the imposed grid pattern of the dust particles in the plasma.
NASA Astrophysics Data System (ADS)
Dussi, Simone; Tasios, Nikos; Drwenski, Tara; van Roij, René; Dijkstra, Marjolein
2018-04-01
We use computer simulations to study the existence and stability of a biaxial nematic Nb phase in systems of hard polyhedral cuboids, triangular prisms, and rhombic platelets, characterized by a long (L), medium (M), and short (S) particle axis. For all three shape families, we find stable Nb states provided the shape is not only close to the so-called dual shape with M = √(LS) but also sufficiently anisotropic, with L/S > 9, 11, 14, 23 for rhombi, (two types of) triangular prisms, and cuboids, respectively, corresponding to anisotropies not considered before. Surprisingly, a direct isotropic-Nb transition does not occur in these systems due to a destabilization of Nb by a smectic (for cuboids and prisms) or a columnar (for platelets) phase at small L/S, or by an intervening uniaxial nematic phase at large L/S. Our results are confirmed by a density functional theory provided the third virial coefficient is included and a continuous rather than a discrete (Zwanzig) set of particle orientations is taken into account.
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
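The core of such change-point analysis can be sketched with a penalized maximum-likelihood split for a piecewise-constant mean (a BIC-like penalty stands in here for the frequentist information criterion; this is not the article's implementation):

```python
# Sketch (not the article's FIC code): penalized maximum-likelihood detection
# of a single change point in a piecewise-constant-mean signal with Gaussian
# noise. A split is accepted only if it lowers the residual cost by more than
# a complexity penalty.

import math

def sse(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_change_point(xs, penalty=None):
    """Return the split index minimizing SSE(left)+SSE(right), or None if no
    split beats the no-change model by more than the penalty."""
    n = len(xs)
    if penalty is None:
        penalty = 2.0 * math.log(n)   # placeholder complexity penalty
    full = sse(xs)
    best_k, best_cost = None, full
    for k in range(2, n - 1):
        cost = sse(xs[:k]) + sse(xs[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    if best_k is not None and full - best_cost > penalty:
        return best_k
    return None

# A mean step from ~0 to ~2 at index 5 should be detected:
signal = [0.0, 0.1, -0.1, 0.05, -0.05, 2.0, 2.1, 1.9, 2.05, 1.95]
k = best_change_point(signal)
```

Recursive application of this test to the resulting segments yields multiple change points, e.g. the discrete steps of a molecular motor or bleaching levels of a fluorophore.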
NASA Astrophysics Data System (ADS)
Chia, Nicholas; Bundschuh, Ralf
2005-11-01
In the universality class of the one-dimensional Kardar-Parisi-Zhang (KPZ) surface growth, Derrida and Lebowitz conjectured the universality of not only the scaling exponents, but of an entire scaling function. Since Derrida and Lebowitz's original publication [Phys. Rev. Lett. 80, 209 (1998)] this universality has been verified for a variety of continuous-time, periodic-boundary systems in the KPZ universality class. Here, we present a numerical method for directly examining the entire particle flux of the asymmetric exclusion process (ASEP), thus providing an alternative to more difficult cumulant-ratio studies. Using this method, we find that the Derrida-Lebowitz scaling function (DLSF) properly characterizes the large-system-size limit (N→∞) of a single-particle discrete-time system, even in the case of very small system sizes (N⩽22). This fact allows us not only to verify that the DLSF properly characterizes multiple-particle discrete-time asymmetric exclusion processes, but also provides a way to numerically solve for quantities of interest, such as the particle hopping flux. This method can thus serve to further increase the ease and accessibility of studies involving even more challenging dynamics, such as the open-boundary ASEP.
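A discrete-time ASEP with parallel update can be sketched as follows (illustrative, not the paper's method); the hop count per site per step gives the particle flux:

```python
# Sketch of a discrete-time totally asymmetric exclusion process (TASEP) on a
# ring with parallel update: every particle whose right neighbor is empty hops
# with probability p. Parameters are illustrative.

import random

def tasep_flux(n_sites=50, n_particles=25, p=0.5, steps=2000, seed=7):
    rng = random.Random(seed)
    occ = [1] * n_particles + [0] * (n_sites - n_particles)
    rng.shuffle(occ)
    hops = 0
    for _ in range(steps):
        movers = [i for i in range(n_sites)
                  if occ[i] == 1 and occ[(i + 1) % n_sites] == 0
                  and rng.random() < p]
        for i in movers:            # targets are distinct, so no conflicts
            occ[i] = 0
            occ[(i + 1) % n_sites] = 1
        hops += len(movers)
    return occ, hops / (steps * n_sites)   # flux per site per step

occ, flux = tasep_flux()
```

Studying the full distribution of such hop counts over many realizations, rather than just its mean, is what connects the flux to the Derrida-Lebowitz scaling function.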
Modeling of light scattering by icy bodies
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Mackowski, D.; Pitman, K.; Verbiscer, A.; Buratti, B.; Momary, T.
2014-07-01
As a result of ground-based, space-based, and in-situ spacecraft mission observations, a great amount of photometric, polarimetric, and spectroscopic data on icy bodies (satellites of giant planets, Kuiper Belt objects, comet nuclei, and icy particles in cometary comae and rings) has been accumulated. These data have revealed fascinating light-scattering phenomena, such as the opposition surge resulting from coherent backscattering and shadow hiding, and the negative polarization associated with them. Near-infrared (NIR) spectra of these bodies are especially informative as the depth, width, and shape of the absorption bands of ice are sensitive not only to the ice abundance but also to the size of icy grains. Numerous NIR spectra obtained by Cassini's Visual and Infrared Mapping Spectrometer (VIMS) have been used to map the microcharacteristics of the icy satellites [1] and rings of Saturn [2]. VIMS data have also permitted a study of the opposition surge for icy satellites of Saturn [3], showing that coherent backscattering affects not only brightness and polarization of icy bodies but also their spectra [4]. Studying all of the light-scattering phenomena that affect the photopolarimetric and spectroscopic characteristics of icy bodies, including coherent backscattering, requires computer modeling that rigorously considers light scattering by a large number of densely packed small particles that form either layers (in the case of regolith) or large clusters (ring and comet particles). Such an opportunity has appeared recently with the development of a new version, MSTM4, of the Multi-Sphere T-Matrix code [5]. Simulations of reflectance and absorbance spectra of a ''target'' (particle layer or cluster) require that the dimensions of the target be significantly larger than the wavelength, sphere radius, and layer thickness.
For wavelength-sized spheres and packing fractions typical of regolith, targets can contain tens of thousands of spheres that, with the original MSTM code, would require enormous computer RAM and CPU time. MSTM4 adopts a discrete Fourier convolution (DFC), implemented using a fast Fourier transform (FFT), for the evaluation of the exciting field. This approach is very similar to that used in discrete-dipole approximation (DDA) codes, with the differences that it considers the multipole nature of the translation operators and does not require that the sphere origins be located on a regular lattice. The MSTM4 code not only allows us to consider a larger number of constituent particles but also is about 100 times faster in wall-clock time than the original version of the MSTM code. An example of MSTM4 modeling is shown in the Figure.
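The speedup of the DFC approach rests on replacing a direct O(N²) interaction sum with an FFT-based convolution; a heavily simplified sketch of that idea (not the MSTM4 code):

```python
# Sketch of the discrete-Fourier-convolution idea (vastly simplified): a
# circular convolution computed via FFT in O(N log N) matches the direct
# O(N^2) sum, here for scalar 1-D data standing in for the exciting field.

import numpy as np

def circ_conv_direct(a, b):
    """Direct O(N^2) circular convolution."""
    n = len(a)
    return np.array([sum(a[m] * b[(k - m) % n] for m in range(n))
                     for k in range(n)])

def circ_conv_fft(a, b):
    """Same convolution via the convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(3)
a, b = rng.standard_normal(32), rng.standard_normal(32)
```

MSTM4 applies this machinery to multipole translation operators and, unlike lattice-based DDA, does not require the sphere origins to sit on a regular grid.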
Afshar, Yaser; Sbalzarini, Ivo F.
2016-01-01
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
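The decomposition step can be sketched along one axis (our own illustration, not the authors' implementation): each sub-image is padded with a halo of neighboring pixels, and the interiors tile the original image exactly:

```python
# Sketch (not the authors' code) of domain decomposition with halo (ghost)
# regions: split one image axis into tiles, each padded by `halo` pixels of
# neighboring data, so each machine can process its tile locally.

import numpy as np

def split_with_halo(img, tiles, halo):
    """Return (lo, core_start, core_end, sub_image) for each tile; lo is the
    global index where the padded sub-image begins."""
    n = img.shape[0]
    step = n // tiles
    subs = []
    for t in range(tiles):
        lo = max(0, t * step - halo)
        hi = min(n, (t + 1) * step + halo)
        subs.append((lo, t * step, min(n, (t + 1) * step), img[lo:hi]))
    return subs

img = np.arange(100.0)
subs = split_with_halo(img, tiles=4, halo=3)

# Reassembling only the interior (non-halo) parts recovers the image:
recon = np.concatenate([s[(c0 - lo):(c0 - lo) + (c1 - c0)]
                        for lo, c0, c1, s in subs])
```

In the distributed algorithm the halos are what get exchanged over the network whenever a region competition step touches a tile boundary.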
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas
The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not possible before due to a more than tenfold shortfall in time-to-solution for completing one physics case within 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for non-linear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.
Soft-sphere simulations of a planar shock interaction with a granular bed
NASA Astrophysics Data System (ADS)
Stewart, Cameron; Balachandar, S.; McGrath, Thomas P.
2018-03-01
Here we consider the problem of shock propagation through a layer of spherical particles. A point-particle force model is used to capture the shock-induced aerodynamic force acting upon the particles. The discrete element method (DEM) code LIGGGHTS is used to implement the shock-induced force as well as to capture the collisional forces within the system. A volume-fraction-dependent drag correction is applied using Voronoi tessellation to calculate the volume of fluid around each individual particle. A statistically stationary frame is chosen so that spatial and temporal averaging can be performed to calculate ensemble-averaged macroscopic quantities, such as the granular temperature. A parametric study is carried out by varying the coefficient of restitution for three sets of multiphase shock conditions. A self-similar profile is obtained for the granular temperature that is dependent on the coefficient of restitution. A traveling wave structure is observed in the particle concentration downstream of the shock; this instability arises from the volume-fraction-dependent drag force. The intensity of the traveling wave increases significantly as inelastic collisions are introduced. Downstream of the shock, the variance in Voronoi volume fraction is shown to have a strong dependence upon the coefficient of restitution, indicating clustering of particles induced by collisional dissipation. Statistics of the Voronoi volume are computed upstream and downstream of the shock and compared to theoretical results for randomly distributed hard spheres.
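The Voronoi-based volume fraction can be illustrated in one dimension, where each particle's cell is bounded by the midpoints to its neighbors (a stand-in for the study's 3-D tessellation, under our own simplifications):

```python
# Sketch (1-D stand-in, not the study's 3-D Voronoi tessellation) of a
# Voronoi-based local volume fraction: each particle's cell runs from midpoint
# to midpoint, and  phi_i = (particle volume) / (cell volume).

def voronoi_fractions(xs, vp, domain):
    """1-D Voronoi cells on [0, domain] for positions xs; returns the
    per-particle volume fractions vp / cell_length and the cell lengths."""
    xs = sorted(xs)
    bounds = [0.0] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [domain]
    cells = [hi - lo for lo, hi in zip(bounds, bounds[1:])]
    return [vp / c for c in cells], cells

# Two clustered particles and one isolated one, each of "volume" 0.01:
phi, cells = voronoi_fractions([0.1, 0.2, 0.7], vp=0.01, domain=1.0)
```

Clustered particles get small cells and hence large local volume fractions, which is exactly what feeds the volume-fraction-dependent drag correction described in the abstract.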
De Wilde, Juray; Richards, George; Benyahia, Sofiane
2016-05-13
Coupled discrete particle method-computational fluid dynamics simulations are carried out to demonstrate the potential of combined high-G-intensified gas-solids contact, gas-solids separation and segregation in a rotating fluidized bed in a static vortex chamber. A case study with two distinct types of particles is the focus. When feeding solids using a standard solids inlet design, a dense and uniform rotating fluidized bed is formed, guaranteeing intense gas-solids contact. The presence of both types of particles near the chimney region, however, reduces the strength of the central vortex and is detrimental for separation and segregation. Optimization of the solids inlet design is required, as illustrated by stopping the solids feeding: high-G separation and segregation of the batch of particles is then demonstrated, as the strength of the central vortex is restored. The flexibility with respect to the gas flow rate of the bed density and uniformity and of the gas-solids separation and segregation is demonstrated, a unique feature of vortex-chamber-generated rotating fluidized beds. With the particles considered in this case study, turbulent dispersion by large eddies in the gas phase is shown to have only a minor impact on the height of the inner bed of small/light particles.
Linking snowflake microstructure to multi-frequency radar observations
NASA Astrophysics Data System (ADS)
Leinonen, J.; Moisseev, D.; Nousiainen, T.
2013-04-01
Spherical or spheroidal particle shape models are commonly used to calculate numerically the radar backscattering properties of aggregate snowflakes. A more complicated and computationally intensive approach is to use detailed models of snowflake structure together with numerical scattering models that can operate on arbitrary particle shapes. Recent studies have shown that there can be significant differences between the results of these approaches. In this paper, an analytical model, based on the Rayleigh-Gans scattering theory, is formulated to explain this discrepancy in terms of the effect of discrete ice crystals that constitute the snowflake. The ice crystals cause small-scale inhomogeneities whose effects can be understood through the density autocorrelation function of the particle mass, which the Rayleigh-Gans theory connects to the function that gives the radar reflectivity as a function of frequency. The derived model is a weighted sum of two Gaussian functions. A term that corresponds to the average shape of the particle, similar to that given by the spheroidal shape model, dominates at low frequencies. At high frequencies, that term vanishes and is gradually replaced by the effect of the ice crystal monomers. The autocorrelation-based description of snowflake microstructure appears to be sufficient for multi-frequency radar studies. The link between multi-frequency radar observations and the particle microstructure can thus be used to infer particle properties from the observations.
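The "weighted sum of two Gaussian functions" described above, with a broad-scale term for the average particle shape and a monomer-scale term that takes over at high frequency, can be sketched as a simple model function (all parameter names and values here are hypothetical, not fitted to the paper's data):

```python
import numpy as np

def backscatter_model(k, w_shape, s_shape, w_mono, s_mono):
    """Two-Gaussian model of reflectivity versus wavenumber k: a term for
    the average particle shape (dominant at low k, decaying quickly) plus
    a term for the ice-crystal monomers (dominant at high k)."""
    shape_term = w_shape * np.exp(-(k * s_shape) ** 2)
    monomer_term = w_mono * np.exp(-(k * s_mono) ** 2)
    return shape_term + monomer_term

k = np.linspace(0.0, 10.0, 101)
sigma = backscatter_model(k, w_shape=1.0, s_shape=2.0, w_mono=0.05, s_mono=0.1)
```

Because the shape term has the larger width parameter, it vanishes first as k grows, leaving the weaker but slower-decaying monomer term, mirroring the qualitative behavior the abstract describes.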
ERIC Educational Resources Information Center
Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah
2013-01-01
In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two-stage approach: solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations, then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
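The coarse-then-fine idea can be illustrated on a scalar initial value problem: solve with forward Euler on a coarse grid, re-solve on a fine grid, and use the difference as a computable error indicator. This is a toy stand-in for the adjoint-weighted residual estimate of the paper; the model problem and step counts are chosen only for illustration:

```python
import numpy as np

def euler(f, y0, t0, t1, n):
    """Forward Euler with n steps on [t0, t1]."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: -y                         # y' = -y, y(0) = 1, exact y(1) = e^-1
y_coarse = euler(f, 1.0, 0.0, 1.0, 10)      # stage 1: coarse discretization
y_fine = euler(f, 1.0, 0.0, 1.0, 1000)      # stage 2: fine discretization
estimate = abs(y_fine - y_coarse)           # computable error indicator
true_err = abs(np.exp(-1.0) - y_coarse)     # actual coarse-grid error
```

For this linear problem the indicator tracks the true coarse-grid error to within about one percent, because the fine solve is far more accurate than the coarse one.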
A discrete mesoscopic particle model of the mechanics of a multi-constituent arterial wall.
Witthoft, Alexandra; Yazdani, Alireza; Peng, Zhangli; Bellini, Chiara; Humphrey, Jay D; Karniadakis, George Em
2016-01-01
Blood vessels have unique properties that allow them to function together within a complex, self-regulating network. The contractile capacity of the wall combined with complex mechanical properties of the extracellular matrix enables vessels to adapt to changes in haemodynamic loading. Homogenized phenomenological and multi-constituent, structurally motivated continuum models have successfully captured these mechanical properties, but truly describing intricate microstructural details of the arterial wall may require a discrete framework. Such an approach would facilitate modelling interactions between or the separation of layers of the wall and would offer the advantage of seamless integration with discrete models of complex blood flow. We present a discrete particle model of a multi-constituent, nonlinearly elastic, anisotropic arterial wall, which we develop using the dissipative particle dynamics method. Mimicking basic features of the microstructure of the arterial wall, the model comprises an elastin matrix having isotropic nonlinear elastic properties plus anisotropic fibre reinforcement that represents the stiffer collagen fibres of the wall. These collagen fibres are distributed evenly and are oriented in four directions, symmetric to the vessel axis. Experimental results from biaxial mechanical tests of an artery are used for model validation, and a delamination test is simulated to demonstrate the new capabilities of the model. © 2016 The Author(s).
Comparison of particle tracking algorithms in commercial CFD packages: sedimentation and diffusion.
Robinson, Risa J; Snyder, Pam; Oldham, Michael J
2007-05-01
Computational fluid dynamic modeling software has enabled microdosimetry patterns of inhaled toxins and toxicants to be predicted and visualized, and is being used in inhalation toxicology and risk assessment. These predicted microdosimetry patterns in airway structures are derived from predicted airflow patterns within these airways and particle tracking algorithms used in computational fluid dynamics (CFD) software packages. Although these commercial CFD codes have been tested for accuracy under various conditions, they have not been well tested for respiratory flows in general. Nor has their particle tracking algorithm accuracy been well studied. In this study, three software packages, Fluent Discrete Phase Model (DPM), Fluent Fine Particle Model (FPM), and ANSYS CFX, were evaluated. Sedimentation and diffusion were each isolated in a straight tube geometry and tested for accuracy. A range of flow rates corresponding to adult low activity (minute ventilation = 10 L/min) and to heavy exertion (minute ventilation = 60 L/min) were tested by varying the range of dimensionless diffusion and sedimentation parameters found using the Weibel symmetric 23 generation lung morphology. Numerical results for fully developed parabolic and uniform (slip) profiles were compared respectively, to Pich (1972) and Yu (1977) analytical sedimentation solutions. Schum and Yeh (1980) equations for sedimentation were also compared. Numerical results for diffusional deposition were compared to analytical solutions of Ingham (1975) for parabolic and uniform profiles. Significant differences were found among the various CFD software packages and between numerical and analytical solutions. Therefore, it is prudent to validate CFD predictions against analytical solutions in idealized geometry before tackling the complex geometries of the respiratory tract.
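Sedimentation-driven deposition of the kind benchmarked above can be sketched with a highly simplified particle tracker: a 2-D channel with a parabolic velocity profile in which a particle deposits if gravity settles it onto the floor before it exits. This is not the Weibel geometry, the cited analytical solutions, or any of the commercial codes' algorithms; every parameter is hypothetical:

```python
import numpy as np

def deposition_fraction(v_settle, u_max=1.0, height=1.0, length=10.0,
                        n=500, dt=0.02):
    """Track n particles released across the inlet of a 2-D channel with
    parabolic flow u(y) = 4*u_max*(y/h)*(1 - y/h); each falls at the
    constant settling speed v_settle.  Returns the deposited fraction."""
    y = np.linspace(1e-3, height - 1e-3, n)     # inlet positions
    x = np.zeros(n)
    active = np.ones(n, dtype=bool)             # still inside the channel
    deposited = np.zeros(n, dtype=bool)
    while active.any():
        u = 4.0 * u_max * (y / height) * (1.0 - y / height)
        x[active] += u[active] * dt             # advect with local flow
        y[active] -= v_settle * dt              # settle under gravity
        deposited |= active & (y <= 0.0)        # reached the floor
        active &= (y > 0.0) & (x < length)      # drop deposited/exited
    return deposited.mean()

f_slow = deposition_fraction(0.01)
f_fast = deposition_fraction(0.05)
```

As expected physically, the deposited fraction grows with the dimensionless sedimentation parameter (here, the settling speed at fixed geometry and flow).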
Brehm, Laurel; Goldrick, Matthew
2017-10-01
The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
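Piecewise (segmented) regression of the kind applied above can be sketched as a breakpoint grid search with numpy: fit a hinge model at each candidate breakpoint and keep the best. This is a generic sketch on synthetic data, not the authors' preregistered analysis:

```python
import numpy as np

def fit_segmented(x, y, breakpoints):
    """Fit y = a + b*x + c*max(x - bp, 0) for each candidate breakpoint
    bp and return the fit with the lowest residual sum of squares."""
    best = None
    for bp in breakpoints:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, bp, coef)
    return best  # (rss, breakpoint, coefficients)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 6.0, 0) + rng.normal(0, 0.1, x.size)
rss, bp, coef = fit_segmented(x, y, np.linspace(1, 9, 81))
```

A near-zero hinge coefficient c would indicate a single cline; a large c with a well-localized breakpoint indicates discretely separated regimes.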
Determining Trajectory of Triboelectrically Charged Particles, Using Discrete Element Modeling
NASA Technical Reports Server (NTRS)
2008-01-01
The Kennedy Space Center (KSC) Electrostatics and Surface Physics Laboratory is participating in an Innovative Partnership Program (IPP) project with an industry partner to modify a commercial off-the-shelf simulation software product to treat the electrodynamics of particulate systems. Discrete element modeling (DEM) is a numerical technique that can track the dynamics of particle systems. This technique, which was introduced in 1979 for analysis of rock mechanics, was recently refined to include the contact force interaction of particles with arbitrary surfaces and moving machinery. In our work, we endeavor to incorporate electrostatic forces into the DEM calculations to enhance the fidelity of the software and its applicability to (1) particle processes, such as electrophotography, that are greatly affected by electrostatic forces, (2) grain and dust transport, and (3) the study of lunar and Martian regoliths.
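Incorporating electrostatic forces into a DEM force sum, as described above, can be sketched for a pair of charged spheres: a linear-spring contact force active only on overlap, plus a Coulomb term at any separation. The spring stiffness and charges are hypothetical, and this is not the partner's commercial software:

```python
import numpy as np

K_E = 8.9875517923e9   # Coulomb constant, N m^2 / C^2

def pair_force(x1, x2, r1, r2, q1, q2, k_contact=1.0e4):
    """Force on particle 1 from particle 2: linear spring on overlap
    (a simple stand-in for a contact law) plus Coulomb electrostatics."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    n = d / dist                                  # unit normal, 2 -> 1
    overlap = (r1 + r2) - dist
    f_contact = k_contact * overlap * n if overlap > 0 else np.zeros(3)
    f_coulomb = K_E * q1 * q2 / dist**2 * n       # repulsive for like charges
    return f_contact + f_coulomb

x1, x2 = np.array([0.0, 0.0, 0.0]), np.array([0.05, 0.0, 0.0])
f1 = pair_force(x1, x2, 0.01, 0.01, 1e-9, 1e-9)   # force on particle 1
f2 = pair_force(x2, x1, 0.01, 0.01, 1e-9, 1e-9)   # force on particle 2
```

The pair force obeys Newton's third law, so summing it over all pairs inside a DEM time step conserves momentum.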
Discrete element modeling of shock-induced particle jetting
NASA Astrophysics Data System (ADS)
Xue, Kun; Cui, Haoran
2018-05-01
The dispersal of a particle shell or ring by divergent impulsive loads takes the form of coherent particle jets with dimensions several orders of magnitude larger than that of the constituent grains. Particle-scale simulations based on the discrete element method have been carried out to reveal the evolution of jets in semi-two-dimensional rings before they burst out of the external surface. We identify two key events which substantially change the resulting jetting pattern, specifically the annihilation of incipient jets and the tip-slipping of jets, which become active in different phases of jet evolution. Parametric investigations have been carried out to assess the correlations between the jetting pattern and a variety of structural parameters. Overpressure, the internal and outer diameters of the ring, and the packing density are found to affect the jet evolution with different relative importance.
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
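The key idea above — build polynomials orthonormal with respect to the discrete inner product over the uniformly spaced abscissas, so least-squares coefficients become simple inner products with no normal equations — can be sketched with Gram-Schmidt. (The classical three-term recurrence is what makes the original programs simple and fast; Gram-Schmidt is used here only for clarity.)

```python
import numpy as np

def discrete_orthonormal_polys(x, degree):
    """Gram-Schmidt orthonormalization of 1, x, x^2, ... under the
    discrete inner product <f, g> = sum_i f(x_i) g(x_i)."""
    basis = []
    for d in range(degree + 1):
        p = x.astype(float) ** d
        for q in basis:                   # remove components along earlier polys
            p = p - (p @ q) * q
        basis.append(p / np.linalg.norm(p))
    return np.array(basis)                # shape (degree+1, len(x))

def ls_fit(x, y, degree):
    """Least-squares polynomial fit: each coefficient is an inner product
    with an orthonormal polynomial, so no linear system is solved."""
    P = discrete_orthonormal_polys(x, degree)
    coeffs = P @ y
    return coeffs @ P                     # fitted values at the x_i

x = np.arange(21)                         # uniformly spaced data
y = 3.0 - 2.0 * x + 0.5 * x**2            # exact quadratic
yhat = ls_fit(x, y, degree=2)
```

Because the fit is an orthogonal projection, data that already lie on a degree-2 polynomial are reproduced exactly, and raising the degree never requires refactoring a normal-equations matrix.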
Combinatorial Reliability and Repair
1992-07-01
Press, Oxford, 1987. [2] G. Gordon and L. Traldi, Generalized activities and the Tutte polynomial, Discrete Math. 85 (1990), 167-176. [3] A. B. Huseby, A... Chromatic polynomials and network reliability, Discrete Math. 67 (1987), 57-79. [7] A. Satayanarayana and R. K. Wood, A linear-time algorithm for computing... K-terminal reliability in series-parallel networks, SIAM J. Comput. 14 (1985), 818-832. [8] L. Traldi, Generalized activities and K-terminal reliability, Discrete Math. 96 (1991), 131-149.
2016-01-05
discretizations. We maintain that what is clear at the mathematical level should be equally clear in computation. In this small STIR project, we separate the... concerns of describing and discretizing such models by defining an input language representing PDE, including steady-state and transient, linear and... solvers, such as [8, 9], focused on the solvers themselves and particular families of discretizations (e.g. finite elements), and now it is natural to
Meso-scale framework for modeling granular material using computed tomography
Turner, Anne K.; Kim, Felix H.; Penumadu, Dayakar; ...
2016-03-17
Numerical modeling of unconsolidated granular materials involves multiple nonlinear phenomena. Accurately capturing these phenomena, including grain deformation and intergranular forces, depends on resolving contact regions several orders of magnitude smaller than the grain size. Here, we investigate a method for capturing the morphology of the individual particles using computed X-ray and neutron tomography, which allows for accurate characterization of the interaction between grains. The ability of these numerical approaches to determine stress concentrations at grain contacts is important in order to capture catastrophic splitting of individual grains, which has been shown to play a key role in the plastic behavior of the granular material on the continuum level. Discretization approaches, including mesh refinement and finite element type selection, are presented to capture high stress concentrations at contact points between grains. The effect of a grain's coordination number on the stress concentrations is also investigated.
Discrete-Roughness-Element-Enhanced Swept-Wing Natural Laminar Flow at High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Malik, Mujeeb; Liao, Wei; Li, Fei; Choudhari, Meelan
2015-01-01
Nonlinear parabolized stability equations and secondary-instability analyses are used to provide a computational assessment of the potential use of the discrete-roughness-element technology for extending swept-wing natural laminar flow at chord Reynolds numbers relevant to transport aircraft. Computations performed for the boundary layer on a natural-laminar-flow airfoil with a leading-edge sweep angle of 34.6 deg, freestream Mach number of 0.75, and chord Reynolds numbers of 17 × 10^6, 24 × 10^6, and 30 × 10^6 suggest that discrete roughness elements could delay laminar-turbulent transition by about 20% when transition is caused by stationary crossflow disturbances. Computations show that the introduction of small-wavelength stationary crossflow disturbances (i.e., discrete roughness element) also suppresses the growth of most amplified traveling crossflow disturbances.
Nosik, Melissa R; Williams, W Larry; Garrido, Natalia; Lee, Sarah
2013-01-01
In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following training, participants were evaluated in terms of their accuracy on completing critical skills for running a discrete trial program. Six participants completed training; three received behavior skills training and three received the computer based training. Participants in the BST group performed better overall after training and during six week probes than those in the computer based training group. There were differences across both groups between research assistant and natural environment competency levels. Copyright © 2012 Elsevier Ltd. All rights reserved.
Alidousti, Hamidreza; Taylor, Mark; Bressloff, Neil W
2014-04-01
In total hip replacement (THR), wear particles play a significant role in osteolysis and have been observed in locations as remote as the tip of femoral stem. However, there is no clear understanding of the factors and mechanisms causing, or contributing to particle migration to the periprosthetic tissue. Interfacial gaps provide a route for particle laden joint fluid to transport wear particles to the periprosthetic tissue and cause osteolysis. It is likely that capsular pressure, gap dimensions and micromotion of the gap during cyclic loading of an implant, play defining roles to facilitate particle migration. In order to obtain a better understanding of the above mechanisms and factors, transient two-dimensional computational fluid dynamic simulations have been performed for the flow in the lateral side of a cementless stem-femur system including the joint capsule, a gap in communication with the capsule and the surrounding bone. A discrete phase model to describe particle motion has been employed. Key findings from these simulations include: (1) Particles were shown to enter the periprosthetic tissue along the entire length of the gap but with higher concentrations at both proximal and distal ends of the gap and a maximum rate of particle accumulation in the distal regions. (2) High capsular pressure, rather than gap micromotion, has been shown to be the main driving force for particle migration to periprosthetic tissue. (3) Implant micromotion was shown to pump out rather than draw in particles to the interfacial gaps. (4) Particle concentrations are consistent with known distributions of (i) focal osteolysis at the distal end of the gap and (ii) linear osteolysis along the entire gap length. Copyright © 2014 Elsevier Ltd. All rights reserved.
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, methods are provided for generating a rate-distortion-optimal quantization table, for discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
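The gathering and computing steps above can be sketched: collect DCT coefficients from non-overlapping 8×8 blocks, then tabulate the distortion a candidate quantization value would introduce. This is a toy per-coefficient sweep using scipy's DCT on random data, not the patented preprocessor or its dynamic program:

```python
import numpy as np
from scipy.fft import dctn

def block_dct_stats(image, block=8):
    """Gather DCT coefficients from non-overlapping block x block tiles."""
    h, w = (s - s % block for s in image.shape)
    coeffs = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs.append(dctn(image[i:i+block, j:j+block], norm='ortho'))
    return np.array(coeffs)               # shape (n_blocks, block, block)

def distortion_for_q(coeffs, q):
    """Mean squared error introduced by quantizing every coefficient
    with step q (uniform quantizer, round to nearest)."""
    quantized = np.round(coeffs / q) * q
    return np.mean((coeffs - quantized) ** 2)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
C = block_dct_stats(img)
d_small, d_large = distortion_for_q(C, 2), distortion_for_q(C, 16)
```

Tabulating such distortions (and the corresponding coded rates) for every candidate quantization value is exactly the input the rate-distortion optimization needs.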
NASA Astrophysics Data System (ADS)
Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys
2018-07-01
We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious features and again allows the computational cost to be reduced substantially in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.
Discrete-time model reduction in limited frequency ranges
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A mathematical formulation for model reduction of discrete-time systems such that the reduced order model represents the system in a particular frequency range is discussed. The algorithm transforms the full order system into balanced coordinates using frequency weighted discrete controllability and observability grammians. In this form a criterion is derived to guide truncation of states based on their contribution to the frequency range of interest. Minimization of the criterion is accomplished without need for numerical optimization. Balancing requires the computation of discrete frequency weighted grammians. Closed-form solutions for the computation of frequency weighted grammians are developed. Numerical examples are discussed to demonstrate the algorithm.
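The unweighted version of this procedure — balance a discrete-time system with its controllability and observability grammians and truncate the low-contribution states — can be sketched with scipy's discrete Lyapunov solver. The frequency weighting of the paper is omitted for brevity, and the test system is hypothetical:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable discrete-time system
    to order r, using the grammians P = A P A' + B B', Q = A' Q A + C' C."""
    P = solve_discrete_lyapunov(A, B @ B.T)        # controllability grammian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)      # observability grammian
    R = cholesky(P, lower=True)
    L = cholesky(Q, lower=True)
    U, s, Vt = svd(L.T @ R)                        # s = Hankel singular values
    T = R @ Vt.T[:, :r] / np.sqrt(s[:r])           # balancing transformation
    Ti = (U[:, :r] / np.sqrt(s[:r])).T @ L.T       # its left inverse
    return Ti @ A @ T, Ti @ B, C @ T, s

A = np.diag([0.9, 0.5, 0.1])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
g_full = (C @ np.linalg.solve(np.eye(3) - A, B))[0, 0]    # DC gain, full
g_red = (Cr @ np.linalg.solve(np.eye(2) - Ar, Br))[0, 0]  # DC gain, reduced
```

The classical error bound guarantees the reduced model's frequency response, including its DC gain, differs from the full one by at most twice the sum of the truncated Hankel singular values.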
Mathematics and Computer Science: Exploring a Symbiotic Relationship
ERIC Educational Resources Information Center
Bravaco, Ralph; Simonson, Shai
2004-01-01
This paper describes a "learning community" designed for sophomore computer science majors who are simultaneously studying discrete mathematics. The learning community consists of three courses: Discrete Mathematics, Data Structures and an Integrative Seminar/Lab. The seminar functions as a link that integrates the two disciplines. Participation…
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
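What "identifying attractors" means for a small synchronous Boolean network can be sketched by exhaustive state-space traversal. ADAM itself avoids this brute force by working algebraically with polynomial dynamical systems; the network rules here are hypothetical:

```python
from itertools import product

def find_attractors(update, n):
    """Enumerate all attractors (limit cycles, including fixed points) of
    a synchronous Boolean network with n nodes by following every state
    until its trajectory revisits a state."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)
        start = seen[state]                        # where the cycle begins
        cycle = tuple(sorted(s for s, i in seen.items() if i >= start))
        attractors.add(cycle)
    return attractors

# Toy 2-node network: x1' = x2, x2' = x1 (hypothetical rules)
update = lambda s: (s[1], s[0])
atts = find_attractors(update, 2)
```

For this toy network the attractors are the two fixed points (0,0) and (1,1) plus the 2-cycle {(0,1), (1,0)}; the exponential cost of this enumeration in n is precisely why algebraic methods like ADAM's matter for large models.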
NASA Astrophysics Data System (ADS)
Zhong, Rong-Xuan; Huang, Nan; Li, Huang-Wu; He, He-Xiang; Lü, Jian-Tao; Huang, Chun-Qing; Chen, Zhao-Pin
2018-04-01
We numerically and analytically investigate the formation and features of two-dimensional discrete Bose-Einstein condensate solitons, which are formed by particles with quadrupole-quadrupole interactions trapped in tunable anisotropic discrete optical lattices. The square optical lattices in the model can be formed by two pairs of interfering plane waves with different intensities. The two hopping rates of the particles in the orthogonal directions are different, which gives rise to a linearly anisotropic system. We find that if all of the pairs of dipole and anti-dipole are perpendicular to the lattice panel and the line connecting the dipole and anti-dipole which compose the quadrupole is parallel to the horizontal direction, both the linear anisotropy and the nonlocal nonlinear one can strongly influence the formation of the solitons. There exist three patterns of stable solitons, namely horizontally elongated quasi-one-dimensional discrete solitons, disk-shaped isotropic pattern solitons, and vertically elongated quasi-continuous solitons. We systematically demonstrate the relationships of the chemical potential, size, and shape of the soliton with its total norm and vertical hopping rate, and analytically reveal the linear dispersion relation for quasi-one-dimensional discrete solitons.
Monte-Carlo Simulations of Drug Delivery on Biofilms
NASA Astrophysics Data System (ADS)
Buldum, Alper; Simpson, Andrew
2013-03-01
The focus of this work is on biofilms that grow in the lungs of cystic fibrosis (CF) patients. A discrete model which describes the nutrient and biomass as discrete particles is created. Diffusion of the nutrient, consumption of the nutrient by microbial particles, and growth and decay of microbial particles are simulated using stochastic processes. Our model extends the complexity of the biofilm system by including the conversion and reversion of living bacteria into a hibernated state, known as persister bacteria. Another new contribution is the inclusion of antimicrobial in two forms: an aqueous solution and encapsulated in biodegradable nanoparticles. The bacteria population growth and spatial variation of drugs and their effectiveness are investigated in this work. Supported by NIH
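The living/persister conversion described above can be sketched as a stochastic two-compartment simulation: antimicrobial kills only living cells, while dormant persisters are protected and occasionally revert. All rates below are hypothetical, not the model's fitted values:

```python
import numpy as np

def simulate(steps, p_grow, p_kill, p_to_persister, p_revert, seed=0):
    """Per-step stochastic update of living and persister bacteria counts."""
    rng = np.random.default_rng(seed)
    living, persister = 100, 0
    history = []
    for _ in range(steps):
        births = rng.binomial(living, p_grow)          # growth of living cells
        deaths = rng.binomial(living, p_kill)          # antimicrobial kill
        to_p = rng.binomial(living, p_to_persister)    # hibernation
        back = rng.binomial(persister, p_revert)       # reversion to living
        living = max(living + births - deaths - to_p + back, 0)
        persister = max(persister + to_p - back, 0)
        history.append((living, persister))
    return history

hist = simulate(200, p_grow=0.05, p_kill=0.10, p_to_persister=0.02,
                p_revert=0.005)
```

Such a run shows the characteristic biphasic kill curve: the living population collapses quickly under treatment while the persister reservoir decays much more slowly.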
Numerical studies of a model fermion-boson system
NASA Astrophysics Data System (ADS)
Cheng, T.; Gospodarczyk, E. R.; Su, Q.; Grobe, R.
2010-02-01
We study the spectral and dynamical properties of a simplified model system of interacting fermions and bosons. The spatial discretization and an effective truncation of the Hilbert space permit us to compute the distribution of the bare fermions and bosons in the energy eigenstates of the coupled system. These states represent the physical particles and are used to examine the validity of the analytical predictions by perturbation theory and by the Greenberg-Schweber approximation that assumes all fermions are at rest. As an example of our numerical framework, we examine how a bare electron can trigger the creation of a cloud of virtual bosons around it. We relate this cloud to the properties of the associated energy eigenstates.
Modeling of brittle-viscous flow using discrete particles
NASA Astrophysics Data System (ADS)
Thordén Haug, Øystein; Barabasch, Jessica; Virgo, Simon; Souche, Alban; Galland, Olivier; Mair, Karen; Abe, Steffen; Urai, Janos L.
2017-04-01
Many geological processes involve both viscous flow and brittle fractures, e.g. boudinage, folding and magmatic intrusions. Numerical modeling of such viscous-brittle materials poses challenges: one has to account for the discrete fracturing, the continuous viscous flow, the coupling between them, and potential pressure dependence of the flow. The Discrete Element Method (DEM) is a numerical technique, widely used for studying fracture of geomaterials. However, the implementation of viscous fluid flow in discrete element models is not trivial. In this study, we model quasi-viscous fluid flow behavior using Esys-Particle software (Abe et al., 2004). We build on the methodology of Abe and Urai (2012) where a combination of elastic repulsion and dashpot interactions between the discrete particles is implemented. Several benchmarks are presented to illustrate the material properties. Here, we present extensive, systematic material tests to characterize the rheology of quasi-viscous DEM particle packing. We present two tests: a simple shear test and a channel flow test, both in 2D and 3D. In the simple shear tests, simulations were performed in a box, where the upper wall is moved with a constant velocity in the x-direction, causing shear deformation of the particle assemblage. Here, the boundary conditions are periodic on the sides, with constant forces on the upper and lower walls. In the channel flow tests, a piston pushes a sample through a channel by Poiseuille flow. For both setups, we present the resulting stress-strain relationships over a range of material parameters, confining stress and strain rate. Results show power-law dependence between stress and strain rate, with a non-linear dependence on confining force. The material is strain softening under some conditions. Additionally, volumetric strain can be dilatant or compactant, depending on porosity, confining pressure and strain rate.
Constitutive relations are implemented in a way that limits the range of viscosities. For identical pressure and strain rate, an order of magnitude range in viscosity can be investigated. The extensive material testing indicates that DEM particles interacting by a combination of elastic repulsion and dashpots can be used to model viscous flows. This allows us to exploit the fracturing capabilities of the discrete element methods and study systems that involve both viscous flow and brittle fracturing. However, the small viscosity range achievable using this approach does constrain the applicability for systems where larger viscosity ranges are required, such as folding of viscous layers of contrasting viscosities. References: Abe, S., Place, D., & Mora, P. (2004). A parallel implementation of the lattice solid model for the simulation of rock mechanics and earthquake dynamics. PAGEOPH, 161(11-12), 2265-2277. http://doi.org/10.1007/s00024-004-2562-x Abe, S., and J. L. Urai (2012), Discrete element modeling of boudinage: Insights on rock rheology, matrix flow, and evolution of geometry, JGR., 117, B01407, doi:10.1029/2011JB00855
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
Discrete Mathematics Course Supported by CAS MATHEMATICA
ERIC Educational Resources Information Center
Ivanov, O. A.; Ivanova, V. V.; Saltan, A. A.
2017-01-01
In this paper, we discuss examples of assignments for a course in discrete mathematics for undergraduate students majoring in business informatics. We consider several problems with computer-based solutions and discuss general strategies for using computers in teaching mathematics and its applications. In order to evaluate the effectiveness of our…
Gauge properties of the guiding center variational symplectic integrator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Squire, J.; Tang, W. M.; Qin, H.
Variational symplectic algorithms have recently been developed for carrying out long-time simulation of charged particles in magnetic fields [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008); H. Qin, X. Guan, and W. Tang, Phys. Plasmas (2009); J. Li, H. Qin, Z. Pu, L. Xie, and S. Fu, Phys. Plasmas 18, 052902 (2011)]. As a direct consequence of their derivation from a discrete variational principle, these algorithms have very good long-time energy conservation, as well as exactly preserving discrete momenta. We present stability results for these algorithms, focusing on understanding how explicit variational integrators can be designed for this type of system. It is found that for explicit algorithms, an instability arises because the discrete symplectic structure does not become the continuous structure in the t → 0 limit. We examine how a generalized gauge transformation can be used to put the Lagrangian in the 'antisymmetric discretization gauge,' in which the discrete symplectic structure has the correct form, thus eliminating the numerical instability. Finally, it is noted that the variational guiding center algorithms are not electromagnetically gauge invariant. By designing a model discrete Lagrangian, we show that the algorithms are approximately gauge invariant as long as A and φ are relatively smooth. A gauge invariant discrete Lagrangian is very important in a variational particle-in-cell algorithm where it ensures current continuity and preservation of Gauss's law [J. Squire, H. Qin, and W. Tang (to be published)].
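The flavor of a variational integrator can be shown on a harmonic oscillator: discretize the Lagrangian with a midpoint rule and apply the discrete Euler-Lagrange equation, which yields an update whose energy error stays bounded over long times. This is a textbook example, not the guiding-center algorithm of the abstract:

```python
import numpy as np

def variational_step(q0, q1, h, omega=1.0):
    """One step of the integrator from the midpoint discrete Lagrangian
    L_d(q0, q1) = h * [((q1-q0)/h)^2/2 - omega^2*((q0+q1)/2)^2/2]
    for L = qdot^2/2 - omega^2 q^2/2, solving the discrete
    Euler-Lagrange equation D2 L_d(q0, q1) + D1 L_d(q1, q2) = 0 for q2."""
    a = (h * omega) ** 2 / 4.0
    return (2.0 * q1 - q0 - a * (q0 + 2.0 * q1)) / (1.0 + a)

h, omega, n = 0.1, 1.0, 10000
q = [1.0, np.cos(h)]                       # start on the exact cosine solution
for _ in range(n):
    q.append(variational_step(q[-2], q[-1], h, omega))
q = np.array(q)
v = (q[2:] - q[:-2]) / (2 * h)             # centered velocity estimate
energy = 0.5 * v**2 + 0.5 * omega**2 * q[1:-1]**2
drift = energy.max() - energy.min()        # bounded, no secular growth
```

Over ten thousand steps the energy merely oscillates in a narrow band of order h^2 around its initial value, the hallmark of symplectic integration that the abstract's algorithms inherit from their discrete variational derivation.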
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
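For BDPs with tractable rates, the costly alternative that the Laplace-convolution E-step replaces is direct stochastic simulation. A minimal Gillespie-style sketch of a linear BDP follows (an illustration of the model class, not the authors' code; function and parameter names are ours):

```python
import random

def simulate_linear_bdp(n0, lam, mu, t_max, rng):
    """Gillespie-style simulation of a linear birth-death process:
    with n particles present, the next event arrives after an
    Exp(n*(lam+mu)) waiting time and is a birth w.p. lam/(lam+mu)."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(n * (lam + mu))
        if t > t_max:
            break
        if rng.random() < lam / (lam + mu):
            n += 1
        else:
            n -= 1
    return n

rng = random.Random(1)
# For a linear BDP, E[N(t)] = n0 * exp((lam - mu) * t); with lam = mu it stays n0.
samples = [simulate_linear_bdp(20, 1.0, 1.0, 1.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
```

Averaging many such runs recovers conditional expectations only slowly, which is exactly the inefficiency the continued-fraction Laplace-transform approach is designed to avoid.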
Granular materials interacting with thin flexible rods
NASA Astrophysics Data System (ADS)
Neto, Alfredo Gay; Campello, Eduardo M. B.
2017-04-01
In this work, we develop a computational model for the simulation of problems wherein granular materials interact with thin flexible rods. We treat granular materials as a collection of spherical particles following a discrete element method (DEM) approach, while flexible rods are described by a large deformation finite element (FEM) rod formulation. Grain-to-grain, grain-to-rod, and rod-to-rod contacts are fully permitted and resolved. A simple and efficient strategy is proposed for coupling the motion of the two types (discrete and continuum) of materials within an iterative time-stepping solution scheme. Implementation details are shown and discussed. Validity and applicability of the model are assessed by means of a few numerical examples. We believe that robust, efficiently coupled DEM-FEM schemes can be a useful tool to the simulation of problems wherein granular materials interact with thin flexible rods, such as (but not limited to) bombardment of grains on beam structures, flow of granular materials over surfaces covered by threads of hair in many biological processes, flow of grains through filters and strainers in various industrial segregation processes, and many others.
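The DEM half of such a coupling rests on pairwise contact forces between spheres. Below is a minimal sketch of a linear spring-dashpot normal-contact law, one common DEM choice (illustrative only; not necessarily the specific contact model used in this work):

```python
import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n, eta_n):
    """Linear spring-dashpot normal force on sphere i due to sphere j.
    Returns the zero vector when the spheres do not overlap."""
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros_like(d)
    n = d / dist                    # unit normal pointing from j to i
    v_n = np.dot(v_i - v_j, n)      # normal component of relative velocity
    return (k_n * overlap - eta_n * v_n) * n

# Two slightly overlapping spheres at rest: purely elastic repulsion on i.
f = normal_contact_force(np.array([0.0, 0.0]), np.array([0.9, 0.0]),
                         np.zeros(2), np.zeros(2), 0.5, 0.5, 1000.0, 0.0)
```

Grain-to-rod contact in the coupled scheme would add tangential friction and the rod's FEM kinematics on top of this normal law.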
Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K
2013-04-22
A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.
Xu, Yupeng; Musser, Jordan; Li, Tingwen; ...
2017-07-22
It has been reported experimentally that granular particles can climb along a vertically vibrating tube partially inserted inside a granular silo. Here, we use the Discrete Element Method (DEM) available in the Multiphase Flow with Interphase eXchanges (MFIX) code to investigate this phenomenon. By tracking the movement of individual particles, the climbing mechanism was illustrated and analyzed. The numerical results show that a sufficiently high vibration strength is needed to form a low solids volume fraction region inside the lower end of the vibrating tube, a dense region in the middle of the tube, and to bring the particles outside from the top layers down to fill in the void. The results also show that particle compaction in the middle section of the tube is the main cause of the climbing. Consequently, varying parameters which influence the compacted region, such as the restitution coefficient, change the climbing height.
A new PIC noise reduction technique
NASA Astrophysics Data System (ADS)
Barnes, D. C.
2014-10-01
Numerical solution of the Vlasov equation is considered in a general situation in which there is an underlying static solution (equilibrium). There are no further assumptions about dimensionality, smallness of orbits, or disparate time scales. The semi-characteristic (SC) method for Vlasov solution is described. The usual characteristics of the equation, which are the single particle orbits, are modified in such a way that the equilibrium phase-space flow is removed. In this way, the shot noise introduced by the usual discrete particle representation of the equilibrium is static in time and can be removed completely by subtraction. An almost exact algorithm for this is based on the observation that an (infinitesimal or) discrete time step of any equilibrium Monte Carlo realization is again a realization of the equilibrium, building up strings of associated simulation particles. In this way, the only added discretization error arises from the need to extrapolate the chain end points backward in time by one time step dt using a canonical transformation. Previously developed energy-conserving time-implicit methods are applied without modification. 1D electrostatic examples of Landau damping and velocity-space instability are given to illustrate the method.
Diffusion of multiple species with excluded-volume effects.
Bruna, Maria; Chapman, S Jonathan
2012-11-28
Stochastic models of diffusion with excluded-volume effects are used to model many biological and physical systems at a discrete level. The average properties of the population may be described by a continuum model based on partial differential equations. In this paper we consider multiple interacting subpopulations/species and study how the inter-species competition emerges at the population level. Each individual is described as a finite-sized, hard-core interacting particle undergoing Brownian motion. The link between the discrete stochastic equations of motion and the continuum model is considered systematically using the method of matched asymptotic expansions. The system for two species leads to a nonlinear cross-diffusion system for each subpopulation, which captures the enhancement of the effective diffusion rate due to excluded-volume interactions between particles of the same species, and the reduction due to particles of the other species. This model can explain two alternative notions of the diffusion coefficient that are often confounded, namely collective diffusion and self-diffusion. Simulations of the discrete system show good agreement with the analytic results.
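A deliberately simplified on-lattice analogue of such excluded-volume dynamics is a two-species simple exclusion process, in which a hop is rejected whenever the target site is occupied (a toy stand-in for the off-lattice Brownian hard-sphere model analysed in the paper; all names are ours):

```python
import random

def sweep(sites, rng):
    """One Monte Carlo sweep of a two-species simple exclusion process on a
    periodic 1D lattice: L random hop attempts, each rejected if the target
    site is occupied (the excluded-volume constraint)."""
    L = len(sites)
    for _ in range(L):
        i = rng.randrange(L)
        if sites[i] == 0:          # empty site, nothing to move
            continue
        j = (i + rng.choice((-1, 1))) % L
        if sites[j] == 0:          # hop succeeds only into an empty site
            sites[j], sites[i] = sites[i], 0

rng = random.Random(0)
sites = [1] * 10 + [2] * 10 + [0] * 30   # species A, species B, vacancies
for _ in range(200):
    sweep(sites, rng)
```

Because hops only exchange a particle with a vacancy, both species' counts are conserved exactly, mirroring the conservation laws of the cross-diffusion PDE system.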
NASA Astrophysics Data System (ADS)
Tecla Falconi, Marta; von Lerber, Annakaisa; Ori, Davide; Silvio Marzano, Frank; Moisseev, Dmitri
2018-05-01
Radar-based snowfall intensity retrieval is investigated at centimeter and millimeter wavelengths using co-located ground-based multi-frequency radar and video-disdrometer observations. Using data from four snowfall events, recorded during the Biogenic Aerosols Effects on Clouds and Climate (BAECC) campaign in Finland, measurements of liquid-water-equivalent snowfall rate S are correlated to radar equivalent reflectivity factors Ze, measured by the Atmospheric Radiation Measurement (ARM) cloud radars operating at X, Ka and W frequency bands. From these combined observations, power-law Ze-S relationships are derived for all three frequencies considering the influence of riming. Using microwave radiometer observations of liquid water path, the measured precipitation is divided into lightly, moderately and heavily rimed snow. Interestingly, lightly rimed snow events show a spectrally distinct signature of Ze-S with respect to moderately or heavily rimed snow cases. In order to understand the connection between snowflake microphysical and multi-frequency backscattering properties, numerical simulations are performed by using the particle size distribution provided by the in situ video disdrometer and retrieved ice particle masses. These simulations are carried out by using both the T-matrix method (TMM) applied to soft-spheroid particle models with different aspect ratios and exploiting a pre-computed discrete dipole approximation (DDA) database for rimed aggregates. Based on the presented results, it is concluded that the soft-spheroid approximation can be adopted to explain the observed multi-frequency Ze-S relations if a proper spheroid aspect ratio is selected. The latter may depend on the degree of riming in snowfall. A further analysis of the backscattering simulations reveals that TMM cross sections are higher than the DDA ones for small ice particles, but lower for larger particles.
The differences of computed cross sections for larger and smaller particles are compensating for each other. This may explain why the soft-spheroid approximation is satisfactory for radar reflectivity simulations under study.
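Deriving a power-law Ze-S relationship of the kind used above reduces to ordinary least squares in log-log space. A minimal sketch on synthetic, noise-free data (illustrative only; not the BAECC processing chain, and the numbers are invented):

```python
import math

def fit_power_law(s, ze):
    """Fit Ze = a * S**b by least squares on (log S, log Ze)."""
    xs = [math.log(v) for v in s]
    ys = [math.log(v) for v in ze]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

s = [0.1, 0.5, 1.0, 2.0, 5.0]             # snowfall rates (illustrative)
a, b = fit_power_law(s, [100.0 * v ** 1.5 for v in s])
```

Fitting separate (a, b) pairs per frequency band and per riming class is then simply a matter of partitioning the observations before the fit.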
Analysis of Iron in Lawn Fertilizer: A Sampling Study
ERIC Educational Resources Information Center
Jeannot, Michael A.
2006-01-01
An experiment is described which uses a real-world sample of lawn fertilizer in a simple exercise to illustrate problems associated with the sampling step of a chemical analysis. A mixed-particle fertilizer containing discrete particles of iron oxide (magnetite, Fe3O4) mixed with other particles provides an excellent…
Discrete structure of an RNA folding intermediate revealed by cryo-electron microscopy.
Baird, Nathan J; Ludtke, Steven J; Khant, Htet; Chiu, Wah; Pan, Tao; Sosnick, Tobin R
2010-11-24
RNA folding occurs via a series of transitions between metastable intermediate states. It is unknown whether folding intermediates are discrete structures folding along defined pathways or heterogeneous ensembles folding along broad landscapes. We use cryo-electron microscopy and single-particle image reconstruction to determine the structure of the major folding intermediate of the specificity domain of a ribonuclease P ribozyme. Our results support the existence of a discrete conformation for this folding intermediate.
Numerical modeling for dilute and dense sprays
NASA Technical Reports Server (NTRS)
Chen, C. P.; Kim, Y. M.; Shang, H. M.; Ziebarth, J. P.; Wang, T. S.
1992-01-01
We have successfully implemented a numerical model for spray-combustion calculations. In this model, the governing gas-phase equations in Eulerian coordinates are solved by a time-marching multiple pressure correction procedure based on the operator-splitting technique. The droplet-phase equations in Lagrangian coordinates are solved by a stochastic discrete particle technique. In order to simplify the calculation procedure for the circulating droplets, the effective conductivity model is utilized. The k-epsilon models are utilized to characterize the time and length scales of the gas phase in conjunction with turbulent modulation by droplets and droplet dispersion by turbulence. This method entails random sampling of instantaneous gas flow properties, and the stochastic process requires a large number of computational parcels to produce satisfactory dispersion distributions even for rather dilute sprays. Two major improvements in spray combustion modeling were made. Firstly, we have developed a probability density function approach in multidimensional space to represent a specific computational particle. Secondly, we incorporate the Taylor Analogy Breakup (TAB) model for handling the dense spray effects. This breakup model is based on the reasonable assumption that atomization and drop breakup are indistinguishable processes within a dense spray near the nozzle exit. Accordingly, atomization is prescribed by injecting drops which have a characteristic size equal to the nozzle exit diameter. Example problems include the nearly homogeneous and inhomogeneous turbulent particle dispersion, and the non-evaporating, evaporating, and burning dense sprays. Comparison with experimental data will be discussed in detail.
Yates, Christian A; Flegg, Mark B
2015-05-06
Spatial reaction-diffusion models have been employed to describe many emergent phenomena in biological systems. The modelling technique most commonly adopted in the literature implements systems of partial differential equations (PDEs), which assumes there are sufficient densities of particles that a continuum approximation is valid. However, owing to recent advances in computational power, the simulation, and therefore postulation, of computationally intensive individual-based models has become a popular way to investigate the effects of noise in reaction-diffusion systems in which regions of low copy numbers exist. The specific stochastic models with which we shall be concerned in this manuscript are referred to as 'compartment-based' or 'on-lattice'. These models are characterized by a discretization of the computational domain into a grid/lattice of 'compartments'. Within each compartment, particles are assumed to be well mixed and are permitted to react with other particles within their compartment or to transfer between neighbouring compartments. Stochastic models provide accuracy, but at the cost of significant computational resources. For models that have regions of both low and high concentrations, it is often desirable, for reasons of efficiency, to employ coupled multi-scale modelling paradigms. In this work, we develop two hybrid algorithms in which a PDE in one region of the domain is coupled to a compartment-based model in the other. Rather than attempting to balance average fluxes, our algorithms answer a more fundamental question: 'how are individual particles transported between the vastly different model descriptions?' First, we present an algorithm derived by carefully redefining the continuous PDE concentration as a probability distribution. While this first algorithm shows very strong convergence to analytical solutions of test problems, it can be cumbersome to simulate.
Our second algorithm is a simplified and more efficient implementation of the first, it is derived in the continuum limit over the PDE region alone. We test our hybrid methods for functionality and accuracy in a variety of different scenarios by comparing the averaged simulations with analytical solutions of PDEs for mean concentrations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
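The compartment-based side of such a hybrid can be sketched with a standard Gillespie (SSA) treatment of pure diffusion on a 1D line of compartments, in which each particle jumps to an available neighbouring compartment at rate d (a generic illustration, not the authors' hybrid coupling; all names are ours):

```python
import random

def ssa_diffusion(counts, d, t_max, rng):
    """Compartment-based stochastic diffusion (Gillespie/SSA): each particle
    jumps to each available neighbouring compartment at rate d; boundaries
    are reflecting."""
    k = len(counts)
    t = 0.0
    while True:
        moves, props = [], []          # candidate jumps and their propensities
        for i in range(k):
            if counts[i] == 0:
                continue
            if i > 0:
                moves.append((i, -1)); props.append(d * counts[i])
            if i < k - 1:
                moves.append((i, +1)); props.append(d * counts[i])
        total = sum(props)
        if total == 0.0:
            return counts
        t += rng.expovariate(total)    # time to next jump event
        if t > t_max:
            return counts
        r = rng.random() * total       # pick a jump proportional to propensity
        for (i, di), p in zip(moves, props):
            r -= p
            if r < 0.0:
                break
        counts[i] -= 1
        counts[i + di] += 1

rng = random.Random(2)
counts = ssa_diffusion([100, 0, 0, 0, 0], 1.0, 50.0, rng)
```

Run long enough, the initially concentrated particles spread toward a uniform occupancy while the total count is conserved exactly, which is the invariant any PDE-to-compartment coupling must also respect.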
NASA Technical Reports Server (NTRS)
Russell, Louis M.; Thurman, Douglas R.; Poinsatte, Philip E.; Hippensteele, Steven A.
1998-01-01
An experimental study was made to obtain quantitative information on heat transfer, flow, and pressure distribution in a branched duct test section that had several significant features of an internal cooling passage of a turbine blade. The objective of this study was to generate a set of experimental data that could be used for validation of computer codes that would be used to model internal cooling. Surface heat transfer coefficients and entrance flow conditions were measured at nominal entrance Reynolds numbers of 45,000, 335,000, and 726,000. Heat transfer data were obtained by using a steady-state technique in which an Inconel heater sheet is attached to the surface and coated with liquid crystals. Visual and quantitative flow-field data from particle image velocimetry measurements for a plane at midchannel height for a Reynolds number of 45,000 were also obtained. The flow was seeded with polystyrene particles and illuminated by a laser light sheet. Pressure distribution measurements were made both on the surface with discrete holes and in the flow field with a total pressure probe. The flow-field measurements yielded flow-field velocities at selected locations. A relatively new method, pressure sensitive paint, was also used to measure surface pressure distribution. The pressure paint data obtained at Reynolds numbers of 335,000 and 726,000 compared well with the more standard method of measuring pressures by using discrete holes.
Verma, Vikrant; Li, Tingwen; De Wilde, Juray
2017-05-26
Vortex chambers allow the generation of rotating fluidized beds, offering high-G intensified gas-solid contact, gas-solids separation and solids-solids segregation. Focusing on binary particle mixtures and fixing the density and diameter of the heavy/large particles, transient batch CFD-coarse-grained DPM simulations were carried out with varying densities or sizes of the light/small particles to evaluate to what extent combining these three functionalities is possible within a vortex chamber of given design. Both the rate and quality of segregation were analyzed. Within a relatively wide density and size range, fast and efficient segregation takes place, with an inner and slower rotating bed of the lighter/small particles forming within the outer and faster rotating bed of the heavier/large particles. Simulations show that the contamination of the outer bed with lighter particles occurs more easily than contamination of the inner bed with heavier particles and increases with decreasing difference in size or density of the particles. Bubbling in the inner bed is observed with an inner bed of very low density or small particles. Porosity plots show that vortex chambers with a sufficient number of gas inlet slots have to be used to guarantee a uniform gas distribution and particle bed. Lastly, the flexibility of particle segregation in vortex chambers with respect to the gas flow rate is demonstrated.
Xia, Kelin
2017-12-20
In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of MVP-ANM model for large-sized biomolecules has been demonstrated by using two poliovirus virus structures. The paper ends with a conclusion.
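At full resolution the MVP-GNM reduces to the classical GNM, whose B-factor prediction takes only a few lines: build the Kirchhoff (connectivity) matrix from contacts within a cutoff, pseudo-invert it, and read the (unscaled) B-factors off the diagonal. A minimal sketch follows (illustrative only, not the authors' MVP implementation; coordinates are a toy straight chain):

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Gaussian network model: Kirchhoff matrix from pairwise contacts
    within `cutoff`, then B-factors (up to an overall scale) from the
    diagonal of its pseudo-inverse."""
    n = len(coords)
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                gamma[i, j] = gamma[j, i] = -1.0
    np.fill_diagonal(gamma, -gamma.sum(axis=1))   # diagonal = node degree
    return np.diag(np.linalg.pinv(gamma)).copy()

# Straight C-alpha-like chain with 3.8 A spacing: ends should be most mobile.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(10)])
b = gnm_bfactors(coords)
```

Even this toy chain reproduces the qualitative GNM signature that chain termini have the largest predicted B-factors, the global trend the abstract reports MVP-GNM capturing at low resolution.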
A study on the gas-solid particle flows in a needle-free drug delivery device
NASA Astrophysics Data System (ADS)
Rasel, Md. Alim Iftekhar; Taher, Md. Abu; Kim, H. D.
2013-08-01
Different systems have been used over the years to deliver drug particles to the human skin for pharmaceutical effect. Research has been done to improve the performance and flexibility of these systems. In recent years a unique system called transdermal drug delivery has been developed. Transdermal drug delivery opened a new door in the field of drug delivery as it is more flexible and offers better performance than conventional systems. The principle of this system is to accelerate drug particles with a high-speed gas flow. Among different transdermal drug delivery systems, we concentrate on the contoured shock tube system in this paper. A contoured shock tube consists of a rupture chamber, a shock tube and a supersonic nozzle section. The drug particles are retained between a set of bursting diaphragms. When the diaphragm is ruptured at a certain pressure, a high-speed unsteady flow is initiated through the shock tube, which accelerates the particles. Computational fluid dynamics is used to simulate and analyze the flow field. The DPM (discrete phase method) is used to model the particle flow. Because an unsteady flow is initiated through the shock tube, the drag correlation proposed by Igra et al. is used rather than the standard drag correlation. The particle velocities at different sections, including the nozzle exit, are investigated under different operating conditions. Static pressure histories at different sections of the shock tube are investigated to analyze the flow field. The important aspects of the gas and particle dynamics in the shock tube are discussed and analyzed in detail.
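The particle-side update in a DPM calculation is essentially time integration of a drag law. The sketch below uses a constant drag coefficient as a stand-in for the unsteady Igra et al. correlation (all parameter values are illustrative, not operating conditions from the paper):

```python
import math

def accelerate_particle(u_gas, rho_gas, d_p, rho_p, u0, dt, n_steps, cd=0.44):
    """Explicit Euler integration of a spherical particle accelerating in a
    uniform gas stream under quadratic drag with a constant Cd."""
    m_p = rho_p * math.pi * d_p ** 3 / 6.0     # particle mass
    area = math.pi * d_p ** 2 / 4.0            # frontal area
    u = u0
    for _ in range(n_steps):
        u_rel = u_gas - u
        drag = 0.5 * rho_gas * cd * area * abs(u_rel) * u_rel
        u += (drag / m_p) * dt
    return u

# 50-micron particle released at rest into a 500 m/s stream (illustrative).
u = accelerate_particle(u_gas=500.0, rho_gas=1.2, d_p=50e-6, rho_p=1000.0,
                        u0=0.0, dt=1e-6, n_steps=2000)
```

The particle lags the gas throughout, which is why the choice of drag correlation, and hence the slip-velocity history, controls the predicted velocity at the nozzle exit.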
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadia T.
1999-01-01
Many remote sensing applications rely on accurate knowledge of the bidirectional reflection function (BRF) of surfaces composed of discrete, randomly positioned scattering particles. Theoretical computations of BRFs for plane-parallel particulate layers are usually reduced to solving the radiative transfer equation (RTE) using one of existing exact or approximate techniques. Since semi-empirical approximate approaches are notorious for their low accuracy, violation of the energy conservation law, and ability to produce unphysical results, the use of numerically exact solutions of RTE has gained justified popularity. For example, the computation of BRFs for macroscopically flat particulate surfaces in many geophysical publications is based on the adding-doubling (AD) and discrete ordinate (DO) methods. A further saving of computer resources can be achieved by using a more efficient technique to solve the plane-parallel RTE than the AD and DO methods. Since many natural particulate surfaces can be well represented by the model of an optically semi-infinite, homogeneous scattering layer, one can find the BRF directly by solving the Ambartsumian's nonlinear integral equation using a simple iterative technique. In this way, the computation of the internal radiation field is avoided and the computer code becomes highly efficient and very accurate and compact. Furthermore, the BRF thus obtained fully obeys the fundamental physical laws of energy conservation and reciprocity. In this paper, we discuss numerical aspects and the computer implementation of this technique, examine the applicability of the Henyey-Greenstein phase function and the delta-Eddington approximation in BRF and flux calculations, and describe sample applications demonstrating the potential effect of particle shape on the bidirectional reflectance of flat regolith surfaces. Although the effects of packing density and coherent backscattering are currently neglected, they can also be incorporated.
The FORTRAN implementation of the technique is available on the World Wide Web and can be applied to a wide range of remote sensing problems. BRF computations for undulated (macroscopically rough) surfaces are more complicated and often rely on time-consuming Monte Carlo procedures. This approach is especially inefficient for optically thick, weakly absorbing media (e.g., snow and desert surfaces at visible wavelengths), since a photon may undergo many internal scattering events before it exits the medium or is absorbed. However, undulated surfaces can often be represented as collections of locally flat tilted facets characterized by the BRF found from the traditional plane-parallel RTE. In this way the Monte Carlo procedure need be used only to evaluate the effects of surface shadowing and multiple surface reflections, thereby bypassing the time-consuming ray tracing inside the medium and providing great savings of CPU time.
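For the special case of isotropic scattering, the Ambartsumian-type nonlinear equation reduces to a scalar equation for Chandrasekhar's H-function, and the "simple iterative technique" mentioned above becomes a few lines of fixed-point iteration. A minimal sketch (a simplified scalar analogue for illustration, not the released FORTRAN code):

```python
import math

def h_function(omega, n=64, iters=300):
    """Chandrasekhar H-function for isotropic scattering with
    single-scattering albedo omega, by fixed-point iteration of
      H(mu) = 1 / (1 - (omega/2) * mu * int_0^1 H(m) / (mu + m) dm)
    discretized with the midpoint rule on (0, 1]."""
    mus = [(i + 0.5) / n for i in range(n)]
    w = 1.0 / n
    h = [1.0] * n
    for _ in range(iters):
        h = [1.0 / (1.0 - 0.5 * omega * mu *
                    sum(w * hj / (mu + mj) for hj, mj in zip(h, mus)))
             for mu in mus]
    return mus, w, h

mus, w, h = h_function(0.5)
moment0 = sum(w * hj for hj in h)                  # int_0^1 H(mu) dmu
exact = 2.0 * (1.0 - math.sqrt(1.0 - 0.5)) / 0.5  # exact zeroth moment
```

A convenient check of exactly the kind the abstract emphasizes: the converged discrete solution satisfies the energy-conservation moment identity, the zeroth moment of H equalling 2(1 - sqrt(1 - omega))/omega.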
Mariella, Jr., Raymond P.
2018-03-06
An isotachophoresis system for separating a sample containing particles into discrete packets including a flow channel, the flow channel having a large diameter section and a small diameter section; a negative electrode operably connected to the flow channel; a positive electrode operably connected to the flow channel; a leading carrier fluid in the flow channel; a trailing carrier fluid in the flow channel; and a control for separating the particles in the sample into discrete packets using the leading carrier fluid, the trailing carrier fluid, the large diameter section, and the small diameter section.
Batra, Saurabh; Cakmak, Miko
2015-12-28
In this study, the chaining and preferential alignment of barium titanate nanoparticles (100 nm) through the thickness direction of a polymer matrix in the presence of an electric field are shown. Application of an AC electric field in a well-dispersed solution leads to the formation of chains of nanoparticles in discrete rows oriented with their primary axis in the E-field direction due to dielectrophoresis. The change in the orientation of these chains was quantified through statistical analysis of SEM images and was found to depend on E-field strength, frequency and viscosity. When a DC field is applied, a distinct layer of densely packed particles was observed with micro-computed tomography. These studies show that increasing the DC voltage increases both the thickness of the particle-rich layer and its packing density. Increasing the mutual interactions between particles through the formation of particle chains in the "Z"-direction decreases the critical percolation concentration above which substantial enhancement of properties occurs. This manufacturing method therefore shows promise to lower product cost for a range of applications, including capacitors, by either enhancing the dielectric properties at a given concentration or reducing the concentration of nanoparticles needed for a given property.
NASA Astrophysics Data System (ADS)
Bazilevs, Y.; Moutsanidis, G.; Bueno, J.; Kamran, K.; Kamensky, D.; Hillman, M. C.; Gomez, H.; Chen, J. S.
2017-07-01
In this two-part paper we begin the development of a new class of methods for modeling fluid-structure interaction (FSI) phenomena for air blast. We aim to develop accurate, robust, and practical computational methodology, which is capable of modeling the dynamics of air blast coupled with the structure response, where the latter involves large, inelastic deformations and disintegration into fragments. An immersed approach is adopted, which leads to an a-priori monolithic FSI formulation with intrinsic contact detection between solid objects, and without formal restrictions on the solid motions. In Part I of this paper, the core air-blast FSI methodology suitable for a variety of discretizations is presented and tested using standard finite elements. Part II of this paper focuses on a particular instantiation of the proposed framework, which couples isogeometric analysis (IGA) based on non-uniform rational B-splines and a reproducing-kernel particle method (RKPM), which is a meshfree technique. The combination of IGA and RKPM is felt to be particularly attractive for the problem class of interest due to the higher-order accuracy and smoothness of both discretizations, and relative simplicity of RKPM in handling fragmentation scenarios. A collection of mostly 2D numerical examples is presented in each of the parts to illustrate the good performance of the proposed air-blast FSI framework.
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. 
The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
Constructing Contracts: Making Discrete Mathematics Relevant to Beginning Programmers
ERIC Educational Resources Information Center
Gegg-Harrison, Timothy S.
2005-01-01
Although computer scientists understand the importance of discrete mathematics to the foundations of their field, computer science (CS) students do not always see the relevance. Thus, it is important to find a way to show students its relevance. The concept of program correctness is generally taught as an activity independent of the programming…
NASA Technical Reports Server (NTRS)
Deffenbaugh, F. D.; Vitz, J. F.
1979-01-01
The user's manual for the Discrete Vortex Cross flow Evaluator (DIVORCE) computer program is presented. DIVORCE was developed in FORTRAN IV for the CDC 6600 and CDC 7600 machines. Optional calls to a NASA vector subroutine package are provided for use with the CDC 7600.
Li, Tongqing; Peng, Yuxing; Zhu, Zhencai; Zou, Shengyong; Yin, Zixin
2017-05-11
To predict what happens in reality inside mills, the contact parameters of iron ore particles for discrete element method (DEM) simulations should be determined accurately. To capture the irregular particle shape accurately, the sphere-clump method was employed in modelling the particle shape. The inter-particle contact parameters were systematically altered whilst the contact parameters between particle and wall were held fixed at assumed values, in order to isolate their impact on the angle of repose of the mono-sized iron ore particles. Results show that varying the restitution coefficient over the range considered does not lead to any obvious difference in the angle of repose, but the angle of repose is strongly sensitive to the rolling/static friction coefficient. The impacts of the rolling and static friction coefficients on the angle of repose are interrelated, and increasing the inter-particle rolling/static friction coefficient evidently increases the angle of repose; the impact of the static friction coefficient is more profound than that of the rolling friction coefficient. Finally, a predictive equation is established, and very close agreement between the predicted and simulated angles of repose is attained. This predictive equation can greatly shorten the calibration time for the inter-particle contact parameters, which helps in the implementation of DEM simulations.
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
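A minimal sketch of the transform underlying the algorithms above (not the authors' parallel convolver-bank implementation): in the cas-of-sum convention, the 2-D DHT of a real image can be obtained from a standard 2-D FFT as H = Re(F) - Im(F), which is the relationship that lets FFT-style fast algorithms accelerate Hartley-based Gabor transforms.

```python
import numpy as np

def dht2(x):
    # 2-D discrete Hartley transform (cas-of-sum convention),
    # computed from the 2-D FFT: H = Re(F) - Im(F).
    F = np.fft.fft2(x)
    return F.real - F.imag

def dht2_direct(x):
    # Direct evaluation from the cas kernel, for verification only.
    N, M = x.shape
    n, m = np.arange(N), np.arange(M)
    H = np.empty((N, M))
    for k in range(N):
        for l in range(M):
            theta = 2 * np.pi * (k * n[:, None] / N + l * m[None, :] / M)
            H[k, l] = np.sum(x * (np.cos(theta) + np.sin(theta)))
    return H
```

Note that the separable cas·cas convention used by some fast 2-D DHT algorithms differs from this cas-of-sum form; the sketch only illustrates the FFT relationship.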
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cacuci, Dan G.; Favorite, Jeffrey A.
2018-04-06
This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
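The central-difference baseline that the 2nd-ASAM is compared against can be sketched generically: estimating all first- and second-order sensitivities of a scalar response R(alpha) by finite differences requires on the order of 2n² forward solves for n parameters, which is why 877 PARTISN runs were needed versus 12 adjoint computations. A hedged illustration with a cheap stand-in response (the function R and the step h are placeholders, not the benchmark's):

```python
import numpy as np

def central_sensitivities(R, alpha, h=1e-4):
    """All first- and second-order central-difference sensitivities of a
    scalar response R with respect to the parameter vector alpha.
    Illustrative baseline only: the 2nd-ASAM obtains these exactly from
    a fixed, small number of adjoint solves instead."""
    alpha = np.asarray(alpha, dtype=float)
    n = alpha.size
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    R0 = R(alpha)
    E = np.eye(n) * h  # perturbation vectors, one per parameter
    for i in range(n):
        grad[i] = (R(alpha + E[i]) - R(alpha - E[i])) / (2 * h)
        hess[i, i] = (R(alpha + E[i]) - 2 * R0 + R(alpha - E[i])) / h**2
        for j in range(i + 1, n):
            hess[i, j] = hess[j, i] = (
                R(alpha + E[i] + E[j]) - R(alpha + E[i] - E[j])
                - R(alpha - E[i] + E[j]) + R(alpha - E[i] - E[j])
            ) / (4 * h**2)
    return grad, hess
```

The scheme also exposes the step-size tuning problem the abstract alludes to: extra runs are needed to find perturbations h small enough for accuracy but large enough to avoid round-off.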
Direct modeling for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun
2015-06-01
All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. 
Here, the CFD is more or less a direct construction of discrete numerical evolution equations, where the mesh size and time step will play dynamic roles in the modeling process. With the variation of the ratio between mesh size and local particle mean free path, the scheme will capture flow physics from the kinetic particle transport and collision to the hydrodynamic wave propagation. Based on the direct modeling, a continuous dynamics of flow motion will be captured in the unified gas-kinetic scheme. This scheme can be faithfully used to study the unexplored non-equilibrium flow physics in the transition regime.
Residue-Specific Side-Chain Polymorphisms via Particle Belief Propagation.
Ghoraie, Laleh Soltan; Burkowski, Forbes; Li, Shuai Cheng; Zhu, Mu
2014-01-01
Protein side chains populate diverse conformational ensembles in crystals. Despite much evidence that there is widespread conformational polymorphism in protein side chains, most of the X-ray crystallography data are modeled by single conformations in the Protein Data Bank. The ability to extract or to predict these conformational polymorphisms is of crucial importance, as it facilitates deeper understanding of protein dynamics and functionality. In this paper, we describe a computational strategy capable of predicting side-chain polymorphisms. Our approach extends a particular class of algorithms for side-chain prediction by modeling the side-chain dihedral angles more appropriately as continuous rather than discrete variables. Employing a new inferential technique known as particle belief propagation, we predict residue-specific distributions that encode information about side-chain polymorphisms. Our predicted polymorphisms are in relatively close agreement with results from a state-of-the-art approach based on X-ray crystallography data, which characterizes the conformational polymorphisms of side chains using electron density information, and has successfully discovered previously unmodeled conformations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackowski, Daniel W.; Mishchenko, Michael I.
The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.
Millimeter wave radiative transfer studies for precipitation measurements
NASA Technical Reports Server (NTRS)
Vivekanandan, J.; Evans, Frank
1989-01-01
Scattering calculations using the discrete dipole approximation and vector radiative transfer calculations were performed to model multiparameter radar return and passive microwave emission for a simple model of a winter storm. The issue of dendrite riming was addressed by computing scattering properties of thin ice disks with varying bulk density. It was shown that C-band multiparameter radar contains information about particle density and the number concentration of the ice particles. The radiative transfer modeling indicated that polarized multifrequency passive microwave emission may be used to infer some properties of ice hydrometers. Detailed radar modeling and vector radiative transfer modeling is in progress to enhance the understanding of simultaneous radar and radiometer measurements, as in the case of the proposed TRMM field program. A one-dimensional cloud model will be used to simulate the storm structure in detail and study the microphysics, such as size and density. Multifrequency polarized radiometer measurements from the SSMI satellite instrument will be analyzed in relation to dual-frequency and dual-polarization radar measurements.
MCNP capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.
Micromechanical Aspects of Hydraulic Fracturing Processes
NASA Astrophysics Data System (ADS)
Galindo-torres, S. A.; Behraftar, S.; Scheuermann, A.; Li, L.; Williams, D.
2014-12-01
A micromechanical model is developed to simulate the hydraulic fracturing process. The model comprises two key components. Firstly, the solid matrix, assumed as a rock mass with pre-fabricated cracks, is represented by an array of bonded particles simulated by the Discrete Element Model (DEM) [1]. The interaction is ruled by the spheropolyhedra method, which was introduced by the authors previously and has been shown to realistically represent many of the features found in fracturing and comminution processes. The second component is the fluid, which is modelled by the Lattice Boltzmann Method (LBM). It was recently coupled with the spheropolyhedra by the authors and validated. An advantage of this coupled LBM-DEM model is the control of many of the parameters of the fracturing fluid, such as its viscosity and the injection rate. To the best of the authors' knowledge this is the first application of such a coupled scheme for studying hydraulic fracturing [2]. In this first implementation, results are presented for a two-dimensional situation. Fig. 1 shows one snapshot of the coupled LBM-DEM simulation of hydraulic fracturing, in which the elements with broken bonds can be identified and the fracture geometry quantified. The simulation involves a variation of the underground stress, particularly the difference between the two principal components of the stress tensor, to explore the effect on the fracture path. A second study focuses on the fluid viscosity to examine the effect of the time scales of different injection plans on the fracture geometry. The developed tool and the presented results have important implications for future studies of the hydraulic fracturing process and technology.
References:
[1] Galindo-Torres, S.A., et al., Breaking processes in three-dimensional bonded granular materials with general shapes. Computer Physics Communications, 2012. 183(2): p. 266-277.
[2] Galindo-Torres, S.A., A coupled Discrete Element Lattice Boltzmann Method for the simulation of fluid-solid interaction with particles of general shapes. Computer Methods in Applied Mechanics and Engineering, 2013. 265(0): p. 107-119.
Kroupa, Martin; Vonka, Michal; Soos, Miroslav; Kosek, Juraj
2015-07-21
The coagulation process has a dramatic impact on the properties of dispersions of colloidal particles including the change of optical, rheological, as well as texture properties. We model the behavior of a colloidal dispersion with moderate particle volume fraction, that is, 5 wt %, subjected to high shear rates employing the time-dependent Discrete Element Method (DEM) in three spatial dimensions. The Derjaguin-Landau-Verwey-Overbeek (DLVO) theory was used to model noncontact interparticle interactions, while contact mechanics was described by the Johnson-Kendall-Roberts (JKR) theory of adhesion. The obtained results demonstrate that the steady-state size of the produced clusters is a strong function of the applied shear rate, primary particle size, and the surface energy of the particles. Furthermore, it was found that the cluster size is determined by the maximum adhesion force between the primary particles and not the adhesion energy. This observation is in agreement with several simulation studies and is valid for the case when the particle-particle contact is elastic and no plastic deformation occurs. These results are of major importance, especially for the emulsion polymerization process, during which the fouling of reactors and piping causes significant financial losses.
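The finding that cluster size is set by the maximum adhesion force rather than the adhesion energy can be made concrete with the JKR pull-off force, which for two elastic spheres is F_c = (3/2) * pi * W * R_eff. This is a textbook JKR relation, not code from the paper, and the parameter values in the test are hypothetical:

```python
import math

def jkr_pulloff_force(work_of_adhesion, r1, r2):
    """Maximum adhesion (pull-off) force between two elastic spheres in
    JKR theory: F_c = (3/2) * pi * W * R_eff, with work of adhesion W
    (J/m^2) and effective radius R_eff = r1*r2/(r1 + r2)."""
    r_eff = r1 * r2 / (r1 + r2)
    return 1.5 * math.pi * work_of_adhesion * r_eff
```

Because F_c scales with both the surface energy and the primary-particle radius, the dependence of steady-state cluster size on those two quantities reported above is consistent with a maximum-adhesion-force criterion.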
Discrete bivariate population balance modelling of heteroaggregation processes.
Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai
2009-08-15
Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.
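The deterministic Brownian aggregation rates mentioned above are typically built on the Smoluchowski kernel, slowed by a DLVO stability ratio. A minimal sketch of that building block (the paper's charge-balance kernel additionally couples the electrostatic surface potential to the property space; the default water-like viscosity and the radii in the test are illustrative assumptions):

```python
def brownian_kernel(ri, rj, temperature=298.15, viscosity=8.9e-4,
                    stability_ratio=1.0):
    """Smoluchowski rate kernel (m^3/s) for Brownian aggregation of two
    spheres of radii ri, rj (m), divided by a DLVO stability ratio
    W >= 1 (W = 1 corresponds to fully destabilised particles)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return (2.0 * kB * temperature / (3.0 * viscosity)
            * (ri + rj) * (1.0 / ri + 1.0 / rj) / stability_ratio)
```

For equal radii the kernel is size-independent, about 1.2e-17 m^3/s in water at room temperature, which is the classical fast-coagulation rate.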
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to get an insight of flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for Hertz-Mindlin model using soybeans as a granular material. To achieve this aim, widely acceptable calibration approach was used having standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retaining the acrylic wall, and angle of repose of experiments and numerical simulations were compared to get the parameters. The calibrated set of DEM input parameters includes the following (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3} ), and (b) interaction parameters such as particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle scale mechanics (grain commingling, flow/motion, forces, etc) of soybeans in post-harvest machinery and devices.
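The quantitative comparison step above hinges on extracting an angle of repose from simulated particle positions; one simple way is a least-squares line fit to the heap flank. This is a hypothetical post-processing helper for illustration, not the calibration code used in the study:

```python
import numpy as np

def angle_of_repose(x, z):
    """Estimate the angle of repose (degrees) from particle-centre
    coordinates (x, z) sampled on one flank of a settled heap,
    via a least-squares line fit."""
    slope, _ = np.polyfit(x, z, 1)
    return float(np.degrees(np.arctan(abs(slope))))
```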
ERIC Educational Resources Information Center
Rosenstein, Joseph G., Ed.; Franzblau, Deborah S., Ed.; Roberts, Fred S., Ed.
This book is a collection of articles by experienced educators and explains why and how discrete mathematics should be taught in K-12 classrooms. It includes evidence for "why" and practical guidance for "how" and also discusses how discrete mathematics can be used as a vehicle for achieving the broader goals of the major…
Network Science Research Laboratory (NSRL) Discrete Event Toolkit
2016-01-01
ARL-TR-7579, January 2016. US Army Research Laboratory. Network Science Research Laboratory (NSRL) Discrete Event Toolkit, by Theron Trout and Andrew J Toth, Computational and Information Sciences Directorate, ARL.
NASA Astrophysics Data System (ADS)
Noyes, H. Pierre; Starson, Scott
1991-03-01
Discrete physics, because it replaces time evolution generated by the energy operator with a global bit-string generator (program universe) and replaces fields with the relativistic Wheeler-Feynman action at a distance, allows the consistent formulation of the concept of signed gravitational charge for massive particles. The resulting prediction made by this version of the theory is that free anti-particles near the surface of the earth will fall up with the same acceleration with which the corresponding particles fall down. So far as we can see, no current experimental information is in conflict with this prediction of our theory. The experimentum crucis will be one of the anti-proton or anti-hydrogen experiments at CERN. Our prediction should be much easier to test than the small effects which those experiments are currently designed to detect or bound.
Natural electroweak breaking from a mirror symmetry.
Chacko, Z; Goh, Hock-Seng; Harnik, Roni
2006-06-16
We present "twin Higgs models," simple realizations of the Higgs boson as a pseudo Goldstone boson that protect the weak scale from radiative corrections up to scales of order 5-10 TeV. In the ultraviolet these theories have a discrete symmetry which interchanges each standard model particle with a corresponding particle which transforms under a twin or a mirror standard model gauge group. In addition, the Higgs sector respects an approximate global symmetry. When this global symmetry is broken, the discrete symmetry tightly constrains the form of corrections to the pseudo Goldstone Higgs potential, allowing natural electroweak symmetry breaking. Precision electroweak constraints are satisfied by construction. These models demonstrate that, contrary to the conventional wisdom, stabilizing the weak scale does not require new light particles charged under the standard model gauge groups.
Gibbsian Stationary Non-equilibrium States
NASA Astrophysics Data System (ADS)
De Carlo, Leonardo; Gabrielli, Davide
2017-09-01
We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions. We apply both of them to determine non reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence free flows on the transition graph. Since divergence free flows are characterized by cyclic decompositions we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translational covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component. All three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field, and we use this decomposition to construct models having a fixed invariant measure.
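The first construction can be illustrated concretely: an elementary cycle on the transition graph generates a divergence-free flow, since every node on the cycle receives exactly what it sends. A toy sketch with hypothetical node labels (the actual configuration spaces in the paper are far larger):

```python
def cycle_flow(cycle):
    """Unit flow along the directed edges of an elementary cycle,
    e.g. ['a', 'b', 'c'] -> unit flow on (a,b), (b,c), (c,a)."""
    return {(cycle[i], cycle[(i + 1) % len(cycle)]): 1.0
            for i in range(len(cycle))}

def divergence(flow, node):
    # Net outflow minus inflow at a node; zero everywhere for a cycle flow.
    out_f = sum(q for (x, _), q in flow.items() if x == node)
    in_f = sum(q for (_, y), q in flow.items() if y == node)
    return out_f - in_f
```

Summing such cycle flows reproduces the cyclic decomposition of any divergence-free flow mentioned above.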
Comparative analysis of two discretizations of Ricci curvature for complex networks.
Samal, Areejit; Sreejith, R P; Gu, Jiao; Liu, Shiping; Saucan, Emil; Jost, Jürgen
2018-06-05
We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus, the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially, in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices.
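For unweighted graphs, the Forman-Ricci curvature of an edge reduces to a simple degree formula, and the augmented variant credits each triangle through the edge. A minimal sketch restricted to the unweighted case (the paper also treats weighted networks, and Ollivier-Ricci curvature requires an optimal-transport computation not shown here):

```python
def forman_curvatures(edges):
    """Forman-Ricci curvature of each edge of an unweighted graph,
    F(u, v) = 4 - deg(u) - deg(v), together with the augmented variant
    that adds 3 for every triangle (2-simplex) containing the edge."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    plain, augmented = {}, {}
    for u, v in edges:
        f = 4 - len(adj[u]) - len(adj[v])
        t = len(adj[u] & adj[v])  # number of triangles through (u, v)
        plain[(u, v)] = f
        augmented[(u, v)] = f + 3 * t
    return plain, augmented
```

The O(|E|) cost of this formula, versus the linear programs behind Ollivier-Ricci curvature, is the practical advantage highlighted above.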
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Clark, R. E.; Thoma, C.; Zimmerman, W. R.; Bruner, N.; Rambo, P. K.; Atherton, B. W.
2011-09-01
Streamer and leader formation in high pressure devices is a dynamic process involving a broad range of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. Accurate modeling of these physical processes is essential for a number of applications, including high-current, laser-triggered gas switches. Towards this end, we present a new 3D implicit particle-in-cell simulation model of gas breakdown leading to streamer formation in electronegative gases. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge [D. R. Welch, T. C. Genoni, R. E. Clark, and D. V. Rose, J. Comput. Phys. 227, 143 (2007)]. The simulation model is fully electromagnetic, making it capable of following, for example, the evolution of a gas switch from the point of laser-induced localized breakdown of the gas between electrodes through the successive stages of streamer propagation, initial electrode current connection, and high-current conduction channel evolution, where self-magnetic field effects are likely to be important. We describe the model details and underlying assumptions used and present sample results from 3D simulations of streamer formation and propagation in SF6.
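The particle-count-management idea can be sketched in miniature: merging two macro-particles of one species into a single particle while conserving total statistical weight (hence charge) and momentum. This toy merge is only an illustration; the algorithm cited above [Welch et al., J. Comput. Phys. 227, 143 (2007)] additionally preserves the local momentum distribution functions rather than collapsing them:

```python
def merge_particles(p1, p2):
    """Merge two macro-particles {'w': weight, 'v': velocity tuple} into
    one, conserving total weight and total momentum (weight * velocity)."""
    w = p1["w"] + p2["w"]
    v = tuple((p1["w"] * a + p2["w"] * b) / w
              for a, b in zip(p1["v"], p2["v"]))
    return {"w": w, "v": v}
```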
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law, S(f, P) = 1/((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
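Amdahl's Law as stated above is a one-liner, shown here for concreteness; the limiting behavior S -> 1/(1 - f) as P grows is what makes the serial fraction the bottleneck the abstract warns about.

```python
def amdahl_speedup(f, p):
    """Amdahl's Law: theoretical speedup S(f, P) = 1 / ((1 - f) + f/P)
    for a parallel (multiprocessed) fraction f of the task time
    running on P processors."""
    return 1.0 / ((1.0 - f) + f / p)
```

For example, a task that is 90% parallel can never exceed a tenfold speedup, no matter how many processors are added.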
NASA Astrophysics Data System (ADS)
Sun, Rui; Xiao, Heng
2016-04-01
With the growth of available computational resource, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in chemical engineering and mining industry. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built based on open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation test, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in the simulations using up to O(10^7) particles.
Identifying Wave-Particle Interactions in the Solar Wind using Statistical Correlations
NASA Astrophysics Data System (ADS)
Broiles, T. W.; Jian, L. K.; Gary, S. P.; Lepri, S. T.; Stevens, M. L.
2017-12-01
Heavy ions are a trace component of the solar wind, which can resonate with plasma waves, causing heating and acceleration relative to the bulk plasma. While wave-particle interactions are generally accepted as the cause of heavy ion heating and acceleration, observations to constrain the physics are lacking. In this work, we statistically link specific wave modes to heavy ion heating and acceleration. We have computed the Fast Fourier Transform (FFT) of transverse and compressional magnetic waves between 0 and 5.5 Hz using 9 days of ACE and Wind Magnetometer data. The FFTs are averaged over plasma measurement cycles to compute statistical correlations between magnetic wave power at each discrete frequency, and ion kinetic properties measured by ACE/SWICS and Wind/SWE. The results show that lower frequency transverse oscillations (< 0.2 Hz) and higher frequency compressional oscillations (> 0.4 Hz) are positively correlated with enhancements in the heavy ion thermal and drift speeds. Moreover, the correlation results for the He2+ and O6+ were similar on most days. The correlations were often weak, but most days had some frequencies that correlated with statistical significance. This work suggests that the solar wind heavy ions are possibly being heated and accelerated by both transverse and compressional waves at different frequencies.
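The analysis pipeline described here (per-cycle wave power at each frequency, correlated against ion properties) can be sketched schematically. The naive DFT and Pearson correlation below are stdlib-only stand-ins for the FFT and statistical machinery actually used; all names are illustrative:

```python
import cmath, math

def dft_power(x):
    """Naive DFT power spectrum (|X_k|^2 / n) of a real time series."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))**2 / n
            for k in range(n // 2)]

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma)**2 for x in a))
    sb = math.sqrt(sum((y - mb)**2 for y in b))
    return cov / (sa * sb)

# Toy check: a constant signal has all its power in the k = 0 bin.
print(dft_power([1.0, 1.0, 1.0, 1.0]))
```

In the study's terms, one would compute `dft_power` per measurement cycle, then `pearson` between the power in a given frequency bin and, e.g., the heavy-ion thermal speed across cycles.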
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisio, Alessandro; D’Ariano, Giacomo Mauro; Tosini, Alessandro, E-mail: alessandro.tosini@unipv.it
We present a quantum cellular automaton model in one space-dimension which has the Dirac equation as emergent. This model, a discrete-time and causal unitary evolution of a lattice of quantum systems, is derived from the assumptions of homogeneity, parity and time-reversal invariance. The comparison between the automaton and the Dirac evolutions is rigorously set as a discrimination problem between unitary channels. We derive an exact lower bound for the probability of error in the discrimination as an explicit function of the mass, the number and the momentum of the particles, and the duration of the evolution. Computing this bound with experimentally achievable values, we see that in that regime the QCA model cannot be discriminated from the usual Dirac evolution. Finally, we show that the evolution of one-particle states with narrow-band in momentum can be efficiently simulated by a dispersive differential equation for any regime. This analysis allows for a comparison with the dynamics of wave-packets as it is described by the usual Dirac equation. This paper is a first step in exploring the idea that quantum field theory could be grounded on a more fundamental quantum cellular automaton model and that physical dynamics could emerge from quantum information processing. In this framework, the discretization is a central ingredient and not only a tool for performing non-perturbative calculation as in lattice gauge theory. The automaton model, endowed with a precise notion of local observables and a full probabilistic interpretation, could lead to a coherent unification of a hypothetical discrete Planck scale with the usual Fermi scale of high-energy physics. - Highlights: • The free Dirac field in one space dimension as a quantum cellular automaton. • Large scale limit of the automaton and the emergence of the Dirac equation. • Dispersive differential equation for the evolution of smooth states on the automaton.
• Optimal discrimination between the automaton evolution and the Dirac equation.
Comparison Between 2D and 3D Simulations of Rate Dependent Friction Using DEM
NASA Astrophysics Data System (ADS)
Wang, C.; Elsworth, D.
2017-12-01
Rate-state dependent constitutive laws of frictional evolution have been successful in representing many of the first- and second-order components of earthquake rupture. Although this constitutive law has been successfully applied in numerical models, difficulty remains in efficient implementation of this constitutive law in computationally-expensive granular mechanics simulations using discrete element methods (DEM). This study introduces a novel approach in implementing a rate-dependent constitutive relation of contact friction into DEM. This is essentially an implementation of a slip-weakening constitutive law onto local particle contacts without sacrificing computational efficiency. This implementation allows the analysis of slip stability of simulated fault gouge materials. Velocity-stepping experiments are reported on both uniform and textured distributions of quartz and talc as 3D analogs of gouge mixtures. Distinct local slip stability parameters (a-b) are assigned to the quartz and talc, respectively. We separately vary talc content from 0 to 100% in the uniform mixtures and talc layer thickness from 1 to 20 particles in the textured mixtures. Applied shear displacements are cycled through velocities of 1 μm/s and 10 μm/s. Frictional evolution data are collected and compared to 2D simulation results. We show that dimensionality significantly impacts the evolution of friction. 3D simulation results are more representative of laboratory observed behavior and numerical noise is shown at a magnitude of 0.01 in terms of friction coefficient. Stability parameters (a-b) can be straightforwardly obtained from analyzing velocity steps, and are different from locally assigned (a-b) values. Sensitivity studies on normal stress, shear velocity, particle size, local (a-b) values, and characteristic slip distance (Dc) show that the implementation is sensitive to local (a-b) values and relations between (Dc) and particle size.
Multiple Scattering in Planetary Regoliths Using Incoherent Interactions
NASA Astrophysics Data System (ADS)
Muinonen, K.; Markkanen, J.; Vaisanen, T.; Penttilä, A.
2017-12-01
We consider scattering of light by a planetary regolith using novel numerical methods for discrete random media of particles. Understanding the scattering process is of key importance for spectroscopic, photometric, and polarimetric modeling of airless planetary objects, including radar studies. In our modeling, the size of the spherical random medium can range from microscopic to macroscopic sizes, whereas the particles are assumed to be of the order of the wavelength in size. We extend the radiative transfer and coherent backscattering method (RT-CB) to the case of dense packing of particles by adopting the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path. Furthermore, we replace the far-field interactions of the RT-CB method with rigorous interactions facilitated by the Superposition T-matrix method (STMM). This gives rise to a new RT-RT method, radiative transfer with reciprocal interactions. For microscopic random media, we then compare the new results to asymptotically exact results computed using the STMM, succeeding in the numerical validation of the new methods. Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL (Scattering and Absorption of ElectroMagnetic waves in ParticuLate media). Computational resources were provided by CSC - IT Centre for Science Ltd, Finland.
de Gabory, Ludovic; Reville, Nicolas; Baux, Yannick; Boisson, Nicolas; Bordenave, Laurence
2018-01-16
Computational fluid dynamic (CFD) simulations have greatly improved the understanding of nasal physiology. We postulate that simulating the entire and repeated respiratory nasal cycles, within the whole sinonasal cavities, is mandatory to gather more accurate observations and better understand airflow patterns. A 3-dimensional (3D) sinonasal model was constructed from a healthy adult computed tomography (CT) scan and discretized into 6.6 million cells (mean volume, 0.008 mm³). CFD simulations were performed with ANSYS Fluent v16.0.0 software with transient and turbulent airflow (k-ω model). Two respiratory cycles (8 seconds) were simulated to assess pressure, velocity, wall shear stress, and particle residence time. The pressure gradients within the sinus cavities varied according to their place of connection to the main passage. Alternations in pressure gradients induced a slight pumping phenomenon close to the ostia but no movement of air was observed within the sinus cavities. Strong movements were observed within the inferior meatus during expiration, unlike during inspiration, and likewise in the olfactory cleft at the same time. Particle residence time was longer during expiration than inspiration due to nasal valve resistance, as if the expiratory phase was preparing the next inspiratory phase. Throughout expiration, some particles remained in contact with the lower turbinates. The posterior part of the olfactory cleft was gradually filled with particles that did not leave the nose at the next respiratory cycle. This pattern increased as the respiratory cycle was repeated. CFD is more efficient and reliable when the entire respiratory cycle is simulated and repeated to avoid losing information. © 2018 ARS-AAOA, LLC.
Increasing of horizontal velocity of particles leaving a belt conveyor
NASA Astrophysics Data System (ADS)
Tavares, Abraão; Faria, Allbens
2017-06-01
We investigate the transport of granular materials by a conveyor belt via numerical simulations. We report an unusual increase in the particles' horizontal velocity when they leave the belt and begin free fall. Using the Discrete Element Method, the mechanism underlying this phenomenon was investigated, and a study of how particle and system properties influence this effect was conducted.
Particle Diffusion in an Inhomogeneous Medium
ERIC Educational Resources Information Center
Bringuier, E.
2011-01-01
This paper is an elementary introduction to particle diffusion in a medium where the coefficient of diffusion varies with position. The introduction is aimed at third-year university courses. We start from a simple model of particles hopping on a discrete lattice, in one or more dimensions, and then take the continuous-space limit so as to obtain…
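The continuous-space limit of hopping on a discrete lattice can be illustrated with a one-dimensional master-equation sketch (illustrative, not from the paper): with the diffusivity evaluated at bond midpoints, the discrete update conserves probability and converges to the variable-coefficient diffusion equation dp/dt = d/dx(D(x) dp/dx).

```python
def step(p, D, dt=0.1, dx=1.0):
    """One explicit step of dp/dt = d/dx( D(x) dp/dx ) on a 1D lattice.
    Hopping rates are evaluated at bond midpoints; ends are reflecting,
    so total probability is conserved exactly."""
    n = len(p)
    flux = [0.0] * (n + 1)                   # flux[i] between sites i-1 and i
    for i in range(1, n):
        Dm = 0.5 * (D[i - 1] + D[i])         # bond-centered diffusivity
        flux[i] = -Dm * (p[i] - p[i - 1]) / dx
    return [p[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(n)]

# A point mass spreads symmetrically when D is uniform:
p = step([0.0, 0.0, 1.0, 0.0, 0.0], [1.0] * 5)
```

Stability requires D·dt/dx² ≤ 1/2 for this explicit scheme; making D position-dependent is then just a matter of changing the list passed in.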
Misleading inferences from discretization of empty spacetime: Snyder-noncommutativity case study
NASA Astrophysics Data System (ADS)
Amelino-Camelia, Giovanni; Astuti, Valerio
2015-06-01
Alternative approaches to the study of the quantum gravity problem are handling the role of spacetime very differently. Some are focusing on the analysis of one or another novel formulation of "empty spacetime", postponing to later stages the introduction of particles and fields, while other approaches assume that spacetime should only be an emergent entity. We here argue that recent progress in the covariant formulation of quantum mechanics suggests that empty spacetime is not physically meaningful. We illustrate our general thesis in the specific context of the noncommutative Snyder spacetime, which is also of some intrinsic interest, since hundreds of studies were devoted to its analysis. We show that empty Snyder spacetime, described in terms of a suitable kinematical Hilbert space, is discrete, but this is only a formal artifact: the discreteness leaves no trace on the observable properties of particles on the physical Hilbert space.
Coupled large eddy simulation and discrete element model of bedload motion
NASA Astrophysics Data System (ADS)
Furbish, D.; Schmeeckle, M. W.
2011-12-01
We couple a three-dimensional large eddy simulation of turbulence with a three-dimensional discrete element model of bedload particle motion. The large eddy simulation of the turbulent fluid is extended into the bed composed of non-moving particles by adding resistance terms to the Navier-Stokes equations in accordance with the Darcy-Forchheimer law. This allows the turbulent velocity and pressure fluctuations to penetrate the bed of discrete particles, and this addition of a porous zone results in turbulence structures above the bed that are similar to previous experimental and numerical results for hydraulically-rough beds. For example, we reproduce low-speed streaks that are less coherent than those over smooth beds due to the episodic outflow of fluid from the bed. Local resistance terms are also added to the Navier-Stokes equations to account for the drag of individual moving particles. The interaction of the spherical particles utilizes a standard DEM soft-sphere Hertz model. We use only a simple drag model to calculate the fluid forces on the particles. The model reproduces an exponential distribution of bedload particle velocities that we have found experimentally using high-speed video of a flat bed of moving sand in a recirculating water flume. The exponential distribution of velocity results from the motion of many particles that are nearly constantly in contact with other bed particles and come to rest after short distances, in combination with a relatively few particles that are entrained further above the bed and have velocities approaching that of the fluid. Entrainment and motion "hot spots" are evident that are not perfectly correlated with the local, instantaneous fluid velocity. Zones of the bed that have recently experienced motion are more susceptible to motion because of the local configuration of particle contacts.
The paradigm of a characteristic saltation hop length in riverine bedload transport has infused many aspects of geomorphic thought, including even bedrock erosion. In light of our theoretical, experimental, and numerical findings supporting the exponential distribution of bedload particle motion, the idea of a characteristic saltation hop should be scrapped or substantially modified.
Metriplectic integrators for the Landau collision operator
Kraus, Michael; Hirvijoki, Eero
2017-10-02
Here, we present a novel framework for addressing the nonlinear Landau collision integral in terms of finite element and other subspace projection methods. We employ the underlying metriplectic structure of the Landau collision integral and, using a Galerkin discretization for the velocity space, we transform the infinite-dimensional system into a finite-dimensional, time-continuous metriplectic system. Temporal discretization is accomplished using the concept of discrete gradients. The conservation of energy, momentum, and particle densities, as well as the production of entropy is demonstrated algebraically for the fully discrete system. Due to the generality of our approach, the conservation properties and the monotonic behavior of entropy are guaranteed for finite element discretizations, in general, independently of the mesh configuration.
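The paper's setting is the Landau operator, but the discrete-gradient idea it relies on can be illustrated on a toy system: for a quadratic energy, the implicit midpoint rule coincides with a discrete-gradient method, so the energy is conserved exactly by the fully discrete scheme (a sketch under that assumption, not the paper's discretization):

```python
def midpoint_step(q, p, dt):
    """Implicit midpoint step for the harmonic oscillator H = (p^2 + q^2)/2,
    solved in closed form. For this quadratic H the midpoint rule is a
    discrete-gradient method: each step is an exact rotation, so the
    discrete energy q^2 + p^2 is conserved up to round-off."""
    a = 0.5 * dt
    d = 1.0 + a * a
    return (((1 - a * a) * q + 2 * a * p) / d,
            ((1 - a * a) * p - 2 * a * q) / d)

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = midpoint_step(q, p, 0.1)
# q*q + p*p remains 1.0 up to round-off, regardless of the step count
```

An explicit method such as forward Euler would instead inflate q² + p² by a factor (1 + dt²) per step, which is exactly the kind of drift the metriplectic construction is designed to rule out algebraically.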
Discretization vs. Rounding Error in Euler's Method
ERIC Educational Resources Information Center
Borges, Carlos F.
2011-01-01
Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
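The trade-off is easy to reproduce. A short forward-Euler sketch for y' = y shows the discretization error shrinking roughly linearly in the stepsize; in double precision, rounding error only dominates at far smaller steps than shown here:

```python
import math

def euler(f, y0, t0, t1, n):
    """Forward Euler with n steps; returns the approximation to y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1  =>  y(1) = e; global discretization error is O(h)
for n in (10, 100, 1000):
    err = abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, n) - math.e)
    print(n, err)
```

Tenfold more steps cuts the error roughly tenfold, until accumulated rounding (one rounding per step) eventually reverses the trend.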
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, M.; Amendt, P.A.; London, R.A.
1997-03-04
The objective is to study retinal injury by subnanosecond laser pulses absorbed in the retinal pigment epithelium (RPE) cells. The absorption centers in the RPE cell are melanosomes of order 1 μm radius. Each melanosome includes many melanin particles of 10-15 nm radius, which are the local absorbers of the laser light and generate a discrete structure of hot spots. This work uses the hydrodynamic code LATIS (LAser-TISsue interaction modeling) and a water equation of state to first simulate the small melanin particle of 15 nm responsible for initiating the hot spot and the pressure field. An average melanosome of 1 μm scale is next simulated. Supersonic shocks and fast vapor bubbles are generated in both cases: the melanin scale and the melanosome scale. The hot-spot structure induces a higher shock wave pressure than a uniform deposition of laser energy. It is found that an absorption coefficient of 6000-8000 cm⁻¹ can explain the enhanced shock wave emitted by the melanosome. An experimental and theoretical effort should be considered to identify the mechanism for generating shock wave enhancement.
NASA Astrophysics Data System (ADS)
Petit, H. A.; Irassar, E. F.; Barbosa, M. R.
2018-01-01
Manufactured sands are particulate materials obtained as a by-product of rock crushing. Particle sizes in the sand can be as high as 6 mm and as low as a few microns. The concrete industry has been increasingly using these sands as fine aggregates to replace natural sands. The main shortcoming is the excess of particles smaller than 0.075 mm (dust). This problem has been traditionally solved by a washing process. Air classification is being studied to replace the washing process and avoid the use of water. The complex classification process can only be understood with the aid of CFD-DEM simulations. This paper evaluates the applicability of a cross-flow air classifier to reduce the amount of dust in manufactured sands. Computational fluid dynamics (CFD) and discrete element modelling (DEM) were used for the assessment. Results show that the correct classification setup improves the size distribution of the raw materials. The cross-flow air classification is found to be influenced by the particle size distribution and the turbulence inside the chamber. The classifier can be re-designed to work at low inlet velocities to produce manufactured sand for the concrete industry.
Yang, Ping; Kattawar, George W; Liou, Kuo-Nan; Lu, Jun Q
2004-08-10
Two grid configurations can be employed to implement the finite-difference time-domain (FDTD) technique in a Cartesian system. One configuration defines the electric and magnetic field components at the cell edges and cell-face centers, respectively, whereas the other reverses these definitions. These two grid configurations differ in terms of implication on the electromagnetic boundary conditions if the scatterer in the FDTD computation is a dielectric particle. The permittivity has an abrupt transition at the cell interface if the dielectric properties of two adjacent cells are not identical. Similarly, the discontinuity of permittivity is also observed at the edges of neighboring cells that are different in terms of their dielectric constants. We present two FDTD schemes for light scattering by dielectric particles to overcome the above-mentioned discontinuity on the basis of the electromagnetic boundary conditions for the two Cartesian grid configurations. We also present an empirical approach to accelerate the convergence of the discrete Fourier transform to obtain the field values in the frequency domain. As a new application of the FDTD method, we investigate the scattering properties of multibranched bullet-rosette ice crystals at both visible and thermal infrared wavelengths.
Computational methods for diffusion-influenced biochemical reactions.
Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G
2007-08-01
We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of reactants. Using numerical simulations of reversible binding of a pair of molecules we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found at the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
Quantum robots plus environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-07-23
A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far, oracles, databases, and quantum registers, are seen to be special cases of environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single-time-step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.
Augmenting regional and targeted delivery in the pulmonary acinus using magnetic particles
Ostrovski, Yan; Hofemeier, Philipp; Sznitman, Josué
2016-01-01
Background It has been hypothesized that by coupling magnetic particles to inhaled therapeutics, the ability to target specific lung regions (eg, only acinar deposition), or even more so specific points in the lung (eg, tumor targeting), can be substantially improved. Although this method has been proven feasible in seminal in vivo studies, there is still a wide gap in our basic understanding of the transport phenomena of magnetic particles in the pulmonary acinar regions of the lungs, including particle dynamics and deposition characteristics. Methods Here, we present computational fluid dynamics-discrete element method simulations of magnetically loaded microdroplet carriers in an anatomically inspired, space-filling, multi-generation acinar airway tree. Breathing motion is modeled by kinematic sinusoidal displacements of the acinar walls, during which droplets are inhaled and exhaled. Particle dynamics are governed by viscous drag, gravity, and Brownian motion as well as the external magnetic force. In particular, we examined the roles of droplet diameter and volume fraction of magnetic material within the droplets under two different breathing maneuvers. Results and discussion Our results indicate that by using magnetic-loaded droplets, 100% of the particles that enter are deposited in the acinar region. This is consistent across all particle sizes investigated (ie, 0.5–3.0 µm). This is best achieved through a deep inhalation maneuver combined with a breath-hold. Particles are found to penetrate deep into the acinus and disperse well, while the required amount of magnetic material is maintained low (<2.5%). Although particles in the size range of ~90–500 nm typically show the lowest deposition fractions, our results suggest that this feature could be leveraged to augment targeted delivery. PMID:27547034
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSOs (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using tested benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
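The S-PSO encoding is specific to the paper, but the baseline it is compared against, binary PSO, can be sketched in a few lines. The OneMax objective and all parameter values below are illustrative, not the paper's benchmarks:

```python
import math, random

def binary_pso(fitness, n_bits, n_particles=20, iters=50, seed=1):
    """Minimal binary PSO (Kennedy-Eberhart style): velocities are real,
    and each bit is resampled through a sigmoid of its velocity."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    P = [x[:] for x in X]                        # personal-best positions
    pf = [fitness(x) for x in X]                 # personal-best fitnesses
    g = max(range(n_particles), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                V[i][d] += 2.0 * r1 * (P[i][d] - X[i][d]) \
                         + 2.0 * r2 * (G[d] - X[i][d])
                V[i][d] = max(-4.0, min(4.0, V[i][d]))      # clamp velocity
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            f = fitness(X[i])
            if f > pf[i]:
                P[i], pf[i] = X[i][:], f
                if f > gf:
                    G, gf = X[i][:], f
    return G, gf

# OneMax: maximize the number of 1-bits in a 16-bit string
best, fit = binary_pso(sum, 16)
```

Set-based variants such as the paper's S-PSO replace the bit-vector position with a set of discrete elements and redefine velocity accordingly, but the personal-best/global-best update loop has the same shape.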
Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams
NASA Astrophysics Data System (ADS)
Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping
2018-06-01
A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in good agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and of foams composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
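The recommended trapezoidal variant can be illustrated on the simplest ODE, dx/dt = -θx, where the discretization-based estimating equation has a closed-form least-squares solution. The model and all names are illustrative, not the paper's HIV model:

```python
import math

def estimate_decay_rate(t, x):
    """Estimate theta in dx/dt = -theta * x from samples x(t_i) via the
    trapezoidal discretization
        x_{i+1} - x_i = -theta * (h/2) * (x_i + x_{i+1}),
    solved for theta by closed-form least squares."""
    num = den = 0.0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        z = 0.5 * h * (x[i] + x[i + 1])      # regressor
        y = x[i] - x[i + 1]                   # response
        num += z * y
        den += z * z
    return num / den

# Noise-free samples of x(t) = exp(-0.7 t); the estimate lands near 0.7
ts = [0.1 * i for i in range(50)]
xs = [math.exp(-0.7 * t) for t in ts]
theta_hat = estimate_decay_rate(ts, xs)
```

With noisy data one would first smooth the samples (the paper uses penalized splines) before plugging them into the discretization formula; the least-squares step itself is unchanged.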
Numerical uncertainty in computational engineering and physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M
2009-01-01
Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of the continuous equations, the solution of the modified equations and the discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
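The mesh-refinement reasoning above can be made concrete with a toy verification exercise; the central-difference operator and the sin test function are assumptions chosen for illustration (a one-line derivative computation stands in for a full PDE solve):

```python
import math

def central_diff(f, x, h):
    # second-order central difference, the "discrete solution" here
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Three systematically refined "meshes" for d/dx sin(x) at x = 1.
f, x = math.sin, 1.0
d1 = central_diff(f, x, 0.1)
d2 = central_diff(f, x, 0.05)
d4 = central_diff(f, x, 0.025)

# Observed order of accuracy from successive refinements (should be ~2).
p = math.log2(abs(d1 - d2) / abs(d2 - d4))

# Richardson-style estimate of the remaining numerical uncertainty of d4,
# available without knowing the exact solution.
uncertainty = abs(d2 - d4) / (2.0**p - 1.0)
```

The point mirrors the abstract: the refinement study both verifies the expected convergence rate and bounds the discretization error, so the numerical uncertainty can enter the overall uncertainty budget.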
Discrete Event Simulation of Optical Switch Matrix Performance in Computer Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Poole, Stephen W
2013-01-01
In this paper, we present the application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or if the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
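The event-queue paradigm the abstract refers to can be illustrated with a toy simulator; this is a generic sketch using Python's heapq, not OMNEST's API, and the single-port FIFO switch model is an assumption for illustration:

```python
import heapq

def simulate_switch(arrivals, service_time):
    """Minimal discrete-event loop: a single-port switch modeled as a
    FIFO server. Events are (time, kind) tuples popped in time order;
    returns each packet's departure time."""
    events = [(t, "arrival") for t in arrivals]
    heapq.heapify(events)
    free_at, departures = 0.0, []
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            start = max(t, free_at)              # wait if the port is busy
            free_at = start + service_time
            heapq.heappush(events, (free_at, "departure"))
        else:
            departures.append(t)
    return departures

deps = simulate_switch([0.0, 0.1, 0.2], service_time=0.5)  # -> [0.5, 1.0, 1.5]
```

Real DES tool chains add hierarchical modules, message passing, and statistics collection on top of exactly this pop-advance-push loop.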
Geometric Nonlinear Computation of Thin Rods and Shells
NASA Astrophysics Data System (ADS)
Grinspun, Eitan
2011-03-01
We develop simple, fast numerical codes for the dynamics of thin elastic rods and shells, by exploiting the connection between physics, geometry, and computation. By building a discrete mechanical picture from the ground up, mimicking the axioms, structures, and symmetries of the smooth setting, we produce numerical codes that not only are consistent in a classical sense, but also reproduce qualitative, characteristic behavior of a physical system, such as exact preservation of conservation laws, even for very coarse discretizations. As two recent examples, we present discrete computational models of elastic rods and shells, with straightforward extensions to the viscous setting. Even at coarse discretizations, the resulting simulations capture characteristic geometric instabilities. The numerical codes we describe are used in experimental mechanics, cinema, and consumer software products. This is joint work with Miklós Bergou, Basile Audoly, Max Wardetzky, and Etienne Vouga. This research is supported in part by the Sloan Foundation, the NSF, Adobe, Autodesk, Intel, the Walt Disney Company, and Weta Digital.
Discrete Mathematics and Its Applications
ERIC Educational Resources Information Center
Oxley, Alan
2010-01-01
The article gives ideas that lecturers of undergraduate Discrete Mathematics courses can use in order to make the subject more interesting for students and encourage them to undertake further studies in the subject. It is possible to teach Discrete Mathematics with little or no reference to computing. However, students are more likely to be…
Current Density and Continuity in Discretized Models
ERIC Educational Resources Information Center
Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard
2010-01-01
Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…
Evolution of Particle Size Distributions in Fragmentation Over Time
NASA Astrophysics Data System (ADS)
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. 
The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.
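The discrete cascade described above can be sketched as a deterministic expectation; the split into 8 half-size pieces follows Turcotte's cubic fragmentation model, and the function is our illustration, not the authors' closed-form continuous solution:

```python
def turcotte_cascade(p, depth):
    """Expected count of surviving (unbroken) fragments at each scale,
    for one initial block that fractures into 8 half-size pieces with
    probability p at every scale (after Turcotte, 1986)."""
    counts = []
    broken = 1.0                              # expected number of fracturing blocks
    for _ in range(depth):
        produced = 8.0 * broken               # fragments created at this scale
        counts.append(produced * (1.0 - p))   # fraction that survives unbroken
        broken = produced * p                 # the rest cascade to the next scale
    return counts

counts = turcotte_cascade(0.5, 5)
ratio = counts[1] / counts[0]                 # scale-invariant ratio 8p = 4
```

The constant ratio 8p between successive scales is precisely the power-law (fractal) behavior, with dimension D = log(8p)/log 2; here p = 0.5 gives D = 2.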
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed at each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to evolve to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification uses a probability transition matrix mechanism. To handle the multi-objective problem, we use a Pareto-optimal approach (MPSO). The results of MPSO are better than those of the basic PSO because the MPSO solution set has a higher probability of finding the optimal solution. Moreover, the MPSO solution set is closer to the optimal solution.
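Any such metaheuristic needs a fast objective evaluation for each candidate permutation. A minimal makespan computation for a plain permutation flow shop can be sketched as follows; the function name and toy data are ours, and the hybrid and limited-wait constraints of the paper are deliberately omitted:

```python
def makespan(sequence, proc):
    """Completion time of the last job on the last machine for a
    permutation flow shop, where proc[j][m] is job j's processing
    time on machine m."""
    n_machines = len(proc[0])
    done = [0.0] * n_machines              # completion times, rolled over jobs
    for j in sequence:
        for m in range(n_machines):
            prev_machine = done[m - 1] if m > 0 else 0.0
            # classic recurrence: C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]
            done[m] = max(done[m], prev_machine) + proc[j][m]
    return done[-1]

proc = [[2.0, 3.0], [1.0, 2.0]]            # 2 jobs x 2 machines (toy data)
m01 = makespan([0, 1], proc)               # job 0 first -> makespan 7.0
m10 = makespan([1, 0], proc)               # job 1 first -> makespan 6.0
```

In a discrete PSO, each particle encodes a job permutation, and this evaluation is called once per particle per iteration, so its O(jobs × machines) cost dominates the run time.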
Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking
NASA Astrophysics Data System (ADS)
Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.
2008-12-01
Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. 
Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment The authors are grateful to SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
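The core of time domain particle tracking, summing sampled segment residence times into arrival times, can be sketched generically; the lognormal residence-time distributions and the two-segment pathway below are illustrative assumptions, not the authors' semi-analytical Lagrangian distributions:

```python
import random

def arrival_times(n_particles, segment_samplers):
    """Time domain particle tracking in miniature: each particle's
    arrival time is the sum of independently sampled residence times,
    one per pathway segment. segment_samplers are callables returning
    one residence-time sample each."""
    return [sum(s() for s in segment_samplers) for _ in range(n_particles)]

random.seed(0)
# Two pathway segments with mean travel times near 1.0 and 2.1, modeled
# here (as an assumption) with lognormal residence times to mimic the
# combined effect of advection and longitudinal dispersion.
samplers = [lambda: random.lognormvariate(0.0, 0.3),
            lambda: random.lognormvariate(0.7, 0.3)]
times = arrival_times(5000, samplers)
mean_t = sum(times) / len(times)
```

The resulting arrival-time set is exactly what the abstract's kernel post-processing step would smooth into a mass-discharge (breakthrough) curve.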
Advances in Quantum Trajectory Approaches to Dynamics
NASA Astrophysics Data System (ADS)
Askar, Attila
2001-03-01
The quantum fluid dynamics (QFD) formulation is based on the separation of the amplitude and phase of the complex wave function in Schrodinger's equation. The approach leads to conservation laws for an equivalent "gas continuum". The Lagrangian [1] representation corresponds to following the particles of the fluid continuum, i. e. calculating "quantum trajectories". The Eulerian [2] representation on the other hand, amounts to observing the dynamics of the gas continuum at the points of a fixed coordinate frame. The combination of several factors leads to a most encouraging computational efficiency. QFD enables the numerical analysis to deal with near monotonic amplitude and phase functions. The Lagrangian description concentrates the computation effort to regions of highest probability as an optimal adaptive grid. The Eulerian representation allows the study of multi-coordinate problems as a set of one-dimensional problems within an alternating direction methodology. An explicit time integrator limits the increase in computational effort with the number of discrete points to linear. Discretization of the space via local finite elements [1,2] and global radial functions [3] will be discussed. Applications include wave packets in four-dimensional quadratic potentials and two coordinate photo-dissociation problems for NOCl and NO2. [1] "Quantum fluid dynamics (QFD) in the Lagrangian representation with applications to photo-dissociation problems", F. Sales, A. Askar and H. A. Rabitz, J. Chem. Phys. 11, 2423 (1999) [2] "Multidimensional wave-packet dynamics within the fluid dynamical formulation of the Schrodinger equation", B. Dey, A. Askar and H. A. Rabitz, J. Chem. Phys. 109, 8770 (1998) [3] "Solution of the quantum fluid dynamics equations with radial basis function interpolation", Xu-Guang Hu, Tak-San Ho, H. A. Rabitz and A. Askar, Phys. Rev. E. 61, 5967 (2000)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noyes, H.P.; Starson, S.
1991-03-01
Discrete physics, because it replaces time evolution generated by the energy operator with a global bit-string generator (program universe) and replaces "fields" with the relativistic Wheeler-Feynman "action at a distance," allows the consistent formulation of the concept of signed gravitational charge for massive particles. The resulting prediction made by this version of the theory is that free anti-particles near the surface of the earth will "fall" up with the same acceleration that the corresponding particles fall down. So far as we can see, no current experimental information is in conflict with this prediction of our theory. The experimentum crucis will be one of the anti-proton or anti-hydrogen experiments at CERN. Our prediction should be much easier to test than the small effects which those experiments are currently designed to detect or bound. 23 refs.
NASA Astrophysics Data System (ADS)
Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin
2018-03-01
Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., the I interaction and the π-phase interaction, are introduced, and the choice of the I or π-phase interaction at each iteration depends on a message bit. In this paper, we propose an efficient QHF by dense coding of coin operators in discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled or better, and only one particle is needed, so two-particle interactions are unnecessary and quantum resources are saved. This suggests applying the dense coding technique to quantum cryptographic protocols, especially to applications with restricted quantum resources.
Fast discrete cosine transform structure suitable for implementation with integer computation
NASA Astrophysics Data System (ADS)
Jeong, Yeonsik; Lee, Imgeun
2000-10-01
The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with the property of reduced multiplication stages and fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that could occur in the integer computation.
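As a reference point for such fast structures, the plain O(N²) DCT-II that they factorize can be written directly; the function below is a generic, unnormalized textbook definition, not the proposed reduced-multiplication algorithm:

```python
import math

def dct_ii(x):
    """Direct O(N^2) DCT-II (unnormalized). A fast algorithm factorizes
    this sum into few-multiplication stages; the direct form serves as
    the reference when verifying such a structure."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

# A constant signal has all its energy in the k = 0 (DC) coefficient.
X = dct_ii([1.0, 1.0, 1.0, 1.0])
```

Pushing multiplications to the final stage, as the abstract describes, matters in integer implementations because each intermediate multiply rounds, and later rounding errors have fewer stages through which to propagate.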
Computed Responses of Several Aircraft to Atmospheric Turbulence and Discrete Wind Shears
NASA Technical Reports Server (NTRS)
Jewell, W. F.; Stapleford, R. L.; Heffley, R. K.
1977-01-01
The computed RMS and peak responses due to atmospheric turbulence and discrete wind shears, respectively, are presented for several aircraft in different flight conditions. The responses are presented with and without the effects of a typical second order washout filter. A complete set of dimensional stability derivatives for each aircraft/flight condition combination evaluated is also presented.
Wachs, Israel E.; Cai, Yeping
2002-01-01
Preparing an aldehyde from an alcohol by contacting the alcohol in the presence of oxygen with a catalyst prepared by contacting an intimate mixture containing metal oxide support particles and particles of a catalytically active metal oxide from Groups VA, VIA, or VIIA, with a gaseous stream containing an alcohol to cause metal oxide from the discrete catalytically active metal oxide particles to migrate to the metal oxide support particles and to form a monolayer of catalytically active metal oxide on said metal oxide support particles.
Li, Tongqing; Peng, Yuxing; Zhu, Zhencai; Zou, Shengyong; Yin, Zixin
2017-01-01
To predict what actually happens inside mills, the contact parameters of iron ore particles for discrete element method (DEM) simulations should be determined accurately. To allow the irregular shape to be accurately represented, the sphere clump method was employed in modelling the particle shape. The inter-particle contact parameters were systematically altered whilst the contact parameters between the particle and wall were arbitrarily assumed, in order to purely assess their impact on the angle of repose for the mono-sized iron ore particles. Results show that varying the restitution coefficient over the range considered does not lead to any obvious difference in the angle of repose, but the angle of repose is strongly sensitive to the rolling/static friction coefficient. The impacts of the rolling and static friction coefficients on the angle of repose are interrelated, and increasing the inter-particle rolling/static friction coefficient evidently increases the angle of repose. However, the impact of the static friction coefficient is more profound than that of the rolling friction coefficient. Finally, a predictive equation is established and a very close agreement between the predicted and simulated angle of repose is attained. This predictive equation can greatly shorten the calibration time for the inter-particle contact parameters, which can help in the implementation of DEM simulations. PMID:28772880
Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates
NASA Technical Reports Server (NTRS)
Walton, Otis R.; Johnson, Scott M.
2010-01-01
The charge-spot technique for modeling the static electric forces acting between charged fine particles entails treating electric charges on individual particles as small sets of discrete point charges, located near their surfaces. This is in contrast to existing models, which assume a single charge per particle. The charge-spot technique more accurately describes the forces, torques, and moments that act on triboelectrically charged particles, especially image-charge forces acting near conducting surfaces. The discrete element method (DEM) simulation uses a truncation range to limit the number of near-neighbor charge spots via a shifted and truncated potential Coulomb interaction. The model can be readily adapted to account for induced dipoles in uncharged particles (and thus dielectrophoretic forces) by allowing two charge spots of opposite signs to be created in response to an external electric field. To account for virtual overlap during contacts, the model can be set to automatically scale down the effective charge in proportion to the amount of virtual overlap of the charge spots. This can be accomplished by mimicking the behavior of two real overlapping spherical charge clouds, or with other approximate forms. The charge-spot method much more closely resembles real non-uniform surface charge distributions that result from tribocharging than simpler approaches, which just assign a single total charge to a particle. With the charge-spot model, a single particle may have a zero net charge, but still have both positive and negative charge spots, which could produce substantial forces on the particle when it is close to other charges, when it is in an external electric field, or when near a conducting surface. Since the charge-spot model can contain any number of charges per particle, it can be used with only one or two charge spots per particle for simulating charging from solar wind bombardment, or with several charge spots for simulating triboelectric charging.
Adhesive image-charge forces acting on charged particles touching conducting surfaces can be up to 50 times stronger if the charge is located in discrete spots on the particle surface instead of being distributed uniformly over the surface of the particle, as is assumed by most other models. Besides being useful in modeling particulates in space and distant objects, this modeling technique is useful for electrophotography (used in copiers) and in simulating the effects of static charge in the pulmonary delivery of fine dry powders.
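The pairwise charge-spot summation can be sketched as follows; the function signature, the spot representation, and the hard cutoff are our assumptions (the article describes a shifted and truncated potential, which is simplified here for brevity):

```python
import math

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def spot_force(spots_a, spots_b, cutoff):
    """Net Coulomb force on particle A from particle B, summing over the
    discrete charge spots (q, x, y, z) of each particle and skipping
    spot pairs beyond the truncation range."""
    fx = fy = fz = 0.0
    for qa, ax, ay, az in spots_a:
        for qb, bx, by, bz in spots_b:
            dx, dy, dz = ax - bx, ay - by, az - bz
            r2 = dx * dx + dy * dy + dz * dz
            if r2 > cutoff * cutoff:
                continue                            # truncated interaction
            f = K * qa * qb / (r2 * math.sqrt(r2))  # F_vec = K qa qb r_vec / r^3
            fx += f * dx
            fy += f * dy
            fz += f * dz
    return fx, fy, fz

# Single +1 nC and -1 nC spots 1 mm apart: attraction of ~8.99e-3 N.
fx, fy, fz = spot_force([(1e-9, 0.0, 0.0, 0.0)],
                        [(-1e-9, 1e-3, 0.0, 0.0)], cutoff=1.0)
```

With several spots per particle, the same double loop yields the net torque contributions as well, since each spot force has a distinct point of application on the particle surface.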
China, Swarup; Scarnato, Barbara; Owen, Robert C.; ...
2015-01-14
The radiative properties of soot particles depend on their morphology and mixing state, but their evolution during transport is still elusive. In this paper, we report observations from an electron microscopy analysis of individual particles transported in the free troposphere over long distances to the remote Pico Mountain Observatory in the Azores in the North Atlantic. Approximately 70% of the soot particles were highly compact and, of those, 26% were thinly coated. Discrete dipole approximation simulations indicate that this compaction results in an increase in soot single scattering albedo by a factor of ≤2.17. The top-of-the-atmosphere direct radiative forcing is typically smaller for highly compact than for mass-equivalent lacy soot. Lastly, the forcing estimated using Mie theory is within 12% of the forcing estimated using the discrete dipole approximation for a high surface albedo, implying that Mie calculations may provide a reasonable approximation for compact soot above remote marine clouds.
3D DEM analyses of the 1963 Vajont rock slide
NASA Astrophysics Data System (ADS)
Boon, Chia Weng; Houlsby, Guy; Utili, Stefano
2013-04-01
The 1963 Vajont rock slide has been modelled using the distinct element method (DEM). The open-source DEM code, YADE (Kozicki & Donzé, 2008), was used together with the contact detection algorithm proposed by Boon et al. (2012). The critical sliding friction angle at the slide surface was sought using a strength reduction approach. A shear-softening contact model was used to model the shear resistance of the clayey layer at the slide surface. The results suggest that the critical sliding friction angle can be conservative if stability analyses are calculated based on the peak friction angles. The water table was assumed to be horizontal and the pore pressure at the clay layer was assumed to be hydrostatic. The influence of reservoir filling was marginal, increasing the sliding friction angle by only 1.6˚. The results of the DEM calculations were found to be sensitive to the orientations of the bedding planes and cross-joints. Finally, the failure mechanism was investigated and arching was found to be present at the bend of the chair-shaped slope. References Boon C.W., Houlsby G.T., Utili S. (2012). A new algorithm for contact detection between convex polygonal and polyhedral particles in the discrete element method. Computers and Geotechnics, vol 44, 73-82, doi.org/10.1016/j.compgeo.2012.03.012. Kozicki, J., & Donzé, F. V. (2008). A new open-source software developed for numerical simulations using discrete modeling methods. Computer Methods in Applied Mechanics and Engineering, 197(49-50), 4429-4443.
Stencil computations for PDE-based applications with examples from DUNE and hypre
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engwer, C.; Falgout, R. D.; Yang, U. M.
2017-02-24
Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.
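A minimal matrix-free stencil application, in the spirit described above though far simpler than what hypre or DUNE provide, might look like this (the function and grid setup are our illustration):

```python
def apply_laplacian(u, h):
    """Matrix-free application of the 5-point Laplacian stencil to the
    interior of a 2D grid u (list of lists of values), with the boundary
    left untouched. The stencil realizes the discretized operator
    on the fly, with no sparse matrix stored."""
    n, m = len(u), len(u[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (u[i - 1][j] + u[i + 1][j] +
                         u[i][j - 1] + u[i][j + 1] -
                         4.0 * u[i][j]) / (h * h)
    return out

# For u = x^2 the discrete Laplacian is exactly 2 at interior points,
# since second differences of a quadratic are exact.
h = 0.5
u = [[(i * h) ** 2 for _ in range(5)] for i in range(5)]
lap = apply_laplacian(u, h)
```

The two features the abstract highlights are visible here: the loop structure mirrors the structured discretization, and memory holds only the fields, never the operator.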
Derivation and computation of discrete-delay and continuous-delay SDEs in mathematical biology.
Allen, Edward J
2014-06-01
Stochastic versions of several discrete-delay and continuous-delay differential equations, useful in mathematical biology, are derived from basic principles carefully taking into account the demographic, environmental, or physiological randomness in the dynamic processes. In particular, stochastic delay differential equation (SDDE) models are derived and studied for Nicholson's blowflies equation, Hutchinson's equation, an SIS epidemic model with delay, bacteria/phage dynamics, and glucose/insulin levels. Computational methods for approximating the SDDE models are described. Comparisons between computational solutions of the SDDEs and independently formulated Monte Carlo calculations support the accuracy of the derivations and of the computational methods.
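One common way to approximate such discrete-delay SDDEs numerically is Euler-Maruyama with a stored history buffer. The sketch below uses a stochastic Hutchinson equation with multiplicative noise as an assumed form (the paper's derived noise structure may differ) and checks the deterministic limit sigma = 0:

```python
import math
import random

def hutchinson_sdde(r, K, tau, sigma, x0, T, dt, rng):
    """Euler-Maruyama for the (assumed) stochastic Hutchinson equation
        dx = r*x*(1 - x(t - tau)/K) dt + sigma*x dW.
    The delayed state is read from the stored trajectory, lag = tau/dt
    steps back; the history on [-tau, 0] is taken constant at x0."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)                      # constant initial history
    for _ in range(int(round(T / dt))):
        x, x_delayed = xs[-1], xs[-1 - lag]
        dW = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        xs.append(x + r * x * (1.0 - x_delayed / K) * dt + sigma * x * dW)
    return xs

# Deterministic check: with sigma = 0 and r*tau < pi/2, the delayed
# logistic equation settles to the carrying capacity K = 1.
rng = random.Random(1)
path = hutchinson_sdde(r=0.5, K=1.0, tau=1.0, sigma=0.0,
                       x0=0.1, T=40.0, dt=0.01, rng=rng)
```

The same loop with sigma > 0 produces one Monte Carlo sample path, matching the abstract's strategy of comparing SDDE solutions against independently formulated Monte Carlo calculations.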
NASA Technical Reports Server (NTRS)
1979-01-01
The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high-fineness-ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software cannot be used efficiently. An efficient program was written and substantial savings achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.