An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1
NASA Technical Reports Server (NTRS)
Shivarama, Ravishankar; Fahrenthold, Eric P.
2004-01-01
A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.
Simulation tools for particle-based reaction-diffusion dynamics in continuous space
2014-01-01
Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, would require different levels of modeling detail such as particle diffusion, geometrical confinement, particle volume exclusion or particle-particle interaction potentials. Higher levels of detail usually correspond to an increased number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating on continuous space. Four nested levels of modeling detail are identified that capture increasing amounts of detail. Their applicability to different biological questions is discussed, ranging from plain diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778
OpenFOAM Modeling of Particle Heating and Acceleration in Cold Spraying
NASA Astrophysics Data System (ADS)
Leitz, K.-H.; O'Sullivan, M.; Plankensteiner, A.; Kestler, H.; Sigl, L. S.
2018-01-01
In cold spraying, a powder material is accelerated and heated in the gas flow of a supersonic nozzle to velocities and temperatures that are sufficient to obtain cohesion of the particles to a substrate. The deposition efficiency of the particles is significantly determined by their velocity and temperature. Particle velocity correlates with the amount of kinetic energy that is converted to plastic deformation and thermal heating. The initial particle temperature significantly influences the mechanical properties of the particle. Velocity and temperature of the particles depend nonlinearly on the pressure and temperature of the gas at the nozzle entrance. In this contribution, a simulation model based on the reactingParcelFoam solver of OpenFOAM is presented and applied for an analysis of particle velocity and temperature in the cold spray nozzle. The model combines a compressible description of the gas flow in the nozzle with Lagrangian particle tracking. The predictions of the simulation model are verified against an analytical description of the gas flow and of the particle acceleration and heating in the nozzle. Based on experimental data, the drag model of Plessis and Masliyah is identified as best suited for OpenFOAM modeling of particle heating and acceleration in cold spraying.
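As a rough illustration of the Lagrangian particle-tracking idea described above (not the paper's OpenFOAM setup), the sketch below integrates a single particle's momentum and energy along a prescribed gas profile. The Schiller-Naumann drag correction, Ranz-Marshall Nusselt number, copper-like material properties, and the constant gas velocity/temperature profile are all illustrative assumptions, not taken from the study.

```python
# Hedged sketch: 1-D Lagrangian tracking of one particle in a cold spray nozzle.
import numpy as np

def track_particle(x_end=0.1, dt=1e-7, dp=25e-6, rho_p=8900.0, cp_p=385.0,
                   gas_u=lambda x: 600.0, gas_T=lambda x: 500.0,
                   rho_g=1.0, mu_g=2.5e-5, k_g=0.06, Pr=0.7):
    m_p = rho_p * np.pi / 6.0 * dp**3            # particle mass
    A_p = np.pi / 4.0 * dp**2                    # frontal area
    x, u_p, T_p = 0.0, 0.0, 300.0
    for _ in range(5_000_000):                   # safety cap on time steps
        u_g, T_g = gas_u(x), gas_T(x)
        u_rel = u_g - u_p
        Re = rho_g * abs(u_rel) * dp / mu_g
        Cd = 24.0 / max(Re, 1e-6) * (1.0 + 0.15 * Re**0.687)   # Schiller-Naumann drag
        F = 0.5 * rho_g * Cd * A_p * abs(u_rel) * u_rel        # drag force
        Nu = 2.0 + 0.6 * Re**0.5 * Pr**(1.0 / 3.0)             # Ranz-Marshall correlation
        Q = (Nu * k_g / dp) * np.pi * dp**2 * (T_g - T_p)      # convective heating
        u_p += F / m_p * dt                      # momentum update
        T_p += Q / (m_p * cp_p) * dt             # energy update
        x += u_p * dt
        if x >= x_end:
            break
    return u_p, T_p                              # particle state at the nozzle exit

# print(track_particle())  # exit velocity and temperature for the toy gas profile
```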
NASA Astrophysics Data System (ADS)
Conny, Joseph M.; Ortiz-Montalvo, Diana L.
2017-09-01
We show the effect of composition heterogeneity and shape on the optical properties of urban dust particles based on the three-dimensional spatial and optical modeling of individual particles. Using scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDX) and focused ion beam (FIB) tomography, spatial models of particles collected in Los Angeles and Seattle accounted for surface features, inclusions, and voids, as well as overall composition and shape. Using voxel data from the spatial models and the discrete dipole approximation method, we report extinction efficiency, asymmetry parameter, and single-scattering albedo (SSA). Test models of the particles involved (1) the particle's actual morphology as a single homogeneous phase and (2) simple geometric shapes (spheres, cubes, and tetrahedra) depicting composition homogeneity or heterogeneity (with multiple spheres). Test models were compared with a reference model, which included the particle's actual morphology and heterogeneity based on SEM/EDX and FIB tomography. Results show particle shape to be a more important factor for determining extinction efficiency than accounting for individual phases in a particle, regardless of whether absorption or scattering dominated. In addition to homogeneous models with the particles' actual morphology, tetrahedral geometric models provided better extinction accuracy than spherical or cubic models. For iron-containing heterogeneous particles, the asymmetry parameter and SSA varied with the composition of the iron-containing phase, even if the phase was <10% of the particle volume. For particles containing loosely held phases with widely varying refractive indexes (i.e., exhibiting "severe" heterogeneity), only models that account for heterogeneity may sufficiently determine SSA.
Pre- and Post-Processing Tools to Create and Characterize Particle-Based Composite Model Structures
2017-11-01
ARL-TR-8213 ● NOV 2017 ● US Army Research Laboratory
Collision Models for Particle Orbit Code on SSX
NASA Astrophysics Data System (ADS)
Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.
2011-10-01
Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.
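For readers unfamiliar with the Takizuka-Abe approach cited above, the following hedged sketch shows one binary-collision step between a test and a field particle. The collision parameter nu_dt (the variance of tan(θ/2) accumulated over the time step) is treated here as a given input rather than computed from plasma quantities, and the function name is hypothetical.

```python
# Schematic Takizuka-Abe-style binary Coulomb collision step (simplified).
import numpy as np

def binary_collision(v1, v2, m1, m2, nu_dt, rng=np.random.default_rng()):
    """Scatter the relative velocity of a pair; returns updated (v1, v2)."""
    u = v1 - v2                                   # relative velocity
    u_mag = np.linalg.norm(u)
    delta = rng.normal(0.0, np.sqrt(nu_dt))       # delta = tan(theta/2)
    sin_t = 2.0 * delta / (1.0 + delta**2)
    one_minus_cos = 2.0 * delta**2 / (1.0 + delta**2)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    u_perp = np.hypot(u[0], u[1])
    if u_perp > 1e-30:                            # general rotation of u
        du = np.array([
            (u[0]/u_perp)*u[2]*sin_t*np.cos(phi) - (u[1]/u_perp)*u_mag*sin_t*np.sin(phi) - u[0]*one_minus_cos,
            (u[1]/u_perp)*u[2]*sin_t*np.cos(phi) + (u[0]/u_perp)*u_mag*sin_t*np.sin(phi) - u[1]*one_minus_cos,
            -u_perp*sin_t*np.cos(phi) - u[2]*one_minus_cos,
        ])
    else:                                         # u aligned with z
        du = np.array([u_mag*sin_t*np.cos(phi), u_mag*sin_t*np.sin(phi), -u_mag*one_minus_cos])
    mu = m1 * m2 / (m1 + m2)                      # reduced mass
    return v1 + (mu / m1) * du, v2 - (mu / m2) * du   # momentum-conserving update
```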
Numerical investigation of compaction of deformable particles with bonded-particle model
NASA Astrophysics Data System (ADS)
Dosta, Maksym; Costa, Clara; Al-Qureshi, Hazim
2017-06-01
In this contribution, a novel approach developed for the microscale modelling of particles which undergo large deformations is presented. The proposed method is based on the bonded-particle model (BPM) and a multi-stage strategy to adjust material and model parameters. In the BPM, modelled objects are represented as agglomerates which consist of smaller, ideally spherical particles connected by cylindrical solid bonds. Each bond is considered as a separate object, and in each time step the forces and moments acting on it are calculated. The developed approach has been applied to simulate the compaction of elastomeric rubber particles as single particles or in a random packing. To describe the complex mechanical behaviour of the particles, the solid bonds were modelled as ideally elastic beams. The functional parameters of the solid bonds as well as the material parameters of bonds and primary particles were estimated based on experimental data for rubber spheres. The obtained results for the acting force and for particle deformations during uniaxial compression are in good agreement with experimental data at higher strains.
Xia, Kelin
2017-12-20
In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well, with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of the MVP-ANM model for large-sized biomolecules has been demonstrated using two poliovirus structures. The paper ends with a conclusion.
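As context for the comparison with GNM mentioned above, here is a minimal sketch of a conventional Gaussian network model B-factor prediction from coarse-grained node coordinates; the 7 Å cutoff and unit force constant are generic placeholder values, not those of the MVP-ENM.

```python
# Minimal conventional GNM B-factor calculation (baseline, not the MVP-GNM).
import numpy as np

def gnm_bfactors(coords, cutoff=7.0, gamma=1.0, kT=1.0):
    """coords: (N, 3) array of coarse-grained node positions."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)          # off-diagonal contacts -> -1
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact degree
    ginv = np.linalg.pinv(gamma * kirchhoff)         # pseudo-inverse drops the zero mode
    return (8.0 * np.pi**2 / 3.0) * kT * np.diag(ginv)    # B_i from mean-square fluctuations
```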
NASA Astrophysics Data System (ADS)
Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.
2017-07-01
We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.
Scaling and modeling of turbulent suspension flows
NASA Technical Reports Server (NTRS)
Chen, C. P.
1989-01-01
Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid continuum formulation. The modes of particle-fluid interaction are classified based on the ratio of length and time scales, which depends on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable to the Kolmogorov length scale, and concentrations low enough to neglect direct particle-particle interaction, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for polydispersed effects and the extension to dense suspension flows.
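A minimal sketch of the time-scale-ratio argument above: the particle response time from Stokes drag compared with the Kolmogorov time scale gives a Stokes number that classifies the interaction regime. The numerical values in the usage line are illustrative only.

```python
# Stokes number from the standard Stokes-drag response time and Kolmogorov time.
def stokes_number(rho_p, d_p, mu, nu, eps):
    tau_p = rho_p * d_p**2 / (18.0 * mu)   # particle response time (Stokes drag)
    tau_k = (nu / eps) ** 0.5              # Kolmogorov time scale
    return tau_p / tau_k

# Example: 50 um glass beads in air with a dissipation rate of 1 m^2/s^3 (illustrative).
print(stokes_number(rho_p=2500.0, d_p=50e-6, mu=1.8e-5, nu=1.5e-5, eps=1.0))
```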
A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling
NASA Astrophysics Data System (ADS)
Moore, Chandler; Akiki, Georges; Balachandar, S.
2017-11-01
This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using additional DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
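A conceptual sketch of the hybrid strategy described above, under the assumption that the correction can be expressed as a linear least-squares fit on some feature set: a statistical model is trained on the residual between physics-model predictions and DNS data and then superimposed on the physics prediction. The feature construction and function names are placeholders, not the PIEP formulation.

```python
# Hybrid physics + data-driven correction: fit and apply a residual model.
import numpy as np

def fit_residual_model(features, f_physics, f_dns):
    """Least-squares fit of the DNS-minus-physics residual on given features."""
    residual = f_dns - f_physics
    coeffs, *_ = np.linalg.lstsq(features, residual, rcond=None)
    return coeffs

def hybrid_predict(features, f_physics, coeffs):
    return f_physics + features @ coeffs   # physics prediction + learned correction
```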
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
A Process-Based Transport-Distance Model of Aeolian Transport
NASA Astrophysics Data System (ADS)
Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.
2017-12-01
We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
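A toy sketch of the transport-distance bookkeeping described above: particles detach with some probability, draw hop lengths from an exponential distribution, and the flux through a plane is the number of hops crossing it. The detachment probability and mean hop length would in practice depend on particle size class; here they are plain inputs.

```python
# Exponential transport-distance sampling and a simple flux count (illustrative).
import numpy as np

def aeolian_flux(x0, p_detach, mean_hop, x_plane, rng=np.random.default_rng()):
    """x0: initial particle positions; returns (crossings per step, new positions)."""
    detached = rng.random(len(x0)) < p_detach              # which particles detach
    hops = rng.exponential(mean_hop, size=len(x0)) * detached
    x1 = x0 + hops
    crossings = np.count_nonzero((x0 < x_plane) & (x1 >= x_plane))
    return crossings, x1
```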
Kinematic Model of Transient Shape-Induced Anisotropy in Dense Granular Flow
NASA Astrophysics Data System (ADS)
Nadler, B.; Guillard, F.; Einav, I.
2018-05-01
Nonspherical particles are ubiquitous in nature and industry, yet previous theoretical models of granular media are mostly limited to systems of spherical particles. The problem is that in systems of nonspherical anisotropic particles, dynamic particle alignment critically affects their mechanical response. To study the tendency of such particles to align, we propose a simple kinematic model that relates the flow to the evolution of particle alignment with respect to each other. The validity of the proposed model is supported by comparison with particle-based simulations for various particle shapes ranging from elongated rice-like (prolate) to flattened lentil-like (oblate) particles. The model shows good agreement with the simulations for both steady-state and transient responses, and advances the development of comprehensive constitutive models for shape-anisotropic particles.
Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger
2012-09-01
Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design-of-experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) was used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48% with thermal gravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results of this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
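A hedged sketch of the chemometric workflow summarized above, using scikit-learn's PLS regression to relate preprocessed NIR spectra to a reference moisture content. The standard-normal-variate preprocessing and the number of latent variables are illustrative choices, not necessarily those used in the study.

```python
# NIR -> moisture calibration with PLS (scikit-learn); preprocessing is illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate preprocessing, applied row-wise to the spectra."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

def calibrate_moisture(spectra, moisture, n_components=4):
    pls = PLSRegression(n_components=n_components)
    pls.fit(snv(spectra), moisture)
    return pls

# RMSEP on a held-out set, e.g.:
# rmsep = np.sqrt(np.mean((pls.predict(snv(test_spectra)).ravel() - test_moisture) ** 2))
```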
Anomalous diffusion for bed load transport with a physically-based model
NASA Astrophysics Data System (ADS)
Fan, N.; Singh, A.; Foufoula-Georgiou, E.; Wu, B.
2013-12-01
Diffusion of bed load particles shows both normal and anomalous behavior at different spatio-temporal scales. Understanding and quantifying these different types of diffusion is important not only for the development of theoretical models of particle transport but also for practical purposes, e.g., river management. Here we extend a recently proposed physically-based model of particle transport by Fan et al. [2013] to develop an Episodic Langevin equation (ELE) for individual particle motion, which reproduces the episodic movement (start and stop) of sediment particles. Using the proposed ELE we simulate particle movements for a large number of uniform-size particles, incorporating different probability distribution functions (PDFs) of particle waiting time. For exponential PDFs of waiting times, particles reveal ballistic motion at short time scales and turn to normal diffusion at long time scales. The PDF of simulated particle travel distances also changes shape from exponential to Gamma to Gaussian with increasing timescale, implying different diffusion scaling regimes. For a power-law PDF (with exponent -μ) of waiting times, the asymptotic behavior of particles at long time scales reveals both super-diffusion and sub-diffusion; however, only very heavy-tailed waiting times (i.e., 1.0 < μ < 1.5) result in sub-diffusion. We suggest that the contrast between our results and previous studies (e.g., studies based on fractional advection-diffusion models with thin/heavy-tailed particle hops and waiting times) could be due to the assumption in those studies that hops are achieved instantaneously, whereas in reality particles achieve their hops within finite times (as simulated here), even if the hop times are much shorter than the waiting times. In summary, this study stresses the need to rethink alternatives to previous models, such as fractional advection-diffusion equations, for studying the anomalous diffusion of bed load particles. The implications of these results for modeling sediment transport are discussed.
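A simplified sketch of the start-and-stop motion described above: each particle alternates between waiting periods drawn from a chosen PDF and hops of finite duration, and the ensemble mean-square displacement (MSD) versus time diagnoses normal versus anomalous scaling. Distributions, parameters, and the coarse sampling are illustrative; this is not the ELE itself.

```python
# Episodic (rest-then-hop) random walk and its ensemble MSD (illustrative only).
import numpy as np

def msd_vs_time(sample_times, n_particles=2000, hop_len=1.0, hop_time=0.5,
                waiting=lambda rng: rng.exponential(5.0),
                rng=np.random.default_rng(0)):
    """Ensemble MSD at increasing sample_times; positions sampled to the nearest completed hop."""
    msd = np.zeros(len(sample_times))
    for _ in range(n_particles):
        t, x = 0.0, 0.0
        xs = np.zeros(len(sample_times))
        for i, ts in enumerate(sample_times):
            while t < ts:
                t += waiting(rng) + hop_time          # rest period plus finite hop duration
                x += rng.exponential(hop_len)         # hop length
            xs[i] = x
        msd += xs**2
    return msd / n_particles

# Heavy-tailed waiting times can be swapped in, e.g.
# waiting=lambda rng: rng.pareto(1.2) + 1.0
# A log-log MSD slope of ~1 indicates normal diffusion; deviations indicate anomalous regimes.
```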
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Hsu-Chi; Phalen, R.F.; Chang, I.
1995-12-01
The National Council on Radiation Protection and Measurements (NCRP) in the United States and the International Commission on Radiological Protection (ICRP) have been independently reviewing and revising respiratory tract dosimetry models for inhaled radioactive aerosols. The newly proposed NCRP respiratory tract dosimetry model represents a significant change in philosophy from the old ICRP Task Group model. The proposed NCRP model describes respiratory tract deposition, clearance, and dosimetry for radioactive substances inhaled by workers and the general public and is expected to be published soon. In support of the NCRP proposed model, ITRI staff members have been developing computer software. Although this software is still incomplete, the deposition portion has been completed and can be used to calculate inhaled particle deposition within the respiratory tract for particle sizes as small as radon and radon progeny (approximately 1 nm) to particles larger than 100 μm. Recently, ICRP published their new dosimetric model for the respiratory tract, ICRP66. Based on ICRP66, the National Radiological Protection Board of the UK developed PC-based software, LUDEP, for calculating particle deposition and internal doses. The purpose of this report is to compare the calculated respiratory tract deposition of particles using the NCRP/ITRI model and the ICRP66 model, under the same particle size distribution and breathing conditions. In summary, the general trends of the deposition curves for the two models were similar.
NASA Astrophysics Data System (ADS)
Conny, J. M.; Ortiz-Montalvo, D. L.
2017-12-01
In the remote sensing of atmospheric aerosols, coarse-mode dust particles are often modeled optically as a collection of spheroids. However, atmospheric particles rarely resemble simplified shapes such as spheroids. Moreover, individual particles often have a heterogeneous composition and may not be sufficiently modeled as a single material. In this work, we determine the optical properties of dust particles based on 3-dimensional models of individual particles from focused ion-beam (FIB) tomography. We compare the optical properties of the actual particles with the particles as simplified shapes including one or more spheres, an ellipsoid, cube, rectangular prism, or tetrahedron. FIB tomography is performed with a scanning electron microscope equipped with an ion-beam column. The ion beam slices through the particle incrementally as the electron beam images each slice. Element maps of the particle may be acquired with energy-dispersive x-ray spectroscopy. The images and maps are used to create the 3-D spatial model, from which the discrete dipole approximation method is used to calculate extinction, single scattering albedo, asymmetry parameter, and the phase function. Models of urban dust show that shape is generally more important than accounting for composition heterogeneity. However, if a particle has material phases with widely varying refractive indexes, a geometric model may be insufficient if it does not incorporate heterogeneity. Models of Asian dust show that geometric models generally exhibit lower extinction efficiencies than the actual particles, suggesting that simplified models do not adequately account for particle surface roughness. Nevertheless, in most cases the extinction from the tetrahedron model comes closest to that of the actual particles, suggesting that accounting for particle angularity is important. The phase function from the tetrahedron model is comparable to the ellipsoid model and generally close to the actual particle, particularly in the backscatter direction (90° to 180°). Current work focuses on optical models of particles with a strongly absorbing soot phase attached to a scattering mineral phase.
Estimating Colloidal Contact Model Parameters Using Quasi-Static Compression Simulations.
Bürger, Vincent; Briesen, Heiko
2016-10-05
For colloidal particles interacting in suspensions, clusters, or gels, contact models should attempt to include all physical phenomena experimentally observed. One critical point when formulating a contact model is to ensure that the interaction parameters can be easily obtained from experiments. Experimental determinations of contact parameters for particles either are based on bulk measurements for simulations on the macroscopic scale or require elaborate setups for obtaining tangential parameters such as using atomic force microscopy. However, on the colloidal scale, a simple method is required to obtain all interaction parameters simultaneously. This work demonstrates that quasi-static compression of a fractal-like particle network provides all the necessary information to obtain particle interaction parameters using a simple spring-based contact model. These springs provide resistances against all degrees of freedom associated with two-particle interactions, and include critical forces or moments where such springs break, indicating a bond-breakage event. A position-based cost function is introduced to show the identifiability of the two-particle contact parameters, and a discrete, nonlinear, and non-gradient-based global optimization method (simplex with simulated annealing, SIMPSA) is used to minimize the cost function calculated from deviations of particle positions. Results show that, in principle, all necessary contact parameters for an arbitrary particle network can be identified, although numerical efficiency as well as experimental noise must be addressed when applying this method. Such an approach lays the groundwork for identifying particle-contact parameters from a position-based particle analysis for a colloidal system using just one experiment. Spring constants also directly influence the time step of the discrete-element method, and a detailed knowledge of all necessary interaction parameters will help to improve the efficiency of colloidal particle simulations.
On the modeling of the 2010 Gulf of Mexico Oil Spill
NASA Astrophysics Data System (ADS)
Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.
2011-09-01
Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability to advect oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, oil in a single state, and particle removal modeled as a Monte Carlo process parameterized by a single removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Macondo well. The initial conditions (ICs) of oil particle location for the two-dimensional surface oil trajectory forecasts are based on a fusion of all available information, including satellite-based analyses. The resulting oil map is digitized into a shape file within which a polygon-filling software generates longitude and latitude with variable particle density depending on the amount of oil present in the observations for the IC. The more complex system assumes three states for the oil (light, medium, heavy), each with a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields for predicting particle motion from data-assimilative models produced better particle trajectory distributions than a free-running model with no data assimilation. Monte Carlo simulations of the three-dimensional oil particle trajectories were performed, with ensembles generated by perturbing the size of the oil particles and the fraction in a given size range released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. An average of these results indicates that after three months about 25% of the oil remains in the water column and that most of this oil is below 800 m.
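A hedged sketch of one time step of the simpler, two-dimensional system described above: advection by a model velocity field, a random-walk term standing in for the Lagrangian stochastic sub-grid model, and Monte Carlo removal with a single rate. The velocity field, diffusivity, and removal rate are placeholders.

```python
# One advection/diffusion/removal step for surface oil particles (illustrative).
import numpy as np

def step_particles(pos, velocity_at, dt, diff_coeff, removal_rate,
                   rng=np.random.default_rng()):
    """pos: (N, 2) particle positions in meters; returns surviving positions."""
    pos = pos + velocity_at(pos) * dt                                      # advection
    pos = pos + rng.normal(0.0, np.sqrt(2.0 * diff_coeff * dt), pos.shape) # sub-grid random walk
    survive = rng.random(len(pos)) > removal_rate * dt                     # weathering removal
    return pos[survive]
```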
Modeling of the rough spherical nanoparticles manipulation on a substrate based on the AFM nanorobot
NASA Astrophysics Data System (ADS)
Zakeri, M.; Faraji, J.
2014-12-01
In this paper, the dynamic behavior of rough spherical micro/nanoparticles during pulling/pushing on a flat substrate is investigated and analyzed. For this purpose, two hexagonal roughness models (George and Cooper) were first studied, and the adhesion force was evaluated for rough-particle manipulation on a flat substrate. These two models were then modified using the Rabinovich theory. The contact adhesion force between a rough particle and a flat substrate was evaluated, and the depth of penetration was determined with the Johnson-Kendall-Roberts contact mechanics theory and the Schwartz method according to the Cooper and George roughness models. The resulting contact theory was then used to build a dynamic model of rough micro/nanoparticle manipulation on a flat substrate. Finally, the dynamic behavior was simulated during pushing of rough spherical gold particles with radii of 50, 150, 400, 600, and 1,000 nm. Results from simulations of particles with several roughness levels on a flat substrate indicated that, compared with smooth particles, inherent particle roughness can reduce the critical force needed for sliding and rolling. For a fixed roughness radius, increasing the roughness height further reduces the critical sliding and rolling forces; conversely, the critical force also decreases with increasing roughness radius. Comparing the two models, the George roughness model predicts a larger adhesion force than the Cooper model, and as a result its predicted critical force is closer to the critical force value of a smooth particle.
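For orientation, the sketch below gives the smooth-sphere JKR relations that typically serve as the baseline before roughness corrections (such as the Rabinovich modification mentioned above) are applied; it is not the paper's rough-contact model. Here w is the work of adhesion, R the particle radius, and K the reduced elastic modulus.

```python
# Smooth-sphere JKR baseline relations (illustrative, not the rough-particle model).
import numpy as np

def jkr_pull_off_force(w, R):
    """JKR critical (pull-off) force for a smooth sphere on a flat substrate."""
    return 1.5 * np.pi * w * R

def jkr_contact_radius(P, w, R, K):
    """JKR contact radius under external load P, with K = (4/3) * E_reduced."""
    wterm = 3.0 * np.pi * w * R
    return ((R / K) * (P + wterm + np.sqrt(2.0 * wterm * P + wterm**2))) ** (1.0 / 3.0)
```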
Particle-based membrane model for mesoscopic simulation of cellular dynamics
NASA Astrophysics Data System (ADS)
Sadeghi, Mohsen; Weikl, Thomas R.; Noé, Frank
2018-01-01
We present a simple and computationally efficient coarse-grained and solvent-free model for simulating lipid bilayer membranes. In order to be used in concert with particle-based reaction-diffusion simulations, the model is purely based on interacting and reacting particles, each representing a coarse patch of a lipid monolayer. Particle interactions include nearest-neighbor bond-stretching and angle-bending and are parameterized so as to reproduce the local membrane mechanics given by the Helfrich energy density over a range of relevant curvatures. In-plane fluidity is implemented with Monte Carlo bond-flipping moves. The physical accuracy of the model is verified by five tests: (i) Power spectrum analysis of equilibrium thermal undulations is used to verify that the particle-based representation correctly captures the dynamics predicted by the continuum model of fluid membranes. (ii) It is verified that the input bending stiffness, against which the potential parameters are optimized, is accurately recovered. (iii) Isothermal area compressibility modulus of the membrane is calculated and is shown to be tunable to reproduce available values for different lipid bilayers, independent of the bending rigidity. (iv) Simulation of two-dimensional shear flow under a gravity force is employed to measure the effective in-plane viscosity of the membrane model and show the possibility of modeling membranes with specified viscosities. (v) Interaction of the bilayer membrane with a spherical nanoparticle is modeled as a test case for large membrane deformations and budding involved in cellular processes such as endocytosis. The results are shown to coincide well with the predicted behavior of continuum models, and the membrane model successfully mimics the expected budding behavior. We expect our model to be of high practical usability for ultra coarse-grained molecular dynamics or particle-based reaction-diffusion simulations of biological systems.
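A minimal sketch of the nearest-neighbor interactions described above: harmonic bond-stretching between bonded particle pairs and harmonic angle-bending between triplets. The functional forms and constants are generic placeholders rather than the potentials parameterized against the Helfrich energy in the paper.

```python
# Generic bond-stretch and angle-bend energies for a particle-based membrane patch.
import numpy as np

def bond_energy(r_i, r_j, k_bond, r0):
    """Harmonic stretching energy of the bond between particles i and j."""
    return 0.5 * k_bond * (np.linalg.norm(r_i - r_j) - r0) ** 2

def angle_energy(r_i, r_j, r_k, k_angle, theta0):
    """Harmonic bending energy of the angle i-j-k centered on particle j."""
    a, b = r_i - r_j, r_k - r_j
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return 0.5 * k_angle * (theta - theta0) ** 2
```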
Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows
NASA Astrophysics Data System (ADS)
Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs
2017-11-01
A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
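A hedged illustration of the parametric-uncertainty idea described above, using brute-force Monte Carlo (rather than the averaged-equation approach of the study) on a particle whose Stokes response time carries an uncertain empirical correction factor; the analytic relaxation solution and the assumed normal distribution of the correction are simplifications for illustration.

```python
# Monte Carlo propagation of an uncertain drag-correction factor (illustrative).
import numpy as np

def particle_velocity(u_fluid, tau_p, t, f_samples):
    """Analytic relaxation u_p(t) for u_p(0)=0 with corrected response time tau_p/f."""
    return u_fluid * (1.0 - np.exp(-f_samples * t / tau_p))

rng = np.random.default_rng(1)
f = rng.normal(1.0, 0.1, 10_000)           # uncertain empirical correction factor
u_p = particle_velocity(u_fluid=1.0, tau_p=0.01, t=0.005, f_samples=f)
print(u_p.mean(), u_p.std())               # first two moments of the quantity of interest
```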
Convergence of the Bouguer-Beer law for radiation extinction in particulate media
NASA Astrophysics Data System (ADS)
Frankel, A.; Iaccarino, G.; Mani, A.
2016-10-01
Radiation transport in particulate media is a common physical phenomenon in natural and industrial processes. Developing predictive models of these processes requires a detailed model of the interaction between the radiation and the particles. Resolving the interaction between the radiation and the individual particles in a very large system is impractical, whereas continuum-based representations of the particle field lend themselves to efficient numerical techniques based on the solution of the radiative transfer equation. We investigate radiation transport through discrete and continuum-based representations of a particle field. Exact solutions for radiation extinction are developed using a Monte Carlo model in different particle distributions. The particle distributions are then projected onto a concentration field with varying grid sizes, and the Bouguer-Beer law is applied by marching across the grid. We show that the continuum-based solution approaches the Monte Carlo solution under grid refinement, but quickly diverges as the grid size approaches the particle diameter. This divergence is attributed to the homogenization error of an individual particle across a whole grid cell. We remark that the concentration energy spectrum of a point-particle field does not approach zero, and thus the concentration variance must also diverge under infinite grid refinement, meaning that no grid-converged solution of the radiation transport is possible.
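A small sketch of the continuum-based calculation referred to above: the Bouguer-Beer transmission obtained by marching a ray across a gridded concentration field, with each cell attenuating the beam according to its local extinction coefficient. The extinction efficiency value and inputs are placeholders.

```python
# Bouguer-Beer transmission along a ray through a gridded concentration field.
import numpy as np

def beer_transmission(number_conc, dx, particle_diam, q_ext=2.0):
    """number_conc and dx: per-cell arrays along the ray; returns transmitted fraction."""
    sigma = q_ext * np.pi * (particle_diam / 2.0) ** 2   # per-particle extinction cross-section
    beta = np.asarray(number_conc) * sigma               # local extinction coefficient
    return np.exp(-np.sum(beta * np.asarray(dx)))        # exp(-optical depth)
```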
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as the discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These different approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of results accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a big loss in accuracy with a little increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed that rivals that of MP-PIC while maintaining a much better accuracy.
Modeling the migration of platinum nanoparticles on surfaces using a kinetic Monte Carlo approach
Li, Lin; Plessow, Philipp N.; Rieger, Michael; ...
2017-02-15
We propose a kinetic Monte Carlo (kMC) model for simulating the movement of platinum particles on supports, based on atom-by-atom diffusion on the surface of the particle. The proposed model was able to reproduce equilibrium cluster shapes predicted using the Wulff construction. The diffusivity of platinum particles was simulated both purely based on random motion and assisted using an external field that causes a drift velocity. The overall particle diffusivity increases with temperature; however, the extracted activation barrier appears to be temperature independent. Additionally, this barrier was found to increase with particle size, as well as with the adhesion between the particle and the support.
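As background for the kMC model described above, here is a generic rejection-free kinetic Monte Carlo step (Bortz-Kalos-Lebowitz style) that selects an event in proportion to its rate and advances time by an exponentially distributed increment; the event list and any Arrhenius rate law are left as placeholders.

```python
# Generic rejection-free kMC step: event selection and time advance.
import numpy as np

def kmc_step(rates, rng=np.random.default_rng()):
    """rates: per-event rates; returns (chosen event index, time increment)."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)  # rate-weighted choice
    dt = -np.log(rng.random()) / total      # exponentially distributed waiting time
    return event, dt

# Arrhenius-type hop rates could be built as nu0 * np.exp(-Ea / (kB * T)) per event.
```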
NASA Technical Reports Server (NTRS)
Mitchell, David L.; Chai, Steven K.; Dong, Yayi; Arnott, W. Patrick; Hallett, John
1993-01-01
The 1 November 1986 FIRE I case study was used to test an ice particle growth model which predicts bimodal size spectra in cirrus clouds. The model was developed from an analytically based model which predicts the height evolution of monomodal ice particle size spectra from the measured ice water content (IWC). Size spectra from the monomodal model are represented by a gamma distribution, N(D) = N₀ D^ν exp(−λD), where D is the ice particle maximum dimension. The slope parameter, λ, and the parameter N₀ are predicted from the IWC through the growth processes of vapor diffusion and aggregation. The model formulation is analytical, computationally efficient, and well suited for incorporation into larger models. The monomodal model has been validated against two other cirrus cloud case studies. From the monomodal size spectra, the size distributions which determine concentrations of ice particles less than about 150 μm are predicted.
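A small helper illustrating the gamma-form size distribution quoted above, N(D) = N₀ D^ν exp(−λD), together with its zeroth moment (the total number concentration); in the paper's scheme N₀ and λ would be predicted from the IWC rather than supplied directly.

```python
# Gamma-form ice particle size distribution and its total number concentration.
import numpy as np
from math import gamma as Gamma

def n_of_d(D, N0, nu, lam):
    """Number density per unit maximum dimension D."""
    return N0 * D**nu * np.exp(-lam * D)

def total_number(N0, nu, lam):
    """Integral of N(D) over D from 0 to infinity."""
    return N0 * Gamma(nu + 1.0) / lam**(nu + 1.0)
```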
Deep particle bed dryout model based on flooding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuan, P.
1987-01-01
Examination of the damaged Three Mile Island Unit 2 (TMI-2) reactor indicates that a deep (approx. 1-m) bed of relatively large (approx. 1-mm) particles was formed in the core. Cooling of such beds is crucial to the arrest of core damage progression. The Lipinski model, based on flows in the bed, has been used to predict the coolability, but uncertainties exist in the turbulent permeability. Models based on flooding at the top of the bed either have a dimensional viscosity term or no viscosity dependence, thus limiting their applicability. This paper presents a dimensionless correlation based on flooding data that involves a liquid Reynolds number. The dryout model derived from this correlation is compared with data for deep beds of large particles at atmospheric pressure, and with other models over a wide pressure range. It is concluded that the present model can give quite accurate predictions for the dryout heat flux of particle beds formed during a light water reactor accident; it is easy to use and agrees with the Lipinski n = 5 model, which requires iterative calculations.
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.
Improved particle position accuracy from off-axis holograms using a Chebyshev model.
Öhman, Johan; Sjödahl, Mikael
2018-01-01
Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
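A hedged sketch of the idea described above: fit a low-order Chebyshev model to the reconstructed phase anomaly along the optical axis and locate the plane where the curvature (second derivative) changes sign. The polynomial degree, the grid, and the way the phase profile is extracted are assumptions, not the paper's exact procedure.

```python
# Chebyshev fit of an axial phase profile and detection of the curvature sign change.
import numpy as np
from numpy.polynomial import chebyshev as C

def axial_position_from_phase(z, phase, deg=6):
    """z: axial sample positions; phase: reconstructed phase anomaly at those positions."""
    coeffs = C.chebfit(z, phase, deg)          # Chebyshev model of the phase anomaly
    curv = C.chebder(coeffs, 2)                # second derivative as a curvature proxy
    z_fine = np.linspace(z.min(), z.max(), 2000)
    c = C.chebval(z_fine, curv)
    sign_change = np.where(np.diff(np.sign(c)) != 0)[0]
    return z_fine[sign_change[0]] if len(sign_change) else None
```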
Three-phase boundary length in solid-oxide fuel cells: A mathematical model
NASA Astrophysics Data System (ADS)
Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf
A mathematical model to calculate the volume specific three-phase boundary length in the porous composite electrodes of solid-oxide fuel cell is presented. The model is exclusively based on geometrical considerations accounting for porosity, particle diameter, particle size distribution, and solids phase distribution. Results are presented for uniform particle size distribution as well as for non-uniform particle size distribution.
Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model
NASA Technical Reports Server (NTRS)
Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.
2009-01-01
Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed to be coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with a strong dependence on the magnetic turbulence, but a rather weak one on the coupled particles' attributes, can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.
Optical depth in particle-laden turbulent flows
NASA Astrophysics Data System (ADS)
Frankel, A.; Iaccarino, G.; Mani, A.
2017-11-01
Turbulent clustering of particles causes an increase in the radiation transmission through gas-particle mixtures. Attempts to capture the ensemble-averaged transmission lead to a closure problem called the turbulence-radiation interaction. A simple closure model based on the particle radial distribution function is proposed to capture the effect of turbulent fluctuations in the concentration on radiation intensity. The model is validated against a set of particle-resolved ray tracing experiments through particle fields from direct numerical simulations of particle-laden turbulence. The form of the closure model is generalizable to arbitrary stochastic media with known two-point correlation functions.
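An illustrative (not the paper's) demonstration of why a closure is needed: when the optical depth fluctuates, the ensemble-averaged transmission ⟨exp(−τ)⟩ exceeds the naive exp(−⟨τ⟩), which is the turbulence-radiation interaction the RDF-based closure above aims to capture. The log-normal fluctuation model below is assumed purely for illustration.

```python
# Jensen-inequality demonstration of the turbulence-radiation interaction effect.
import numpy as np

rng = np.random.default_rng(2)
tau_mean, sigma = 2.0, 0.8
# Log-normal optical-depth fluctuations with the prescribed mean (illustrative choice).
tau = rng.lognormal(np.log(tau_mean) - 0.5 * sigma**2, sigma, 100_000)
print(np.exp(-tau).mean(), np.exp(-tau.mean()))   # averaged vs naive transmission
```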
Minimizing Concentration Effects in Water-Based, Laminar-Flow Condensation Particle Counters
Lewis, Gregory S.; Hering, Susanne V.
2013-01-01
Concentration effects in water condensation systems, such as used in the water-based condensation particle counter, are explored through numeric modeling and direct measurements. Modeling shows that the condensation heat release and vapor depletion associated with particle activation and growth lowers the peak supersaturation. At higher number concentrations, the diameter of the droplets formed is smaller, and the threshold particle size for activation is higher. This occurs in both cylindrical and parallel plate geometries. For water-based systems we find that condensational heat release is more important than is vapor depletion. We also find that concentration effects can be minimized through use of smaller tube diameters, or more closely spaced parallel plates. Experimental measurements of droplet diameter confirm modeling results. PMID:24436507
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity not only to simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulations from reaching time scales that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
Magnetic particle-scanning for ultrasensitive immunodetection on-chip.
Cornaglia, Matteo; Trouillon, Raphaël; Tekin, H Cumhur; Lehnert, Thomas; Gijs, Martin A M
2014-08-19
We describe the concept of magnetic particle-scanning for on-chip detection of biomolecules: a magnetic particle, carrying a low number of antigens (Ag's) (down to a single molecule), is transported by hydrodynamic forces and is subjected to successive stochastic reorientations in an engineered magnetic energy landscape. The latter consists of a pattern of substrate-bound small magnetic particles that are functionalized with antibodies (Ab's). Subsequent counting of the captured Ag-carrying particles provides the detection signal. The magnetic particle-scanning principle is investigated in a custom-built magneto-microfluidic chip and theoretically described by a random walk-based model, in which the trajectory of the contact point between an Ag-carrying particle and the small magnetic particle pattern is described by stochastic moves over the surface of the mobile particle, until this point coincides with the position of an Ag, resulting in the binding of the particle. This model explains the particular behavior of previously reported experimental dose-response curves obtained for two different ligand-receptor systems (biotin/streptavidin and TNF-α) over a wide range of concentrations. Our model shows that magnetic particle-scanning results in a very high probability of immunocomplex formation for very low Ag concentrations, leading to an extremely low limit of detection, down to the single molecule-per-particle level. When compared to other types of magnetic particle-based surface coverage assays, our strategy was found to offer a wider dynamic range (>8 orders of magnitude), as the system does not saturate for concentrations as high as 10^11 Ag molecules in a 5 μL drop. Furthermore, by emphasizing the importance of maximizing the encounter probability between the Ag and the Ab to improve sensitivity, our model also contributes to explaining the behavior of other particle-based heterogeneous immunoassays.
Hybrid modeling method for a DEP based particle manipulation.
Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad
2013-01-30
In this paper, a new modeling approach for dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for another iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results.
Numerical study on turbulence modulation in gas-particle flows
NASA Astrophysics Data System (ADS)
Yan, F.; Lightstone, M. F.; Wood, P. E.
2007-01-01
A mathematical model is proposed based on the Eulerian/Lagrangian approach to account for both the particle crossing-trajectory effect and the extra turbulence production due to particle wake effects. The resulting model, together with existing models from the literature, is applied to two different particle-laden flow configurations, namely, a vertical pipe flow and an axisymmetric downward jet flow. The results show that the proposed model is able to provide improved predictions of the experimental results.
NASA Astrophysics Data System (ADS)
Lee, Eon S.; Polidori, Andrea; Koch, Michael; Fine, Philip M.; Mehadi, Ahmed; Hammond, Donald; Wright, Jeffery N.; Miguel, Antonio H.; Ayala, Alberto; Zhu, Yifang
2013-04-01
This study compares the instrumental performance of three TSI water-based condensation particle counter (WCPC) models measuring particle number concentrations in close proximity (15 m) to a major freeway that has a significant level of heavy-duty diesel traffic. The study focuses on examining instrument biases and performance differences among WCPC models under realistic field operational conditions. Three TSI models (3781, 3783, and 3785) were operated for one month in triplicate (nine units in total) in parallel with two sets of Scanning Mobility Particle Sizer (SMPS) spectrometers for the concurrent measurement of particle size distributions. Inter-model biases under different wind directions were first evaluated using 1-min raw data. Although all three WCPC models agreed well under upwind conditions (lower particle number concentrations, in the range of 10^3-10^4 particles cm^-3), the three models' responses were significantly different under downwind conditions (higher particle number concentrations, above 10^4 particles cm^-3). In an effort to increase inter-model linear correlations, we evaluated the effect of longer averaging time intervals. An averaging time of at least 15 min was found to achieve R^2 values of 0.96 or higher when comparing all three models. Similar results were observed for intra-model comparisons for each of the three models. This strong linear relationship helped identify instrument bias related to particle number concentrations and particle size distributions. The TSI 3783 produced the highest particle counts, followed by the TSI 3785, which reported concentrations 11% lower than the TSI 3783 during downwind conditions. The TSI 3781 recorded particle number concentrations that were 24% lower than those observed by the TSI 3783 during downwind conditions. We found that the TSI 3781 underestimated particles with a count median diameter of less than 45 nm. Although the particle size dependency of instrument performance was most pronounced for the TSI 3781, models 3783 and 3785 also showed some size dependency. In addition, within each tested WCPC model, one unit was found to count significantly differently and to be more sensitive to particle size than the other two. Finally, exponential regression analysis was used to quantify instrumental inter-model bias. Correction equations are proposed to adjust the TSI 3781 and 3785 data to the most recent model, the TSI 3783.
Explicit simulation of ice particle habits in a Numerical Weather Prediction Model
NASA Astrophysics Data System (ADS)
Hashino, Tempei
2007-05-01
This study developed a scheme for the explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rate, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysical processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.
Gyrokinetic modelling of the quasilinear particle flux for plasmas with neutral-beam fuelling
NASA Astrophysics Data System (ADS)
Narita, E.; Honda, M.; Nakata, M.; Yoshida, M.; Takenaga, H.; Hayashi, N.
2018-02-01
A quasilinear particle flux is modelled based on gyrokinetic calculations. The particle flux is estimated by determining two kinds of factors, namely, the coefficients of the off-diagonal terms and a particle diffusivity. In this paper, the methodology to estimate these factors is presented using a subset of JT-60U plasmas. First, the coefficients of the off-diagonal terms are estimated by linear gyrokinetic calculations. Next, to obtain the particle diffusivity, a semi-empirical approach is taken. Most experimental analyses of particle transport have assumed that turbulent particle fluxes are zero in the core region. On the other hand, even in the stationary state, the plasmas in question have a finite turbulent particle flux due to neutral-beam fuelling. By combining estimates of the experimental turbulent particle flux and the coefficients of the off-diagonal terms calculated earlier, the particle diffusivity is obtained. The particle diffusivity should reflect the saturation amplitude of the instabilities. The particle diffusivity is investigated in terms of the effects of the linear instability and the linear zonal-flow response, and it is found that a formula including these effects roughly reproduces the particle diffusivity. The developed framework for prediction of the particle flux is flexible enough to add terms neglected in the current model. The methodology to estimate the quasilinear particle flux has such a low computational cost that a database consisting of the resultant coefficients of off-diagonal terms and particle diffusivity can be constructed to train a neural network. The development of the methodology is the first step towards a neural-network-based particle transport model for fast prediction of the particle flux.
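For orientation, such analyses commonly write the quasilinear flux as a diagonal (diffusive) term plus off-diagonal (convective) contributions; a generic form of this decomposition, not necessarily the exact expression used in the paper, is

$$
\frac{R\,\Gamma}{n} \;=\; D\left(\frac{R}{L_n} + C_T\,\frac{R}{L_T} + C_P\right),
$$

where $D$ is the particle diffusivity, $R/L_n$ and $R/L_T$ are the normalized density and temperature gradients, and $C_T$, $C_P$ are the off-diagonal (thermodiffusion and pure-convection) coefficients. With $C_T$ and $C_P$ supplied by linear gyrokinetics and $\Gamma$ fixed in the stationary state by the beam fuelling source, the single remaining unknown $D$ can be backed out from the experimental flux, which is the semi-empirical step described above.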
DEM code-based modeling of energy accumulation and release in structurally heterogeneous rock masses
NASA Astrophysics Data System (ADS)
Lavrikov, S. V.; Revuzhenko, A. F.
2015-10-01
Based on the discrete element method, the authors model the loading of a physical specimen to describe its capacity to accumulate and release elastic energy. The specimen is modeled as a packing of particles with viscoelastic coupling and friction. The external elastic boundary of the packing is represented by particles connected by elastic springs. The latter amounts to introducing an additional special potential of interaction between the boundary particles, which acts even when there is no direct contact between the particles. On the whole, the model specimen represents an element of a medium capable of accumulating deformation energy in the form of internal stresses. The data from the numerical modeling of the physical specimen compression and the laboratory testing results show good qualitative consistency.
Rengasamy, Samy; Eimer, Benjamin C
2012-01-01
National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm size. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. Results suggest that to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number of particles <100 nm and a count (particle number)-based detector.
NASA Astrophysics Data System (ADS)
Cui, Z.; Welty, C.; Maxwell, R. M.
2011-12-01
Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code, SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator-split into two steps: (1) the physical movement of the particles, including attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions, including biodegradation, which are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1 the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain under variably saturated conditions. A potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details and verification results of the enhanced code against one-dimensional analytical solutions and other existing numerical models will be presented, in addition to a discussion of implementation challenges.
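A minimal one-dimensional sketch of this operator-split pattern is given below. The velocity, dispersivity, Monod parameters and grid are illustrative assumptions (not SLIM-FAST values), and the final remapping of cell concentrations back onto the particles is omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)

def random_walk_step(x, velocity, dispersivity, dt):
    """Step 1: Lagrangian advection plus random-walk dispersion (1-D)."""
    D = dispersivity * velocity
    return x + velocity * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape)

def monod(t, c, mu_max, K_s, K_n, yield_):
    """Dual-Monod kinetics for an electron donor c[0] and nitrate c[1]."""
    rate = mu_max * c[0] / (K_s + c[0]) * c[1] / (K_n + c[1])
    return [-rate, -rate / yield_]

def particles_to_grid(x, mass, edges):
    conc, _ = np.histogram(x, bins=edges, weights=mass)
    return conc / np.diff(edges)                       # mass per unit length

# one operator-split cycle for 5000 particles on a 100 m domain
x = rng.uniform(0.0, 10.0, 5000)
mass = np.full_like(x, 1e-3)
edges = np.linspace(0.0, 100.0, 101)
x = random_walk_step(x, velocity=0.5, dispersivity=0.1, dt=1.0)        # step 1
donor = particles_to_grid(x, mass, edges)
nitrate = np.full(donor.shape, 2.0)
for i in range(donor.size):                                            # step 2, cell by cell
    sol = solve_ivp(monod, (0.0, 1.0), [donor[i], nitrate[i]],
                    args=(1.0, 0.5, 0.2, 0.8))
    donor[i], nitrate[i] = sol.y[:, -1]
```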
Conceptual Change Texts in Chemistry Teaching: A Study on the Particle Model of Matter
ERIC Educational Resources Information Center
Beerenwinkel, Anne; Parchmann, Ilka; Grasel, Cornelia
2011-01-01
This study explores the effect of a conceptual change text on students' awareness of common misconceptions about the particle model of matter. The conceptual change text was designed based on principles of text comprehensibility, of conceptual change instruction, and of instructional approaches for introducing the particle model. It was evaluated in…
NASA Astrophysics Data System (ADS)
Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.
2010-11-01
Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two components: one composed only of spherical particles and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with a size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust. The uncertainty of the retrieved particle surface and volume concentrations and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. The algorithm was shown to provide reasonable results consistent with the available independent information about the observed aerosol event.
ERIC Educational Resources Information Center
Samarapungavan, Ala; Bryan, Lynn; Wills, Jamison
2017-01-01
In this paper, we present a study of second graders' learning about the nature of matter in the context of content-rich, model-based inquiry instruction. The goal of instruction was to help students learn to use simple particle models to explain states of matter and phase changes. We examined changes in students' ideas about matter, the coherence…
Diffusion rate limitations in actin-based propulsion of hard and deformable particles.
Dickinson, Richard B; Purich, Daniel L
2006-08-15
The mechanism by which actin polymerization propels intracellular vesicles and invasive microorganisms remains an open question. Several recent quantitative studies have examined the propulsion of biomimetic particles such as polystyrene microspheres, phospholipid vesicles, and oil droplets. In addition to allowing quantitative measurement of parameters such as the dependence of particle speed on particle size, these systems have also revealed characteristic behaviors such as saltatory motion of hard particles and oscillatory deformation of soft particles. Such measurements and observations provide tests for proposed mechanisms of actin-based motility. In the actoclampin filament end-tracking motor model, particle-surface-bound filament end-tracking proteins are involved in load-insensitive processive insertion of actin subunits onto elongating filament plus-ends that are persistently tethered to the surface. In contrast, the tethered-ratchet model assumes working filaments are untethered and the free-ended filaments grow as thermal ratchets in a load-sensitive manner. This article presents a model for the diffusion and consumption of actin monomers during actin-based particle propulsion to predict the monomer concentration field around motile particles. The results suggest that the various behaviors of biomimetic particles, including dynamic saltatory motion of hard particles and oscillatory vesicle deformations, can be quantitatively and self-consistently explained by load-insensitive, diffusion-limited elongation of (+)-end-tethered actin filaments, consistent with predictions of the actoclampin filament end-tracking mechanism.
Ciesielski, Peter N.; Crowley, Michael F.; Nimlos, Mark R.; ...
2014-12-09
Biomass exhibits a complex microstructure of directional pores that impacts how heat and mass are transferred within biomass particles during conversion processes. However, models of biomass particles used in simulations of conversion processes typically employ oversimplified geometries such as spheres and cylinders and neglect intraparticle microstructure. In this study, we develop 3D models of biomass particles with size, morphology, and microstructure based on parameters obtained from quantitative image analysis. We obtain measurements of particle size and morphology by analyzing large ensembles of particles that result from typical size reduction methods, and we delineate several representative size classes. Microstructural parameters, including cell wall thickness and cell lumen dimensions, are measured directly from micrographs of sectioned biomass. A general constructive solid geometry algorithm is presented that produces models of biomass particles based on these measurements. Next, we employ the parameters obtained from image analysis to construct models of three different particle size classes from two different feedstocks representing a hardwood poplar species (Populus tremuloides, quaking aspen) and a softwood pine (Pinus taeda, loblolly pine). Finally, we demonstrate the utility of the models and the effects of explicit microstructure by performing finite-element simulations of intraparticle heat and mass transfer, and the results are compared to similar simulations using traditional simplified geometries. In conclusion, we show how the behavior of particle models with more realistic morphology and explicit microstructure departs from that of spherical models in simulations of transport phenomena and that species-dependent differences in microstructure impact simulation results in some cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Boning; Herbold, Eric B.; Homel, Michael A.
2015-12-01
An adaptive particle fracture model for the poly-ellipsoidal discrete element method is developed. A poly-ellipsoidal particle breaks into several sub-poly-ellipsoids according to the Hoek-Brown fracture criterion, based on the continuum stress and the maximum tensile stress in contacts. Weibull theory is also introduced to account for statistical and size effects on particle strength. Finally, a high strain-rate split Hopkinson pressure bar experiment on silica sand is simulated using this newly developed model. Comparisons with experiments show that the particle fracture model can capture the mechanical behavior of this experiment very well, both in the stress-strain response and in the particle size redistribution. The effects of density and packing of the samples are also studied in numerical examples.
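The Weibull statistics with a volume-based size effect can be illustrated with a short sampling routine; the reference strength sigma0, reference volume V0 and modulus m below are assumed values for illustration, not the parameters calibrated in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def weibull_strength(volumes, sigma0=200e6, V0=1e-9, m=6.0):
    """Draw a tensile strength for each particle.

    Survival probability P_s(sigma) = exp[-(V/V0) * (sigma/sigma0)^m], so
    inverting with a uniform deviate U gives
    sigma = sigma0 * ((V0/V) * ln(1/U))**(1/m):
    larger particles are statistically weaker.
    """
    U = rng.uniform(size=volumes.shape)
    return sigma0 * ((V0 / volumes) * np.log(1.0 / U)) ** (1.0 / m)

volumes = rng.uniform(0.5e-9, 5e-9, size=10000)        # particle volumes, m^3
strengths = weibull_strength(volumes)
# a particle would be flagged for splitting when its contact-derived tensile
# stress exceeds its sampled strength
print(strengths.mean() / 1e6, "MPa mean strength")
```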
A Core-Particle Model for Periodically Focused Ion Beams with Intense Space-Charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lund, S M; Barnard, J J; Bukh, B
2006-08-02
A core-particle model is derived to analyze transverse orbits of test particles evolving in the presence of a core ion beam described by the KV distribution. The core beam has uniform density within an elliptical cross-section and can be applied to model both quadrupole- and solenoidal-focused beams in periodic or aperiodic lattices. Efficient analytical descriptions of the electrostatic space-charge fields external to the beam core are derived to simplify the model equations. Image charge effects are analyzed for an elliptical beam centered in a round, conducting pipe to estimate model corrections resulting from image charge nonlinearities. Transformations are employed to remove the coherent flutter motion associated with oscillations of the ion beam core due to rapidly varying, linear applied focusing forces. Diagnostics for particle trajectories, Poincaré phase-space projections, and single-particle emittances based on these transformations better illustrate the effects of nonlinear forces acting on particles evolving outside the core. A numerical code has been written based on this model. Example applications illustrate model characteristics. The core-particle model described has recently been applied to identify physical processes leading to space-charge transport limits for an rms matched beam in a periodic quadrupole focusing channel [Lund and Chawla, Nuc. Instr. and Meth. A 561, 203 (2006)]. Further characteristics of these processes are presented here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, A. T.; Betzler, B. R.; Burke, T. P.
Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels, and (3) a stochastic mixture of the four types of fuel particles in which every kernel has its diameter sampled from a continuous probability density function. All of the discrete-diameter and continuous-diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete-diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact in which the locations (and kernel diameters for the continuous-diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small, but the runtime of the lattice-based fuel model was roughly 20 times shorter than that of the stochastic-based fuel model. (authors)
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2007-11-01
The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained by refining the description of the turbulent velocity field of the carrier fluid to include a difference between the time scales of the strain-rate and rotation-rate correlations. The refined model results in better agreement with direct numerical simulations for aerosol particles.
Spatial distribution of mineral dust single scattering albedo based on DREAM model
NASA Astrophysics Data System (ADS)
Kuzmanoski, Maja; Ničković, Slobodan; Ilić, Luka
2016-04-01
Mineral dust comprises a significant part of the global aerosol burden. There is a large uncertainty in estimating the role of dust in the Earth's climate system, partly due to poor characterization of its optical properties. Single scattering albedo is one of the key optical properties determining the radiative effects of dust particles. While it depends on dust particle sizes, it is also strongly influenced by dust mineral composition, particularly the content of light-absorbing iron oxides and the mixing state (external or internal). However, an assumption of uniform dust composition is typically used in models. To better represent single scattering albedo in atmospheric dust models, as required to increase the accuracy of dust radiative effect estimates, it is necessary to include information on particle mineral content. In this study, we present the spatial distribution of dust single scattering albedo based on the Dust Regional Atmospheric Model (DREAM) with incorporated particle mineral composition. The domain of the model covers Northern Africa, the Middle East and the European continent, with the horizontal resolution set to 1/5°. It uses eight particle size bins within the 0.1-10 μm radius range. Focusing on the dust episode of June 2010, we analyze the spatial distribution of dust single scattering albedo over the model domain, based on particle sizes and mineral composition from the model output; we discuss changes in this optical property after long-range transport. Furthermore, we examine how the AERONET-derived aerosol properties respond to dust mineralogy. Finally, we use AERONET data to evaluate the model-based single scattering albedo. Acknowledgement: We would like to thank the AERONET network and the principal investigators, as well as their staff, for establishing and maintaining the AERONET sites used in this work.
Rong, Guan; Liu, Guang; Zhou, Chuang-bing
2013-01-01
Since rocks are aggregates of mineral particles, the effect of mineral microstructure on the macroscopic mechanical behavior of rocks cannot be neglected. Rock samples of four different particle shapes are established in this study based on the clumped particle model, and a sphericity index is used to quantify particle shape. Model parameters for simulation in PFC are obtained by a triaxial compression test of quartz sandstone, and simulation of the triaxial compression test is then conducted on four rock samples with different particle shapes. The results show that stress thresholds of the rock samples, such as crack initiation stress, crack damage stress, and peak stress, decrease with increasing sphericity index. The increase of sphericity leads to a drop in elastic modulus and a rise in Poisson's ratio, while decreasing sphericity usually results in an increase in cohesion and internal friction angle. Based on the volume change of the rock samples during the simulated triaxial compression test, the variation of dilation angle with plastic strain is also studied. PMID:23997677
Fast Simulation of Membrane Filtration by Combining Particle Retention Mechanisms and Network Models
NASA Astrophysics Data System (ADS)
Krupp, Armin; Griffiths, Ian; Please, Colin
2016-11-01
Porous membranes are used for their particle retention capabilities in a wide range of industrial filtration processes. The underlying mechanisms for particle retention are complex and often change during the filtration process, making it hard to predict the change in permeability of the membrane during the process. Recently, stochastic network models have been shown to predict the change in permeability based on retention mechanisms, but remain computationally intensive. We show that the averaged behaviour of such a stochastic network model can efficiently be computed using a simple partial differential equation. Moreover, we also show that the geometric structure of the underlying membrane and particle-size distribution can be represented in our model, making it suitable for modelling particle retention in interconnected membranes as well. We conclude by demonstrating the particular application to microfluidic filtration, where the model can be used to efficiently compute a probability density for flux measurements based on the geometry of the pores and particles. A. U. K. is grateful for funding from Pall Corporation and the Mathematical Institute, University of Oxford. I.M.G. gratefully acknowledges support from the Royal Society through a University Research Fellowship.
NASA Astrophysics Data System (ADS)
Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.
2017-11-01
Point-particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid, the point-particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below the particle size. We present an approach to model the interaction between particles and fluid for finite-size particles that permits convergence. We use the generalized Faxén form to compute the force on a particle and compare the results against the traditional point-particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in each cell. The application is to a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where convergence is achieved for a well-formulated force model and back coupling for finite-size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check whether the approach also improves accuracy compared to the point-particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
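The apportioning step can be illustrated in one dimension: a finite-size particle is treated as an interval, and each Eulerian cell receives the share of the coupling force proportional to its overlap with that interval. The grid and particle values below are illustrative, and the sketch is a simplified stand-in for the volume/surface-fraction weighting described above.

```python
import numpy as np

def apportion_force(force, x_p, diameter, cell_edges):
    """Return the per-cell share of `force` for a finite-size particle.

    The particle is treated as the interval [x_p - d/2, x_p + d/2]; each cell
    receives force * (overlap length / particle length), so the total force is
    conserved even when the cells are smaller than the particle.
    """
    left, right = x_p - 0.5 * diameter, x_p + 0.5 * diameter
    overlap = np.clip(np.minimum(cell_edges[1:], right)
                      - np.maximum(cell_edges[:-1], left), 0.0, None)
    return force * overlap / diameter

edges = np.linspace(0.0, 1.0, 21)            # 20 cells, each 0.05 wide
shares = apportion_force(force=1.0, x_p=0.33, diameter=0.12, cell_edges=edges)
print(shares.sum())                           # ~1.0: the force is conserved
```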
Modeling the C. elegans nematode and its environment using a particle system.
Rönkkö, Mauno; Wong, Garry
2008-07-21
A particle system, as understood in computer science, is a novel technique for modeling living organisms in their environment. Such particle systems have traditionally been used for modeling the complex dynamics of fluids and gases. In the present study, a particle system was devised to model the movement and feeding behavior of the nematode Caenorhabditis elegans in three different virtual environments: gel, liquid, and soil. The results demonstrate that distinct movements of the nematode can be attributed to its mechanical interactions with the virtual environment. These results also revealed emergent properties associated with modeling organisms within environment-based systems.
Relevance of near-Earth magnetic field modeling in deriving SEP properties using ground-based data
NASA Astrophysics Data System (ADS)
Kanellakopoulos, Anastasios; Plainaki, Christina; Mavromichalaki, Helen; Laurenza, Monica; Gerontidou, Maria; Storini, Marisa; Andriopoulou, Maria
2014-05-01
Ground Level Enhancements (GLEs) are short-term increases observed in the cosmic ray intensity records of ground-based particle detectors such as neutron monitors (NMs) or muon detectors; they are related to the arrival of solar relativistic particles in the terrestrial environment. Hence, GLE events are related to the most energetic class of solar energetic particle (SEP) events. In this work we investigate how the use of different magnetospheric field models can influence the derivation of relativistic SEP properties when modeling GLE events. As a case study, we examine the event of 2012 May 17 (also known as GLE71), registered by ground-based NMs. We apply the Tsyganenko 89 and Tsyganenko 96 models in order to calculate the trajectories of the arriving SEPs in the near-Earth environment. We show that the intersection of the SEP trajectories with the atmospheric layer at ~20 km from the Earth's surface (i.e., where the flux of the generated secondary particles is maximum) forms, for each ground-based neutron monitor, a specific viewing region that depends on the magnetospheric field configuration. Then, we apply the Neutron Monitor Based Anisotropic GLE Pure Power Law (NMBANGLE PPOLA) model (Plainaki et al. 2010, Solar Phys, 264, 239) in order to derive the spectral properties of the related SEP event and the spatial distributions of the SEP fluxes impacting the Earth's atmosphere. We examine the dependence of the results on the magnetic field models used and evaluate their range of validity. Finally, we discuss information derived by modeling the SEP spectrum in the framework of particle acceleration scenarios.
2015-01-01
An immersion Raman probe was used in emulsion copolymerization reactions to measure monomer concentrations and particle sizes. Quantitative determination of monomer concentrations is feasible in two-monomer copolymerizations, but only the overall conversion could be measured by Raman spectroscopy in a four-monomer copolymerization. The feasibility of measuring monomer conversion and particle size was established using partial least-squares (PLS) calibration models. A simplified theoretical framework for the measurement of particle sizes from photon scattering is presented, based on the elastic-sphere-vibration and surface-tension models. PMID:26900256
Insights into DNA-mediated interparticle interactions from a coarse-grained model
NASA Astrophysics Data System (ADS)
Ding, Yajun; Mittal, Jeetain
2014-11-01
DNA-functionalized particles have great potential for the design of complex self-assembled materials. The major hurdle in realizing crystal structures from DNA-functionalized particles is expected to be kinetic barriers that trap the system in metastable amorphous states. Therefore, it is vital to explore the molecular details of particle assembly processes in order to understand the underlying mechanisms. Molecular simulations based on coarse-grained models can provide a convenient route to explore these details. Most of the currently available coarse-grained models of DNA-functionalized particles ignore key chemical and structural details of DNA behavior. These models are therefore limited in scope for studying experimental phenomena. In this paper, we present a new coarse-grained model of DNA-functionalized particles which incorporates some of the desired features of DNA behavior. The coarse-grained DNA model used here provides explicit DNA representation (at the nucleotide level) and complementary interactions between Watson-Crick base pairs, which lead to the formation of single-stranded hairpins and double-stranded DNA. Aggregation between multiple complementary strands is also prevented in our model. We study interactions between two DNA-functionalized particles as a function of DNA grafting density, the lengths of the hybridizing and non-hybridizing parts of the DNA, and temperature. The calculated free energies as a function of pair distance between particles qualitatively resemble experimental measurements of DNA-mediated pair interactions.
Modeling of Complex Coupled Fluid-Structure Interaction Systems in Arbitrary Water Depth
2008-01-01
model in a particle finite element method (PFEM) based framework for the ALE-RANS solver and submitted a journal paper recently [1]. In the paper, we... developing a fluid-flexible structure interaction model without free surface using ALE-RANS and a k-ε turbulence closure model implemented by PFEM. In... the ALE-RANS and k-ε turbulence closure model based on the particle finite element method (PFEM) and obtained some satisfying results [1-2]. The
Zhang, Peng; Gao, Chao; Zhang, Na; Slepian, Marvin J.; Deng, Yuefan; Bluestein, Danny
2014-01-01
We developed a multiscale particle-based model of platelets to study the transport dynamics of shear stresses between the surrounding fluid and the platelet membrane. This model facilitates a more accurate prediction of the activation potential of platelets by viscous shear stresses, one of the major mechanisms leading to thrombus formation in cardiovascular diseases and in prosthetic cardiovascular devices. The interface of the model couples coarse-grained molecular dynamics (CGMD) with dissipative particle dynamics (DPD). The CGMD handles individual platelets while the DPD models the macroscopic transport of blood plasma in vessels. A hybrid force field is formulated for establishing a functional interface between the platelet membrane and the surrounding fluid, in which the microstructural changes of platelets may respond to the extracellular viscous shear stresses transferred to them. The interaction between the two systems preserves dynamic properties of the flowing platelets, such as the flipping motion. Using this multiscale particle-based approach, we have further studied the effects of the platelet elastic modulus by comparing the action of the flow-induced shear stresses on rigid and deformable platelet models. The results indicate that neglecting platelet deformability may overestimate the stress on the platelet membrane, which in turn may lead to erroneous predictions of platelet activation under viscous shear flow conditions. This particle-based fluid-structure interaction multiscale model offers for the first time a computationally feasible approach for simulating deformable platelets interacting with viscous blood flow, aimed at predicting flow-induced platelet activation by using a highly resolved mapping of the stress distribution on the platelet membrane under dynamic flow conditions. PMID:25530818
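For context, the plasma-flow side of such a model rests on the three standard DPD pair forces (conservative, dissipative, random). The sketch below shows only this generic textbook formulation with illustrative reduced-unit parameters; the CGMD platelet membrane and the hybrid interface force field described above are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kT=1.0, r_c=1.0, dt=0.01):
    """Pair force on particle i from particle j (reduced DPD units)."""
    r = np.linalg.norm(r_ij)
    if r >= r_c or r == 0.0:
        return np.zeros(3)
    e = r_ij / r
    w = 1.0 - r / r_c                                  # weight function
    sigma = np.sqrt(2.0 * gamma * kT)                  # fluctuation-dissipation relation
    f_c = a * w * e                                    # conservative (soft repulsion)
    f_d = -gamma * w**2 * np.dot(e, v_ij) * e          # dissipative (friction)
    f_r = sigma * w * rng.normal() * e / np.sqrt(dt)   # random (thermal noise)
    return f_c + f_d + f_r

print(dpd_pair_force(np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.1, 0.0])))
```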
Microscopic particle-rotor model for the low-lying spectrum of Λ hypernuclei
NASA Astrophysics Data System (ADS)
Mei, H.; Hagino, K.; Yao, J. M.; Motoba, T.
2014-12-01
We propose a novel method for low-lying states of hypernuclei based on the particle-rotor model, in which hypernuclear states are constructed by coupling the hyperon to low-lying states of the core nucleus. In contrast to the conventional particle-rotor model, we employ a microscopic approach for the core states; that is, the generator coordinate method (GCM) with particle number and angular momentum projections. We apply this microscopic particle-rotor model to Λ9Be as an example, employing a point-coupling version of the relativistic mean-field Lagrangian. A reasonable agreement with the experimental data for the low-spin spectrum is achieved using the ΛN coupling strengths determined to reproduce the binding energy of the Λ particle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C
Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm to assess the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation of simulated DNA damage to cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
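The clustering step can be illustrated with scikit-learn's DBSCAN applied to synthetic event coordinates; the fake positions and the eps/min_samples values below are placeholders for illustration, not the parameters used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)

# fake event positions (nm): two tight clusters plus scattered background
cluster_a = rng.normal([10, 10, 10], 1.5, size=(8, 3))
cluster_b = rng.normal([60, 40, 20], 1.5, size=(6, 3))
background = rng.uniform(0, 100, size=(40, 3))
events = np.vstack([cluster_a, cluster_b, background])

labels = DBSCAN(eps=3.2, min_samples=3).fit_predict(events)   # label -1 = noise
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} candidate DSB clusters out of {len(events)} events")
```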
A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations, in which each particle evolves by the standard PSO and each subpopulation is then updated using different local search schemes, namely variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO-based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSP instances taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
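A bare-bones standard PSO update of the kind each subpopulation applies before its local search is sketched below; the inertia and acceleration coefficients are typical textbook values rather than those tuned in the paper, a toy continuous objective stands in for the makespan, and the flow-shop-specific permutation encoding is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a swarm of shape (n_particles, dim)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def sphere(x):                        # toy objective standing in for the makespan
    return np.sum(x**2, axis=1)

x = rng.uniform(-5, 5, size=(30, 10))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_val)]
for _ in range(200):
    x, v = pso_step(x, v, pbest, gbest)
    val = sphere(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]
print(sphere(gbest[None])[0])         # best objective value found
```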
Park, Sung Hee; Min, Sang-Gi; Jo, Yeon-Ji; Chun, Ji-Yeon
2015-01-01
In the dairy industry, natural plant-based powders are widely used to develop flavor and functionality. However, most of these ingredients are water-insoluble; therefore, emulsification is essential. In this study, the efficacy of high pressure homogenization (HPH) on natural plant (chocolate or vanilla)-based model emulsions was investigated. The particle size, electrical conductivity, Brix, pH, and color were analyzed after HPH. HPH significantly decreased the particle size of chocolate-based emulsions as a function of elevated pressures (20-100 MPa). HPH decreased the mean particle size of chocolate-based emulsions from 29.01 μm to 5.12 μm, and that of vanilla-based emulsions from 4.18 μm to 2.44 μm. Electrical conductivity increased as a function of the elevated pressures after HPH for both chocolate- and vanilla-based model emulsions. HPH at 100 MPa increased the electrical conductivity of chocolate-based model emulsions from 0.570 S/m to 0.680 S/m, and that of vanilla-based model emulsions from 0.573 S/m to 0.601 S/m. The increased electrical conductivity is likely attributable to colloidal phase modification and dispersion of oil globules. The Brix of both chocolate- and vanilla-based model emulsions gradually increased as a function of the HPH pressure. Thus, HPH increased the solubility of the plant-based powders by decreasing the particle size. This study demonstrated the potential use of HPH for enhancing the emulsification process and the stability of natural plant powders for applications in dairy products. PMID:26761891
NASA Astrophysics Data System (ADS)
Ali-Akbari, H. R.; Ceballes, S.; Abdelkefi, A.
2017-10-01
A nonlocal continuum-based model is derived to simulate the dynamic behavior of bridged carbon nanotube-based nano-scale mass detectors. The carbon nanotube (CNT) is modeled as an elastic Euler-Bernoulli beam considering von Kármán-type geometric nonlinearity. In order to achieve better accuracy in the characterization of the CNTs, the geometrical properties of an attached nano-scale particle are introduced into the model through its moment of inertia with respect to the central axis of the beam. The inter-atomic long-range interactions within the structure of the CNT are incorporated into the model using Eringen's nonlocal elastic field theory. In this model, the mass can be deposited along an arbitrary length of the CNT. After deriving the full nonlinear equations of motion, the natural frequencies and corresponding mode shapes are extracted based on a linear eigenvalue problem analysis. The results show that the geometry of the attached particle has a significant impact on the dynamic behavior of the CNT-based mechanical resonator, especially for those with small aspect ratios. The developed model and analysis are beneficial for nano-scale mass identification when a CNT-based mechanical resonator is utilized as a small-scale bio-mass sensor and the deposited particles are those, such as proteins, enzymes, cancer cells, DNA and other nano-scale biological objects, with different and complex shapes.
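For reference, Eringen's differential form of the nonlocal constitutive law, and the linear nonlocal Euler-Bernoulli equation that follows from it (written here without the von Kármán and attached-particle terms, so this is only a schematic reduction of the model described), read

$$
\left[1-(e_0 a)^2\nabla^2\right]\sigma_{xx} = E\,\varepsilon_{xx},
\qquad
EI\,\frac{\partial^4 w}{\partial x^4}
+\left[1-(e_0 a)^2\frac{\partial^2}{\partial x^2}\right]
\left(\rho A\,\frac{\partial^2 w}{\partial t^2}-q\right)=0,
$$

where $e_0 a$ is the nonlocal length-scale parameter, $EI$ the bending rigidity, $\rho A$ the mass per unit length, $w$ the transverse deflection, and $q$ the distributed load; in the full model the attached particle enters through additional inertia terms over the length it occupies.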
A new hybrid particle/fluid model for cometary dust
NASA Astrophysics Data System (ADS)
Shou, Y.; Combi, M. R.; Tenishev, V.; Toth, G.; Hansen, K. C.; Huang, Z.; Gombosi, T. I.; Fougere, N.; Rubin, M.
2017-12-01
Cometary dust grains are believed to contain clues to the formation and evolution of comets. They also play an important role in shaping the cometary environment, as they are able to decelerate and heat the gas through collisions, carry charges and interact with the plasma environment, and possibly sublimate gases. Therefore, the loss rate and behavior of dust grains are of interest to scientists. Currently, two main types of numerical dust model exist: particle models and fluid models. Particle models, which keep track of the positions and velocities of all gas and dust particles, allow crossing dust trajectories and a more accurate description of returning dust grains than fluid models. However, in order to compute the gas drag force, the particle model needs to follow more gas particles than dust particles. A fluid model is usually more computationally efficient and is often used to provide simulations on larger spatial and temporal scales. In this work, a new hybrid model is developed to combine the advantages of both particle and fluid models. In the new approach a fluid model based on the University of Michigan BATSRUS code computes the gas properties and feeds the gas drag force to the particle model, which is based on the Adaptive Mesh Particle Simulator (AMPS) code, to calculate the motion of dust grains. The coupling is done via the Space Weather Modeling Framework (SWMF). In addition to the capability of simulating long-term dust phenomena, the model can also designate small active regions on the nucleus for comparison with transient fine dust features in observations. With the assistance of the newly developed model, the effect of viewing angles on observed dust jet shapes and the transport of heavy dust grains from the southern to the northern hemisphere of comet 67P/Churyumov-Gerasimenko will be studied and compared with Rosetta mission images. Preliminary results will be presented. Support from contracts JPL #1266314 and #1266313 from the US Rosetta Project and grant NNX14AG84G from the NASA Planetary Atmospheres Program is gratefully acknowledged.
Atmospheric fate and transport of fine volcanic ash: Does particle shape matter?
NASA Astrophysics Data System (ADS)
White, C. M.; Allard, M. P.; Klewicki, J.; Proussevitch, A. A.; Mulukutla, G.; Genareau, K.; Sahagian, D. L.
2013-12-01
Volcanic ash presents hazards to infrastructure, agriculture, and human and animal health. In particular, given the economic importance of intercontinental aviation, understanding how long ash is suspended in the atmosphere, and how far it is transported has taken on greater importance. Airborne ash abrades the exteriors of aircraft, enters modern jet engines and melts while coating interior engine parts causing damage and potential failure. The time fine ash stays in the atmosphere depends on its terminal velocity. Existing models of ash terminal velocities are based on smooth, quasi-spherical particles characterized by Stokes velocity. Ash particles, however, violate the various assumptions upon which Stokes flow and associated models are based. Ash particles are non-spherical and can have complex surface and internal structure. This suggests that particle shape may be one reason that models fail to accurately predict removal rates of fine particles from volcanic ash clouds. The present research seeks to better parameterize predictive models for ash particle terminal velocities, diffusivity, and dispersion in the atmospheric boundary layer. The fundamental hypothesis being tested is that particle shape irreducibly impacts the fate and transport properties of fine volcanic ash. Pilot studies, incorporating modeling and experiments, are being conducted to test this hypothesis. Specifically, a statistical model has been developed that can account for actual volcanic ash size distributions, complex ash particle geometry, and geometry variability. Experimental results are used to systematically validate and improve the model. The experiments are being conducted at the Flow Physics Facility (FPF) at UNH. Terminal velocities and dispersion properties of fine ash are characterized using still air drop experiments in an unconstrained open space using a homogenized mix of source particles. Dispersion and sedimentation dynamics are quantified using particle image velocimetry (PIV). Scanning Electron Microscopy (SEM) of ash particles collected in localized deposition areas is used to correlate the PIV results to particle shape. In addition, controlled wind tunnel experiments are used to determine particle fate and transport in a turbulent boundary layer for a mixed particle population. Collectively, these studies will provide an improved understanding of the effects of particle shape on sedimentation and dispersion, and foundational data for the predictive modeling of the fate and transport of fine ash particles suspended in the atmosphere.
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
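Schematically, the method of moments works with integral moments of the grain-size distribution rather than with discrete size classes; a generic statement of the quantities involved (not the paper's full equation set) is

$$
M^{(k)}(s)=\int_{-\infty}^{+\infty}\phi^{k}\,n(\phi,s)\,\mathrm{d}\phi ,\qquad k=0,1,2,\ldots
$$

where $\phi$ is the Krumbein grain size and $n(\phi,s)$ the number (or mass) distribution at position $s$ along the plume axis. The plume equations are closed by transporting a finite set of such moments, and the mean and standard deviation reported at the plume top then follow from $\mu = M^{(1)}/M^{(0)}$ and $\sigma^{2}=M^{(2)}/M^{(0)}-\mu^{2}$.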
ERIC Educational Resources Information Center
Hirsh, Alon; Levy, Sharona T.
2013-01-01
The present research addresses a curious finding: how learning physical principles enhanced athletes' biking performance but not their conceptual understanding. The study involves a model-based triathlon training program, Biking with Particles, concerning aerodynamics of biking in groups (drafting). A conceptual framework highlights several…
SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure
Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.
2017-01-01
A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341
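Schematically, with a particle equation of motion $\mathrm{d}v/\mathrm{d}t = f(Re)\,(u-v)/\tau_p$, decomposing each quantity into a cloud mean and a fluctuation and truncating a Taylor expansion of the drag correction $f$ about the mean slip Reynolds number gives, as a generic illustration of the expansion-plus-averaging step (not the paper's full derivation),

$$
\frac{\mathrm{d}\bar v}{\mathrm{d}t}\;\approx\;\frac{1}{\tau_p}\Big[f(\overline{Re})\,(\bar u-\bar v)\;+\;f'(\overline{Re})\,\overline{Re'\,(u'-v')}\Big],
$$

so the mean cloud motion depends both on the carrier velocity averaged over the cloud and on subgrid particle/turbulence correlations, which is what the a priori closure supplies.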
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. First, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Second, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, with accuracy sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes.
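The abstract does not give the kernel or solver used; the sketch below is a minimal least-squares SVM regressor in the standard Suykens dual formulation with an RBF kernel, assuming synthetic features and hypothetical hyperparameters (gamma, sigma).

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma**2))                # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:], X, sigma                  # bias, dual weights, training data

def lssvm_predict(model, Xnew):
    b, alpha, Xtr, sigma = model
    d2 = np.sum((Xnew[:, None, :] - Xtr[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2)) @ alpha + b

# Toy example: map two acoustic features to a particle concentration
rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 2))
y = 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1]) + 0.05 * rng.normal(size=40)
model = lssvm_fit(X, y)
print(lssvm_predict(model, X[:5]))
```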
A variational multiscale method for particle-cloud tracking in turbomachinery flows
NASA Astrophysics Data System (ADS)
Corsini, A.; Rispoli, F.; Sheard, A. G.; Takizawa, K.; Tezduyar, T. E.; Venturini, P.
2014-11-01
We present a computational method for simulation of particle-laden flows in turbomachinery. The method is based on a stabilized finite element fluid mechanics formulation and a finite element particle-cloud tracking method. We focus on induced-draft fans used in process industries to extract exhaust gases in the form of a two-phase fluid with a dispersed solid phase. The particle-laden flow causes material wear on the fan blades, degrading their aerodynamic performance, and therefore accurate simulation of the flow is essential for reliable computational turbomachinery analysis and design. The turbulent-flow nature of the problem is handled with a Reynolds-Averaged Navier-Stokes model and Streamline-Upwind/Petrov-Galerkin/Pressure-Stabilizing/Petrov-Galerkin stabilization; the particle-cloud trajectories are calculated based on the flow field and closure models for the turbulence-particle interaction, and one-way dependence is assumed between the flow field and particle dynamics. We propose a closure model utilizing the scale separation feature of the variational multiscale method, and compare that to the closure utilizing the eddy viscosity model. We present computations for axial- and centrifugal-fan configurations, and compare the computed data to those obtained from experiments, analytical approaches, and other computational methods.
Rotating states of self-propelling particles in two dimensions.
Chen, Hsuan-Yi; Leung, Kwan-Tai
2006-05-01
We present particle-based simulations and a continuum theory for steady rotating flocks formed by self-propelling particles (SPPs) in two-dimensional space. Our models include realistic but simple rules for the self-propelling, drag, and interparticle interactions. Among other coherent structures, in particle-based simulations we find steady rotating flocks when the velocity of the particles lacks long-range alignment. Physical characteristics of the rotating flock are measured and discussed. We construct a phenomenological continuum model and seek steady-state solutions for a rotating flock. We show that the velocity and density profiles become simple in two limits. In the limit of weak alignment, we find that all particles move with the same speed and the density of particles vanishes near the center of the flock due to the divergence of centripetal force. In the limit of strong body force, the density of particles within the flock is uniform and the velocity of the particles close to the center of the flock becomes small.
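The paper's exact propulsion, drag and interaction rules are not reproduced here; the sketch below is a generic self-propelled-particle update with self-propulsion toward a preferred speed, weak velocity alignment, attraction to the flock centroid, and rotational noise, which can settle into rotating (milling) states for some parameter choices. All rules and parameters are illustrative assumptions.

```python
import numpy as np

def spp_step(pos, vel, dt=0.05, v0=1.0, align=0.1, attract=0.5, noise=0.05):
    """One update of a minimal self-propelled-particle model: drive toward
    speed v0, weak velocity alignment, attraction to the flock centroid,
    and rotational noise (illustrative rules, not the paper's)."""
    n = len(pos)
    center = pos.mean(axis=0)
    mean_vel = vel.mean(axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True) + 1e-12
    force = ((v0 - speed) * vel / speed          # self-propulsion and drag
             + align * (mean_vel - vel)          # weak alignment
             + attract * (center - pos))         # body force toward the centroid
    vel = vel + dt * force
    theta = noise * np.sqrt(dt) * np.random.standard_normal(n)
    c, s = np.cos(theta), np.sin(theta)          # random small rotations
    vel = np.stack([c * vel[:, 0] - s * vel[:, 1],
                    s * vel[:, 0] + c * vel[:, 1]], axis=1)
    return pos + dt * vel, vel

pos = np.random.uniform(-1.0, 1.0, size=(200, 2))
vel = np.random.normal(size=(200, 2))
for _ in range(2000):
    pos, vel = spp_step(pos, vel)
rel = pos - pos.mean(axis=0)
L_z = np.mean(rel[:, 0] * vel[:, 1] - rel[:, 1] * vel[:, 0])
print(L_z)   # nonzero mean angular momentum signals a rotating (milling) state
```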
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to that of the target. Hence, search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish-tank sequences.
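For reference, a minimal bootstrap particle filter of the kind the authors use as a baseline is sketched below for a 2-D random-walk target observed in Gaussian noise; the segmentation-driven importance sampling itself is not reproduced, and the motion and noise parameters are assumptions.

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, q=0.5, r=1.0):
    """Bootstrap particle filter for a 2-D random-walk target observed
    in Gaussian noise (the simple baseline the paper improves upon)."""
    rng = np.random.default_rng(1)
    particles = rng.normal(observations[0], r, size=(n_particles, 2))
    estimates = []
    for z in observations:
        # prediction: propagate particles with the (trivial) motion model
        particles = particles + rng.normal(0.0, q, size=particles.shape)
        # correction: weight particles by the observation likelihood
        w = np.exp(-np.sum((z - particles)**2, axis=1) / (2.0 * r**2))
        w /= w.sum()
        estimates.append(w @ particles)
        # multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# synthetic trajectory standing in for fish positions in image coordinates
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.5, size=(100, 2)), axis=0)
obs = truth + rng.normal(0.0, 1.0, size=truth.shape)
print(np.mean(np.linalg.norm(bootstrap_pf(obs) - truth, axis=1)))  # mean tracking error
```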
NASA Astrophysics Data System (ADS)
Igarashi, Akito; Tsukamoto, Shinji
2000-02-01
Biological molecular motors drive unidirectional transport and transduce chemical energy to mechanical work. In order to understand this energy conversion, which is a common feature of molecular motors, many workers have studied various physical models consisting of Brownian particles in spatially periodic potentials. Most of the models are, however, based on "single-particle" dynamics and are too simple as models for biological motors, especially for actin-myosin motors, which cause muscle contraction. In this paper, particles coupled by elastic strings in an asymmetric periodic potential are considered as a model for the motors. We investigate the dynamics of the model and calculate the efficiency of energy conversion using a molecular dynamics method. In particular, we find that when the natural length of the springs is incommensurate with the period of the potential, the velocity and efficiency of the elastically coupled particles are larger than those of the corresponding single-particle model.
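The paper's potential shape and driving mechanism are not specified in the abstract; the sketch below integrates overdamped Langevin dynamics for a chain of elastically coupled particles in an asymmetric (ratchet) potential, with a constant tilt standing in for the chemical energy input. Potential, spring constant, temperature, and natural length are illustrative assumptions, with the natural length chosen incommensurate with the potential period.

```python
import numpy as np

def ratchet_force(x, U0=1.0, L=1.0):
    """Force from an asymmetric periodic potential
    U(x) = U0 * (sin(2*pi*x/L) + 0.25*sin(4*pi*x/L)), a common ratchet form."""
    k = 2.0 * np.pi / L
    return -U0 * (k * np.cos(k * x) + 0.5 * k * np.cos(2.0 * k * x))

def simulate(n=10, a=0.7, kspring=5.0, T=0.5, F_drive=0.3, dt=1e-3, steps=100_000):
    """Overdamped dynamics of a chain of elastically coupled particles;
    the natural spring length a is incommensurate with the potential period,
    and a constant tilt F_drive stands in for the chemical energy input."""
    rng = np.random.default_rng(0)
    x = np.arange(n) * a
    for _ in range(steps):
        ext = x[1:] - x[:-1] - a                   # extensions of the springs
        spring = np.zeros(n)
        spring[:-1] += kspring * ext               # pull from the right neighbor
        spring[1:] -= kspring * ext                # pull from the left neighbor
        f = ratchet_force(x) + spring + F_drive
        x = x + dt * f + np.sqrt(2.0 * T * dt) * rng.standard_normal(n)
    return x

x_final = simulate()
print((x_final - np.arange(10) * 0.7).mean())      # net displacement of the chain
```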
White, Angelicque E; Letelier, Ricardo M; Whitmire, Amanda L; Barone, Benedetto; Bidigare, Robert R; Church, Matthew J; Karl, David M
2015-11-01
The particle size distribution (PSD) is a critical aspect of the oceanic ecosystem. Local variability in the PSD can be indicative of shifts in microbial community structure and reveal patterns in cell growth and loss. The PSD also plays a central role in particle export by influencing settling speed. Satellite-based models of primary productivity (PP) often rely on aspects of photophysiology that are directly related to community size structure. In an effort to better understand how variability in particle size relates to PP in an oligotrophic ecosystem, we collected laser diffraction-based depth profiles of the PSD and pigment-based classifications of phytoplankton functional types (PFTs) on an approximately monthly basis at the Hawaii Ocean Time-series Station ALOHA, in the North Pacific subtropical gyre. We found a relatively stable PSD in the upper water column. However, clear seasonality is apparent in the vertical distribution of distinct particle size classes. Neither laser diffraction-based estimations of relative particle size nor pigment-based PFTs was found to be significantly related to the rate of 14C-based PP in the light-saturated upper euphotic zone. This finding indicates that satellite retrievals of particle size, based on particle scattering or ocean color, would not improve parameterizations of present-day bio-optical PP models for this region. However, at depths of 100-125 m where irradiance exerts strong control on PP, we do observe a significant linear relationship between PP and the estimated carbon content of 2-20 μm particles.
Caccamo, M; Ferguson, J D; Veerkamp, R F; Schadt, I; Petriglieri, R; Azzaro, G; Pozzebon, A; Licitra, G
2014-01-01
As part of a larger project aiming to develop management evaluation tools based on results from test-day (TD) models, the objective of this study was to examine the effect of physical composition of total mixed rations (TMR) tested quarterly from March 2006 through December 2008 on milk, fat, and protein yield curves for 25 herds in Ragusa, Sicily. A random regression sire-maternal grandsire model was used to estimate variance components for milk, fat, and protein yields fitted on a full data set, including 241,153 TD records from 9,809 animals in 42 herds recorded from 1995 through 2008. The model included parity, age at calving, year at calving, and stage of pregnancy as fixed effects. Random effects were herd × test date, sire and maternal grandsire additive genetic effect, and permanent environmental effect modeled using third-order Legendre polynomials. Model fitting was carried out using ASREML. Afterward, for the 25 herds involved in the study, 9 particle size classes were defined based on the proportions of TMR particles on the top (19-mm) and middle (8-mm) screen of the Penn State Particle Separator. Subsequently, the model with estimated variance components was used to examine the influence of TMR particle size class on milk, fat, and protein yield curves. An interaction between particle size class and days in milk was included. The effect of the TMR particle size class was modeled using a ninth-order Legendre polynomial. Lactation curves were predicted from the model while controlling for TMR chemical composition (crude protein content of 15.5%, neutral detergent fiber of 40.7%, and starch of 19.7% for all classes), to obtain estimates of particle distribution effects not confounded by the nutrient content of the TMR. We found little effect of particle proportion class on milk yield and fat yield curves. Protein yield was greater for sieve classes with 10.4 to 17.4% of TMR particles retained on the top (19-mm) sieve. Optimal distributions different from those recommended may reflect regional differences based on climate and types and quality of forages fed.
Statistically Based Morphodynamic Modeling of Tracer Slowdown
NASA Astrophysics Data System (ADS)
Borhani, S.; Ghasemi, A.; Hill, K. M.; Viparelli, E.
2017-12-01
Tracer particles are used to study bedload transport in gravel-bed rivers. One of the advantages of using tracer particles is that they allow for direct measurement of entrainment rates and their size distributions. The main issue in large-scale studies with tracer particles is the difference between the short-term and long-term behavior of tracer stones. This difference is due to the fact that particles undergo vertical mixing or move to less active locations such as bars or even floodplains. For these reasons the average virtual velocity of tracer particles decreases in time, i.e., the tracer slowdown. Tracer slowdown can thus have a significant impact on the estimation of bedload transport rates or the long-term dispersal of contaminated sediment. The vast majority of morphodynamic models that account for the non-uniformity of the bed material (tracer and non-tracer, in this case) are based on a discrete description of the alluvial deposit. The deposit is divided into two regions: the active layer and the substrate. The active layer is a thin layer in the topmost part of the deposit whose particles can interact with the bed material transport. The substrate is the part of the deposit below the active layer. Due to the discrete representation of the alluvial deposit, active layer models are not able to reproduce tracer slowdown. In this study we attempt to model the slowdown of tracer particles with the continuous Parker-Paola-Leclair morphodynamic framework. This continuous, i.e., not layer-based, framework is based on a stochastic description of the temporal variation of bed surface elevation, and of the elevation-specific particle entrainment and deposition. Particle entrainment rates are computed as a function of the flow and sediment characteristics, while particle deposition is estimated with a step length formulation. Here we present one of the first implementations of the continuum framework at laboratory scale and its validation against laboratory data; we then use the validated model to describe the long-term tracer slowdown.
NASA Astrophysics Data System (ADS)
Harrington, J. Y.
2017-12-01
Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely include small ice crystals, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shape and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall lines show substantial sensitivity to predicted particle properties: the more natural evolution of ice crystals during riming produces graupel-like particles with the sizes and fall speeds required for the formation of a classic transition zone and an extended stratiform precipitation region.
NASA Astrophysics Data System (ADS)
Jia, L. Y.
2016-06-01
The particle-hole symmetry (equivalence) of the full shell-model Hilbert space is straightforward and routinely used in practical calculations. In this work I show that this symmetry is preserved in the subspace truncated up to a certain generalized seniority and give the explicit transformation between the states in the two types (particle and hole) of representations. Based on the results, I study particle-hole symmetry in popular theories that could be regarded as further truncations on top of the generalized seniority, including the microscopic interacting boson (fermion) model, the nucleon-pair approximation, and other models.
Asymptotic stability of spectral-based PDF modeling for homogeneous turbulent flows
NASA Astrophysics Data System (ADS)
Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2015-11-01
Engineering models of turbulence, based on one-point statistics, neglect spectral information inherent in a turbulence field. It is well known, however, that the evolution of turbulence is dictated by a complex interplay between the spectral modes of velocity. For example, for homogeneous turbulence, the pressure-rate-of-strain depends on the integrated energy spectrum weighted by components of the wave vectors. The Interacting Particle Representation Model (IPRM) (Kassinos & Reynolds, 1996) and the Velocity/Wave-Vector PDF model (Van Slooten & Pope, 1997) emulate spectral information in an attempt to improve the modeling of turbulence. We investigate the evolution and asymptotic stability of the IPRM using three different approaches. The first approach considers the Lagrangian evolution of individual realizations (idealized as particles) of the stochastic process defined by the IPRM. The second solves Lagrangian evolution equations for clusters of realizations conditional on a given wave vector. The third evolves the solution of the Eulerian conditional PDF corresponding to the aforementioned clusters. This last method avoids issues related to discrete particle noise and slow convergence associated with Lagrangian particle-based simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokaltun, Seckin; Munroe, Norman; Subramaniam, Shankar
2014-12-31
This study presents a new drag model, based on the cohesive inter-particle forces, implemented in the MFIX code. This new drag model combines an existing standard model in MFIX with a particle-based drag model based on a switching principle. Switches between the models in the computational domain occur where strong particle-to-particle cohesion potential is detected. Three versions of the new model were obtained by using one standard drag model in each version. The performance of each version was then compared against available experimental data for a fluidized bed, published in the literature and used extensively by other researchers for validation purposes. In our analysis of the results, we first observed that the standard models used in this research were incapable of producing closely matching results. We then showed for a simple case that a threshold needs to be set on the solid volume fraction. This modification was applied to avoid non-physical clustering predictions when the governing equation of the solid granular temperature was solved. We then applied our hybrid technique and observed that it improved the numerical results significantly; however, the improvement depended on the threshold of the cohesive index used in the switching procedure. Our results showed that small values of the cohesive-index threshold could significantly reduce the computational error for all versions of the proposed drag model. In addition, we redesigned an existing circulating fluidized bed (CFB) test facility in order to create validation cases for the clustering regime of Geldart A type particles.
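The MFIX implementation details are not given in the abstract; the sketch below only illustrates the switching idea: use a standard drag correlation by default and a (hypothetical) cohesion-corrected drag wherever a cohesive index exceeds a threshold and solids are present. The correlation form, reduction factor, and threshold values are assumptions, not the authors' model.

```python
import numpy as np

def standard_drag(eps_g, Re):
    """Schematic standard gas-solid drag function (Schiller-Naumann-type Cd
    with a Wen & Yu-style voidage correction); not the exact MFIX form."""
    Cd = np.where(Re < 1000.0,
                  24.0 / np.maximum(Re, 1e-8) * (1.0 + 0.15 * Re**0.687),
                  0.44)
    return 0.75 * Cd * Re * eps_g**(-2.65)

def cohesive_drag(eps_g, Re, reduction=0.5):
    """Hypothetical particle-based drag for cohesive clusters: particles in
    clusters feel reduced drag, represented by a simple reduction factor."""
    return reduction * standard_drag(eps_g, Re)

def hybrid_drag(eps_g, Re, cohesive_index, threshold=0.8, eps_s_min=1e-3):
    """Switch to the cohesion-corrected drag only where strong
    particle-to-particle cohesion is detected and solids are present."""
    eps_s = 1.0 - eps_g
    use_cohesive = (cohesive_index > threshold) & (eps_s > eps_s_min)
    return np.where(use_cohesive, cohesive_drag(eps_g, Re), standard_drag(eps_g, Re))

print(hybrid_drag(np.array([0.6, 0.9]), np.array([50.0, 50.0]),
                  cohesive_index=np.array([0.95, 0.1])))
```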
The Aerosol Models in MODTRAN: Incorporating Selected Measurements From Northern Australia
2005-12-01
... biomass burning smoke aerosol is modelled assuming the particles are spherical and Mie scattering theory is used to calculate the extinction and ... and therefore internally mixed aerosol particles are hygroscopic. Shettle and Fenn model the growth in the size of aerosol particles and changes in ... by Sutherland and Khanna [21] was to obtain measurements of the optical properties of organic-based aerosols produced by burning vegetation.
Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo
NASA Astrophysics Data System (ADS)
Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik
2018-05-01
Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
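A minimal particle Metropolis-Hastings sketch is given below for a toy linear-Gaussian state-space model: a bootstrap particle filter supplies a likelihood estimate that drives a random-walk Metropolis chain over a single parameter theta. The model, the flat prior, and the tuning constants are assumptions for illustration, not the tutorial's example.

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate_data(theta, T=100):
    """Toy state-space model: x_t = theta*x_{t-1} + v_t, y_t = x_t + e_t."""
    x, ys = 0.0, []
    for _ in range(T):
        x = theta * x + rng.normal(0.0, 1.0)
        ys.append(x + rng.normal(0.0, 1.0))
    return np.array(ys)

def pf_loglik(theta, ys, N=200):
    """Bootstrap particle filter estimate of log p(y | theta) (up to a constant)."""
    x = rng.normal(0.0, 1.0, N)
    ll = 0.0
    for y in ys:
        x = theta * x + rng.normal(0.0, 1.0, N)   # propagate particles
        logw = -0.5 * (y - x)**2                  # Gaussian observation density
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                # log of the average weight
        x = x[rng.choice(N, N, p=w / w.sum())]    # multinomial resampling
    return ll

def particle_mh(ys, n_iter=500, step=0.05):
    """Random-walk Metropolis over theta, driven by the particle filter likelihood."""
    theta, ll = 0.5, pf_loglik(0.5, ys)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = pf_loglik(prop, ys)
        if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject (flat prior assumed)
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

ys = simulate_data(0.8)
print(particle_mh(ys).mean())   # posterior mean estimate of theta (burn-in included)
```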
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables can be estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
A New Self-Consistent Field Model of Polymer/Nanoparticle Mixture
NASA Astrophysics Data System (ADS)
Chen, Kang; Li, Hui-Shu; Zhang, Bo-Kai; Li, Jian; Tian, Wen-De
2016-02-01
Field-theoretical methods are efficient in predicting the assembled structures of polymeric systems. However, it is challenging to generalize these methods to polymer/nanoparticle mixtures due to their multi-scale nature. Here, we develop a new field-based model which unifies the nanoparticle description with the polymer field within the self-consistent field theory. Instead of being an “ensemble-averaged” continuous distribution, the particle density in the final morphology can represent individual particles located at preferred positions. The discreteness of the particle density allows our model to properly address the polymer-particle interface and the excluded-volume interaction. We use this model to study the simplest system of nanoparticles immersed in a dense homopolymer solution. The flexibility of tuning the interfacial details allows our model to capture rich phenomena such as bridging aggregation and depletion attraction. Insights are obtained on the enthalpic and/or entropic origin of the structural variation due to the competition between depletion and interfacial interaction. This approach is readily extendable to the study of more complex polymer-based nanocomposites or biology-related systems, such as dendrimer/drug encapsulation and membrane/particle assembly.
NASA Technical Reports Server (NTRS)
Olson, William S.; Bauer, Peter; Viltard, Nicolas F.; Johnson, Daniel E.; Tao, Wei-Kuo
2000-01-01
In this study, a 1-D steady-state microphysical model which describes the vertical distribution of melting precipitation particles is developed. The model is driven by the ice-phase precipitation distributions just above the freezing level at applicable gridpoints of "parent" 3-D cloud-resolving model (CRM) simulations. It extends these simulations by providing the number density and meltwater fraction of each particle in finely separated size categories through the melting layer. The depth of the modeled melting layer is primarily determined by the initial material density of the ice-phase precipitation. The radiative properties of melting precipitation at microwave frequencies are calculated based upon different methods for describing the dielectric properties of mixed phase particles. Particle absorption and scattering efficiencies at the Tropical Rainfall Measuring Mission Microwave Imager frequencies (10.65 to 85.5 GHz) are enhanced greatly for relatively small (approx. 0.1) meltwater fractions. The relatively large number of partially-melted particles just below the freezing level in stratiform regions leads to significant microwave absorption, well-exceeding the absorption by rain at the base of the melting layer. Calculated precipitation backscatter efficiencies at the Precipitation Radar frequency (13.8 GHz) increase in proportion to the particle meltwater fraction, leading to a "bright-band" of enhanced radar reflectivities in agreement with previous studies. The radiative properties of the melting layer are determined by the choice of dielectric models and the initial water contents and material densities of the "seeding" ice-phase precipitation particles. Simulated melting layer profiles based upon snow described by the Fabry-Szyrmer core-shell dielectric model and graupel described by the Maxwell-Garnett water matrix dielectric model lead to reasonable agreement with radar-derived melting layer optical depth distributions. Moreover, control profiles that do not contain mixed-phase precipitation particles yield optical depths that are systematically lower than those observed. Therefore, the use of the melting layer model to extend 3-D CRM simulations appears justified, at least until more realistic spectral methods for describing melting precipitation in high-resolution, 3-D CRM's are implemented.
NASA Astrophysics Data System (ADS)
Iwasaki, Toshiki; Nelson, Jonathan; Shimizu, Yasuyuki; Parker, Gary
2017-04-01
Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.
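To make the flux-based formulation concrete, the sketch below advances the 1-D Exner equation d(eta)/dt = -(1/(1-lambda_p)) d(q_b)/dx with a toy slope-based bed-load closure on a periodic reach; as the abstract notes, such a deterministic 1-D flux-based model produces no tracer dispersion. The closure and all parameter values are assumptions for illustration.

```python
import numpy as np

def exner_step(eta, dx=0.5, dt=0.05, lam_p=0.4, alpha=0.05, n=1.5):
    """One explicit step of the flux-based Exner equation
    d(eta)/dt = -1/(1 - lam_p) * d(q_b)/dx, with a toy bed-load closure
    q_b = alpha * S**n based on the local downstream slope S."""
    slope = -(np.roll(eta, -1) - eta) / dx        # downstream bed slope (periodic)
    qb = alpha * np.clip(slope, 0.0, None)**n     # transport only where the bed falls
    dqb_dx = (qb - np.roll(qb, 1)) / dx           # upwind divergence of the flux
    return eta - dt * dqb_dx / (1.0 - lam_p)

# evolve an initial bed bump on a periodic reach
x = np.linspace(0.0, 100.0, 200)
eta = 0.5 * np.exp(-((x - 50.0) / 5.0)**2)
for _ in range(5000):
    eta = exner_step(eta, dx=x[1] - x[0])
print(eta.max())   # the bump erodes downstream; total sediment is conserved
```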
Fate and Transport of Nanoparticles in Porous Media: A Numerical Study
NASA Astrophysics Data System (ADS)
Taghavy, Amir
Understanding the transport characteristics of NPs in natural soil systems is essential to revealing their potential impact on the food chain and groundwater. In addition, many nanotechnology-based remedial measures require effective transport of NPs through soil, which necessitates accurate understanding of their transport and retention behavior. Based upon the conceptual knowledge of environmental behavior of NPs, mathematical models can be developed to represent the coupling of processes that govern the fate of NPs in subsurface, serving as effective tools for risk assessment and/or design of remedial strategies. This work presents an innovative hybrid Eulerian-Lagrangian modeling technique for simulating the simultaneous reactive transport of nanoparticles (NPs) and dissolved constituents in porous media. Governing mechanisms considered in the conceptual model include particle-soil grain, particle-particle, particle-dissolved constituents, and particle-oil/water interface interactions. The main advantage of this technique, compared to conventional Eulerian models, lies in its ability to address non-uniformity in physicochemical particle characteristics. The developed numerical simulator was applied to investigate the fate and transport of NPs in a number of practical problems relevant to the subsurface environment. These problems included: (1) reductive dechlorination of chlorinated solvents by zero-valent iron nanoparticles (nZVI) in dense non-aqueous phase liquid (DNAPL) source zones; (2) reactive transport of dissolving silver nanoparticles (nAg) and the dissolved silver ions; (3) particle-particle interactions and their effects on the particle-soil grain interactions; and (4) influence of particle-oil/water interface interactions on NP transport in porous media.
NASA Technical Reports Server (NTRS)
Stefanescu, Doru M.; Moitra, Avijit; Kacar, A. Sedat; Dhindaw, Brij K.
1990-01-01
Directional solidification experiments in a Bridgman-type furnace were used to study particle behavior at the liquid/solid interface in aluminum metal matrix composites. Graphite or silicon-carbide particles were first dispersed in aluminum-base alloys via a mechanically stirred vortex. Then, 100-mm-diameter and 120-mm-long samples were cast in steel dies and used for directional solidification. The processing variables controlled were the direction and velocity of solidification and the temperature gradient at the interface. The material variables monitored were the interface energy, the liquid/particle density difference, the particle/liquid thermal conductivity ratio, and the volume fraction of particles. These properties were changed by selecting combinations of particles (graphite or silicon carbide) and alloys (Al-Cu, Al-Mg, Al-Ni). A model which considers process thermodynamics, process kinetics (including the role of buoyant forces), and thermophysical properties was developed. Based on solidification direction and velocity, and on materials properties, four types of behavior were predicted. Sessile drop experiments were also used to determine some of the interface energies required in calculations with the proposed model. Experimental results compared favorably with model predictions.
A nephron-based model of the kidneys for macro-to-micro α-particle dosimetry
NASA Astrophysics Data System (ADS)
Hobbs, Robert F.; Song, Hong; Huso, David L.; Sundel, Margaret H.; Sgouros, George
2012-07-01
Targeted α-particle therapy is a promising treatment modality for cancer. Due to the short path-length of α-particles, the potential efficacy and toxicity of these agents are best evaluated by microscale dosimetry calculations instead of whole-organ, absorbed fraction-based dosimetry. Yet time-integrated activity (TIA), the necessary input for dosimetry, can still only be quantified reliably at the organ or macroscopic level. We describe a nephron- and cellular-based kidney dosimetry model for α-particle radiopharmaceutical therapy, more suited to the short range and high linear energy transfer of α-particle emitters, which takes as input kidney or cortex TIA and through a macro-to-micro model-based methodology assigns TIA to micro-level kidney substructures. We apply a geometrical model to provide nephron-level S-values for a range of isotopes, allowing for pre-clinical and clinical applications according to the medical internal radiation dosimetry (MIRD) schema. We assume that the relationship between whole-organ TIA and TIA apportioned to microscale substructures as measured in an appropriate pre-clinical mammalian model also applies to the human. In both the pre-clinical and the human model, microscale substructures are described as a collection of simple geometrical shapes akin to those used in the Cristy-Eckerman phantoms for normal organs. Anatomical parameters are taken from the literature for a human model, while murine parameters are measured ex vivo. The murine histological slides also provide the data for volume of occupancy of the different compartments of the nephron in the kidney: glomerulus versus proximal tubule versus distal tubule. Monte Carlo simulations are run with activity placed in the different nephron compartments for several α-particle emitters currently under investigation in radiopharmaceutical therapy. The S-values were calculated for the α-emitters and their descendants between the different nephron compartments for both the human and murine models. The renal cortex and medulla S-values were also calculated and the results compared to traditional absorbed fraction calculations. The nephron model enables a more optimal implementation of treatment and is a critical step in understanding toxicity for human translation of targeted α-particle therapy. The S-values established here will enable a MIRD-type application of α-particle dosimetry for α-emitters, i.e. measuring the TIA in the kidney (or renal cortex) will provide meaningful and accurate nephron-level dosimetry.
A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime
NASA Astrophysics Data System (ADS)
Sciarretta, Antonio
2018-01-01
This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime in the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated with the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly-prepared ensembles of particles. The scenarios considered to assess the model comprise the free particle, constant external force, the harmonic oscillator, the particle in a box, the Delta potential, the particle on a ring, and the particle on a sphere, and include quantization of energy levels and angular momentum, as well as momentum entanglement.
2010-01-01
Background The difficulty of directly measuring cellular dose is a significant obstacle to application of target tissue dosimetry for nanoparticle and microparticle toxicity assessment, particularly for in vitro systems. As a consequence, the target tissue paradigm for dosimetry and hazard assessment of nanoparticles has largely been ignored in favor of using metrics of exposure (e.g. μg particle/mL culture medium, particle surface area/mL, particle number/mL). We have developed a computational model of solution particokinetics (sedimentation, diffusion) and dosimetry for non-interacting spherical particles and their agglomerates in monolayer cell culture systems. Particle transport to cells is calculated by simultaneous solution of Stokes Law (sedimentation) and the Stokes-Einstein equation (diffusion). Results The In vitro Sedimentation, Diffusion and Dosimetry model (ISDD) was tested against measured transport rates or cellular doses for multiple sizes of polystyrene spheres (20-1100 nm), 35 nm amorphous silica, and large agglomerates of 30 nm iron oxide particles. Overall, without adjusting any parameters, model predicted cellular doses were in close agreement with the experimental data, differing by as little as 5% to as much as three-fold, but in most cases by approximately two-fold, within the limits of the accuracy of the measurement systems. Applying the model, we generalize the effects of particle size, particle density, agglomeration state and agglomerate characteristics on target cell dosimetry in vitro. Conclusions Our results confirm our hypothesis that for liquid-based in vitro systems, the dose-rates and target cell doses for all particles are not equal; they can vary significantly, in direct contrast to the assumption of dose-equivalency implicit in the use of mass-based media concentrations as metrics of exposure for dose-response assessment. The difference between equivalent nominal media concentration exposures on a μg/mL basis and target cell doses on a particle surface area or number basis can be as high as three to six orders of magnitude. As a consequence, in vitro hazard assessments utilizing mass-based exposure metrics have inherently high errors where particle number or surface area based target cell doses are believed to drive response. The gold standard for particle dosimetry for in vitro nanotoxicology studies should be direct experimental measurement of the cellular content of the studied particle. However, where such measurements are impractical, unfeasible, and before such measurements become common, particle dosimetry models such as ISDD provide a valuable, immediately useful alternative, and eventually, an adjunct to such measurements. PMID:21118529
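The two transport mechanisms ISDD combines can be illustrated with textbook formulas; the sketch below computes Stokes settling velocities and Stokes-Einstein diffusivities, and the corresponding characteristic times to cross an assumed 3 mm media column. The particle density, medium viscosity, and temperature are illustrative values, not the paper's exact inputs.

```python
import numpy as np

KB = 1.380649e-23      # Boltzmann constant, J/K

def stokes_velocity(d, rho_p, rho_m=1000.0, mu=9.6e-4, g=9.81):
    """Stokes settling velocity (m/s) of a sphere of diameter d (m)."""
    return (rho_p - rho_m) * g * d**2 / (18.0 * mu)

def stokes_einstein_D(d, T=310.15, mu=9.6e-4):
    """Stokes-Einstein diffusion coefficient (m^2/s) at temperature T."""
    return KB * T / (3.0 * np.pi * mu * d)

# Characteristic times to cross an assumed 3 mm media column, per particle size
h = 3e-3
for d_nm, rho in [(20, 1050.0), (100, 1050.0), (1100, 1050.0)]:
    d = d_nm * 1e-9
    t_sed = h / max(stokes_velocity(d, rho), 1e-30)   # sedimentation time scale
    t_dif = h**2 / (2.0 * stokes_einstein_D(d))       # 1-D diffusion time scale
    print(d_nm, "nm:", t_sed / 3600.0, "h (settling),", t_dif / 3600.0, "h (diffusion)")
```

The strongly size-dependent competition between the two time scales is exactly why nominal mass concentration alone is a poor surrogate for delivered cellular dose.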
A Physical Based Formula for Calculating the Critical Stress of Snow Movement
NASA Astrophysics Data System (ADS)
He, S.; Ohara, N.
2016-12-01
In snow redistribution modeling, one of the most important parameters is the critical stress of snow movement, which is difficult to estimate from field data because it is influenced by various factors. In this study, a new formula for calculating the critical stress of snow movement was derived based on modeling of the ice particle sintering process and the moment balance of a snow particle. Through this formula, the influences of snow particle size, air temperature, and time since deposition on the critical stress were explicitly taken into consideration. A sensitivity analysis using Sobol's method showed that the critical stress estimate is sensitive to some of the model parameters. The two sensitive parameters of the sintering process model were determined by a calibration-validation procedure using snow flux data observed via FlowCapt. Based on the snow flux and meteorological data observed at the ISAW stations (http://www.iav.ch), it was shown that this formula describes very well the evolution of the minimum friction wind speed required for snow motion. This new formula suggested that when snow has just reached the surface, smaller snowflakes can move more easily than larger particles. However, smaller snow particles require more force to move as the sintering between the snowflakes progresses. This implied that compact snow with small snow particles may be harder for wind to erode, although smaller particles may have a higher chance of being suspended once they take off.
NASA Astrophysics Data System (ADS)
Brdar, S.; Seifert, A.
2018-01-01
We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
Application of particle and lattice codes to simulation of hydraulic fracturing
NASA Astrophysics Data System (ADS)
Damjanac, Branko; Detournay, Christine; Cundall, Peter A.
2016-04-01
With the development of unconventional oil and gas reservoirs over the last 15 years, the understanding of and capability to model the propagation of hydraulic fractures in inhomogeneous and naturally fractured reservoirs have become very important for the petroleum industry (but also for some other industries like mining and geothermal). Particle-based models provide advantages over other models and solutions for the simulation of fracturing of rock masses that cannot be assumed to be continuous and homogeneous. It has been demonstrated (Potyondy and Cundall Int J Rock Mech Min Sci Geomech Abstr 41:1329-1364, 2004) that particle models based on a simple force criterion for fracture propagation match theoretical solutions and scale effects derived using the principles of linear elastic fracture mechanics (LEFM). The challenge is how to apply these models effectively (i.e., with acceptable model sizes and computer run times) to coupled hydro-mechanical problems at time and length scales relevant for practical field applications (i.e., reservoir scale and hours of injection time). A formulation of a fully coupled hydro-mechanical particle-based model and its application to the simulation of hydraulic treatment of unconventional reservoirs are presented. Model validation by comparison with available analytical asymptotic solutions (penny-shaped crack) and some examples of field application (e.g., interaction with a DFN) are also included.
A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations; each particle evolves by the standard PSO, and each subpopulation is then updated using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, the PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP. PMID:24453841
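For context, the standard PSO velocity/position update that each subpopulation applies before its local search is sketched below on a continuous toy objective; the paper itself uses a permutation encoding plus VNS/IIS local search and an EDA, which are not reproduced, and the inertia and acceleration coefficients are conventional illustrative values.

```python
import numpy as np

def pso(objective, dim=10, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Standard PSO: each particle is pulled toward its personal best and
    the swarm's global best (the building block of the memetic algorithm)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

sol, val = pso(lambda z: np.sum(z**2))   # sphere function as a toy objective
print(val)
```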
NASA Astrophysics Data System (ADS)
Cattani, Giorgio; Gaeta, Alessandra; di Menno di Bucchianico, Alessandro; de Santis, Antonella; Gaddi, Raffaela; Cusano, Mariacarmela; Ancona, Carla; Badaloni, Chiara; Forastiere, Francesco; Gariazzo, Claudio; Sozzi, Roberto; Inglessis, Marco; Silibello, Camillo; Salvatori, Elisabetta; Manes, Fausto; Cesaroni, Giulia; The Viias Study Group
2017-05-01
The health effects of long-term exposure to ultrafine particles (UFPs) are poorly understood. Data on spatial contrasts in ambient UFP concentrations are needed at fine resolution. This study aimed to assess the spatial variability of total particle number concentrations (PNC, a proxy for UFPs) in the city of Rome, Italy, using land use regression (LUR) models, and the corresponding exposure of the population living there. PNC were measured using condensation particle counters at the building facade of 28 homes throughout the city. Three 7-day monitoring periods were carried out during the cold, warm and intermediate seasons. Geographic Information System predictor variables, with buffers of varying size, were evaluated to model spatial variations of PNC. A stepwise forward selection procedure was used to develop a "base" linear regression model according to the European Study of Cohorts for Air Pollution Effects project methodology. Other variables were then included in more enhanced models and their capability of improving model performance was evaluated. Four LUR models were developed. Local variation in UFPs in the study area can be largely explained by the ratio of traffic intensity to distance to the nearest major road. The best model (adjusted R2 = 0.71; root mean square error = ±1,572 particles/cm³, leave-one-out cross-validated R2 = 0.68) was achieved by regressing building and street configuration variables against the residuals from the "base" model, which added 3% more to the total variance explained. Urban green and population density in a 5,000 m buffer around each home were also relevant predictors. The spatial contrast in ambient PNC across the large conurbation of Rome was successfully assessed. The average exposure of subjects living in the study area was 16,006 particles/cm³ (SD 2,165 particles/cm³, range: 11,075-28,632 particles/cm³). A total of 203,886 subjects (16%) live in Rome within 50 m of a high-traffic road, and they experience the highest exposure levels (18,229 particles/cm³). The results will be used to estimate the long-term health effects of ultrafine particle exposure of participants in Rome.
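The "base" model construction can be illustrated with a generic forward-selection loop: at each step add the GIS predictor that most increases adjusted R-squared, stopping when no candidate improves it. The sketch below uses synthetic stand-ins for the predictors; the variable names and coefficients are assumptions, not the study's data.

```python
import numpy as np

def adjusted_r2(y, cols):
    """Adjusted R^2 of an ordinary least-squares fit of y on the given columns."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    n, p = len(y), X.shape[1] - 1
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def forward_select(y, predictors):
    """Greedy forward selection maximizing adjusted R^2 (ESCAPE-style)."""
    selected, best = [], -np.inf
    remaining = dict(predictors)
    while remaining:
        scores = {name: adjusted_r2(y, [predictors[s] for s in selected] + [col])
                  for name, col in remaining.items()}
        name, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best:          # stop when no candidate improves the fit
            break
        selected.append(name)
        best = score
        del remaining[name]
    return selected, best

# synthetic stand-ins for GIS predictors at 28 monitoring sites
rng = np.random.default_rng(2)
preds = {"traffic_over_distance": rng.lognormal(size=28),
         "green_5000m": rng.uniform(size=28),
         "pop_density_5000m": rng.lognormal(size=28)}
pnc = 12000.0 + 1500.0 * preds["traffic_over_distance"] + 800.0 * rng.normal(size=28)
print(forward_select(pnc, preds))
```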
Numerical simulation of disperse particle flows on a graphics processing unit
NASA Astrophysics Data System (ADS)
Sierakowski, Adam J.
In both nature and technology, we commonly encounter solid particles being carried within fluid flows, from dust storms to sediment erosion and from food processing to energy generation. The motion of uncountably many particles in highly dynamic flow environments characterizes the tremendous complexity of such phenomena. While methods exist for the full-scale numerical simulation of such systems, current computational capabilities require the simplification of the numerical task with significant approximation using closure models widely recognized as insufficient. There is therefore a fundamental need for the investigation of the underlying physical processes governing these disperse particle flows. In the present work, we develop a new tool based on the Physalis method for the first-principles numerical simulation of thousands of particles (a small fraction of an entire disperse particle flow system) in order to assist in the search for new reduced-order closure models. We discuss numerous enhancements to the efficiency and stability of the Physalis method, which introduces the influence of spherical particles to a fixed-grid incompressible Navier-Stokes flow solver using a local analytic solution to the flow equations. Our first-principles investigation demands the modeling of unresolved length and time scales associated with particle collisions. We introduce a collision model alongside Physalis, incorporating lubrication effects and proposing a new nonlinearly damped Hertzian contact model. By reproducing experimental studies from the literature, we document extensive validation of the methods. We discuss the implementation of our methods for massively parallel computation using a graphics processing unit (GPU). We combine Eulerian grid-based algorithms with Lagrangian particle-based algorithms to achieve computational throughput up to 90 times faster than the legacy implementation of Physalis for a single central processing unit. By avoiding all data communication between the GPU and the host system during the simulation, we utilize with great efficacy the GPU hardware with which many high performance computing systems are currently equipped. We conclude by looking forward to the future of Physalis with multi-GPU parallelization in order to perform resolved disperse flow simulations of more than 100,000 particles and further advance the development of reduced-order closure models.
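The dissertation's exact damping form is not given in the abstract; the sketch below uses a Hertzian elastic term plus a Kuwabara-Kono-type nonlinear damping term proportional to sqrt(overlap) times the overlap rate as a stand-in, and integrates a single normal bounce explicitly. Material constants, the damping coefficient, and the time step are illustrative assumptions.

```python
import numpy as np

def hertz_damped_force(delta, ddelta_dt, R=1e-3, E_star=1e7, damping=5.0):
    """Normal contact force: Hertzian elastic term ~ delta**1.5 plus a
    Kuwabara-Kono-type damping term ~ sqrt(delta) * d(delta)/dt.
    (Stand-in for the dissertation's nonlinearly damped model.)"""
    if delta <= 0.0:
        return 0.0
    k = (4.0 / 3.0) * E_star * np.sqrt(R)            # Hertzian stiffness
    elastic = k * delta**1.5
    dissipative = damping * np.sqrt(delta) * ddelta_dt
    return max(elastic + dissipative, 0.0)           # no tensile contact force

# single normal bounce of a small particle onto a wall, explicit integration
m, dt = 1e-6, 1e-7                                   # particle mass (kg), time step (s)
x, v = 5e-6, -0.5                                    # gap to the wall (m), velocity (m/s)
for _ in range(200_000):
    delta = -x                                       # overlap once the gap closes
    F = hertz_damped_force(delta, -v)                # overlap rate is -dx/dt
    v += dt * F / m
    x += dt * v
print(v)   # rebound speed is below the 0.5 m/s impact speed: energy was dissipated
```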
Three-dimensional hair model by means of particles using Blender
NASA Astrophysics Data System (ADS)
Alvarez-Cedillo, Jesús Antonio; Almanza-Nieto, Roberto; Herrera-Lozada, Juan Carlos
2010-09-01
The simulation and modeling of human hair is a process of very high computational complexity, due to the large number of factors that must be calculated to give a realistic appearance. Generally, the method used in the film industry to simulate hair is based on the graphical handling of particles. In this paper we present a simple approach to modeling human hair using particles in Blender.
Modeling of magnetic hystereses in soft MREs filled with NdFeB particles
NASA Astrophysics Data System (ADS)
Kalina, K. A.; Brummund, J.; Metsch, P.; Kästner, M.; Borin, D. Yu; Linke, J. M.; Odenbach, S.
2017-10-01
Herein, we investigate the structure-property relationships of soft magnetorheological elastomers (MREs) filled with remanently magnetizable particles. The study is motivated by experimental results which indicate a large difference between the magnetization loops of soft MREs filled with NdFeB particles and the loops of such particles embedded in a comparatively stiff matrix, e.g. an epoxy resin. We present a microscale model for MREs based on a general continuum formulation of the magnetomechanical boundary value problem which is valid for finite strains. In particular, we develop an energetically consistent constitutive model for the hysteretic magnetization behavior of the magnetically hard particles. The microstructure is discretized and the problem is solved numerically in terms of a coupled nonlinear finite element approach. Since the local magnetic and mechanical fields are resolved explicitly inside the heterogeneous microstructure of the MRE, our model also accounts for interactions of particles close to each other. In order to connect the microscopic fields to effective macroscopic quantities of the MRE, a suitable computational homogenization scheme is used. Based on this modeling approach, it is demonstrated that the observable macroscopic behavior of the considered MREs results from the rotation of the embedded particles. Furthermore, the performed numerical simulations indicate that the reversion of the sample's magnetization occurs due to a combination of particle rotations and internal domain conversion processes. All of our simulation results obtained for such materials are in good qualitative agreement with the experiments.
NASA Astrophysics Data System (ADS)
Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan
2015-02-01
Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The model that was developed was used to predict particle generation under various building and airliner cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Those reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
NASA Astrophysics Data System (ADS)
Zhao, Yue; Fairhurst, Michelle C.; Wingen, Lisa M.; Perraud, Véronique; Ezell, Michael J.; Finlayson-Pitts, Barbara J.
2017-04-01
The application of direct analysis in real-time mass spectrometry (DART-MS), which is finding increasing use in atmospheric chemistry, to two different laboratory model systems for airborne particles is investigated: (1) submicron C3-C7 dicarboxylic acid (diacid) particles reacted with gas-phase trimethylamine (TMA) or butylamine (BA) and (2) secondary organic aerosol (SOA) particles from the ozonolysis of α-cedrene. The diacid particles exhibit a clear odd-even pattern in their chemical reactivity toward TMA and BA, with the odd-carbon diacid particles being substantially more reactive than even ones. The ratio of base to diacid in reacted particles, determined using known diacid-base mixtures, was compared to that measured by high-resolution time-of-flight aerosol mass spectrometry (HR-ToF-AMS), which vaporizes the whole particle. Results show that DART-MS probes ˜ 30 nm of the surface layer, consistent with other studies on different systems. For α-cedrene SOA particles, it is shown that varying the temperature of the particle stream as it enters the DART-MS ionization region can distinguish between specific components with the same molecular mass but different vapor pressures. These results demonstrate the utility of DART-MS for (1) examining reactivity of heterogeneous model systems for atmospheric particles and (2) probing components of SOA particles based on volatility.
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2009-10-01
The purpose of this paper is twofold: (i) to advance and extend the statistical two-point models of pair dispersion and particle clustering in isotropic turbulence that were previously proposed by Zaichik and Alipchenkov (2003 Phys. Fluids 15 1776-87; 2007 Phys. Fluids 19 113308) and (ii) to present some applications of these models. The models developed are based on a kinetic equation for the two-point probability density function of the relative velocity distribution of two particles. These models predict the pair relative velocity statistics and the preferential accumulation of heavy particles in stationary and decaying homogeneous isotropic turbulent flows. Moreover, the models are applied to predict the effect of particle clustering on turbulent collisions, sedimentation and intensity of microwave radiation as well as to calculate the mean filtered subgrid stress of the particulate phase. Model predictions are compared with direct numerical simulations and experimental measurements.
Particle-based simulations of self-motile suspensions
NASA Astrophysics Data System (ADS)
Hinz, Denis F.; Panchenko, Alexander; Kim, Tae-Yeon; Fried, Eliot
2015-11-01
A simple model for simulating flows of active suspensions is investigated. The approach is based on dissipative particle dynamics. While the model is potentially applicable to a wide range of self-propelled particle systems, the specific class of self-motile bacterial suspensions is considered as a modeling scenario. To mimic the rod-like geometry of a bacterium, two dissipative particle dynamics particles are connected by a stiff harmonic spring to form an aggregate dissipative particle dynamics molecule. Bacterial motility is modeled through a constant self-propulsion force applied along the axis of each such aggregate molecule. The model accounts for hydrodynamic interactions between self-propelled agents through the pairwise dissipative interactions conventional to dissipative particle dynamics. Numerical simulations are performed using a customized version of the open-source software package LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). Detailed studies of the influence of agent concentration, pairwise dissipative interactions, and Stokes friction on the statistics of the system are provided. The simulations are used to explore the influence of hydrodynamic interactions in active suspensions. For high agent concentrations in combination with dominating pairwise dissipative forces, strongly correlated motion patterns and fluid-like spectral distributions of kinetic energy are found. In contrast, systems dominated by Stokes friction exhibit weaker spatial correlations of the velocity field. These results indicate that hydrodynamic interactions may play an important role in the formation of spatially extended structures in active suspensions.
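A minimal sketch of the aggregate-molecule construction described above: two point particles joined by a stiff harmonic spring, with a constant self-propulsion force applied along the dimer axis. The spring constant, rest length, and force magnitude are illustrative assumptions, and the conventional DPD pairwise forces are omitted for brevity.

```python
import numpy as np

def dimer_forces(r1, r2, k=100.0, r0=0.5, f_active=1.0):
    """Spring plus self-propulsion forces on a two-particle 'bacterium'.

    r1, r2   : positions of head and tail particles (numpy arrays)
    k, r0    : spring stiffness and rest length (illustrative values)
    f_active : magnitude of the constant self-propulsion force

    Returns the force on each particle. The propulsion force acts on
    both particles along the unit vector from tail (r2) to head (r1),
    so the aggregate molecule is pushed along its own axis.
    """
    d = r1 - r2
    dist = np.linalg.norm(d)
    axis = d / dist
    f_spring = -k * (dist - r0) * axis      # restoring force on particle 1
    f_prop = f_active * axis                # same direction for both beads
    return f_spring + f_prop, -f_spring + f_prop
```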
Multi-fluid CFD analysis in Process Engineering
NASA Astrophysics Data System (ADS)
Hjertager, B. H.
2017-12-01
An overview of modelling and simulation of flow processes in gas/particle and gas/liquid systems is presented. Particular emphasis is given to computational fluid dynamics (CFD) models that use multi-dimensional multi-fluid techniques. Turbulence modelling strategies for gas/particle flows based on the kinetic theory of granular flows are given. Sub-models for the interfacial transfer processes and chemical kinetics modelling are presented. Examples are shown for gas/particle systems, including flow and chemical reaction in risers, as well as gas/liquid systems, including bubble columns and stirred tanks.
Murine Model of Progressive Orthopedic Wear Particle-Induced Chronic Inflammation and Osteolysis.
Pajarinen, Jukka; Nabeshima, Akira; Lin, Tzu-Hua; Sato, Taishi; Gibon, Emmanuel; Jämsen, Eemeli; Lu, Laura; Nathan, Karthik; Yao, Zhenyu; Goodman, Stuart B
2017-12-01
Periprosthetic osteolysis and subsequent aseptic loosening of total joint replacements are driven by byproducts of wear released from the implant. Wear particles cause macrophage-mediated inflammation that culminates with periprosthetic bone loss. Most current animal models of particle-induced osteolysis are based on the acute inflammatory reaction induced by wear debris, which is distinct from the slowly progressive clinical scenario. To address this limitation, we previously developed a murine model of periprosthetic osteolysis that is based on slow continuous delivery of wear particles into the murine distal femur over a period of 4 weeks. The particle delivery was accomplished by using subcutaneously implanted osmotic pumps and tubing, and a hollow titanium rod press-fit into the distal femur. In this study, we report a modification of our prior model in which particle delivery is extended to 8 weeks to better mimic the progressive development of periprosthetic osteolysis and allow the assessment of interventions in a setting where the chronic particle-induced osteolysis is already present at the initiation of the treatment. Compared to 4-week samples, extending the particle delivery to 8 weeks significantly exacerbated the local bone loss observed with μCT and the amount of both peri-implant F4/80 + macrophages and tartrate-resistant acid phosphatase-positive osteoclasts detected with immunohistochemical and histochemical staining. Furthermore, systemic recruitment of reporter macrophages to peri-implant tissues observed with bioluminescence imaging continued even at the later stages of particle-induced inflammation. This modified model system could provide new insights into the mechanisms of chronic inflammatory bone loss and be particularly useful in assessing the efficacy of treatments in a setting that resembles the clinical scenario of developing periprosthetic osteolysis more closely than currently existing model systems.
Specific heat capacity of molten salt-based alumina nanofluid.
Lu, Ming-Chang; Huang, Chien-Hsun
2013-06-21
There is no consensus on the effect of nanoparticle (NP) addition on the specific heat capacity (SHC) of fluids. In addition, the predictions from the existing model show a large discrepancy from the measured SHCs of nanofluids. We show that the SHC of the molten salt-based alumina nanofluid decreases with decreasing particle size and increasing particle concentration. The NP size-dependent SHC results from an augmentation of the nanolayer effect as particle size decreases. A model that accounts for the nanolayer effect was proposed and supports the experimental results.
Single particle analysis based on Zernike phase contrast transmission electron microscopy.
Danev, Radostin; Nagayama, Kuniaki
2008-02-01
We present the first application of Zernike phase-contrast transmission electron microscopy to single-particle 3D reconstruction of a protein, using GroEL chaperonin as the test specimen. We evaluated the performance of the technique by comparing 3D models derived from Zernike phase contrast imaging with models from conventional underfocus phase contrast imaging. The same resolution, about 12 Å, was achieved by both imaging methods. The reconstruction based on Zernike phase contrast data required about 30% fewer particles. The advantages and prospects of each technique are discussed.
NASA Astrophysics Data System (ADS)
von Boetticher, Albrecht; Rickenmann, Dieter; McArdell, Brian; Kirchner, James W.
2017-04-01
Debris flows are dense flowing mixtures of water, clay, silt, sand and coarser particles. They are a common natural hazard in mountain regions and frequently cause severe damage. Modeling debris flows to design protection measures is still challenging due to the complex interactions within the inhomogeneous material mixture, and the sensitivity of the flow process to the channel geometry. The open-source, OpenFOAM-based finite-volume debris flow model debrisInterMixing (von Boetticher et al., 2016) defines rheology parameters based on the material properties of the debris flow mixture to reduce the number of free model parameters. As a simplification in this first model version, gravel was treated as a Coulomb-viscoplastic fluid, neglecting grain-to-grain collisions and the coupling between the coarser gravel grains and the interstitial fluid. Here we present an extension of that solver, accounting for the particle-to-particle and particle-to-boundary contacts with a Lagrangian Particle Simulation composed of spherical grains and a user-defined grain size distribution. The grain collisions of the Lagrangian particles add granular flow behavior to the finite-volume simulation of the continuous phases. The two-way coupling exchanges momentum between the phase-averaged flow in a finite volume cell and all individual particles contained in that cell, allowing the user to choose from a number of different drag models. The momentum exchange is implemented in the momentum equation and in the pressure equation (ensuring continuity) of the so-called PISO loop, resulting in a stable 4-way coupling (particle-to-particle, particle-to-boundary, particle-to-fluid and fluid-to-particle) that represents the granular and viscous flow behavior of debris flow material. We will present simulations that illustrate the relative benefits and drawbacks of explicitly representing grain collisions, compared to the original debrisInterMixing solver.
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.
2017-06-01
In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
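The copula resampling step can be illustrated with a schematic Gaussian-copula version. The paper does not specify this particular copula family or estimator; the choice of a Gaussian copula, the weighted empirical marginals, and the mapping below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_resample(particles, weights, n_new=None, rng=None):
    """Schematic copula resampling step of a copula-based particle filter.

    particles : (N, d) array of parameter vectors
    weights   : (N,) normalized importance weights
    Returns n_new fresh particles drawn from a Gaussian copula fitted to
    the weighted sample, with weighted empirical marginals.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    n_new = n if n_new is None else n_new

    orders, cdfs, u = [], [], np.empty_like(particles, dtype=float)
    for j in range(d):
        order = np.argsort(particles[:, j])
        cdf = np.cumsum(weights[order]) / np.sum(weights)
        u[order, j] = np.clip(cdf - 0.5 * weights[order], 1e-6, 1 - 1e-6)
        orders.append(order)
        cdfs.append(cdf)

    # Dependence structure: correlation of the normal scores
    corr = np.corrcoef(norm.ppf(u), rowvar=False)

    # Draw from the copula and map back through the weighted marginals
    u_new = norm.cdf(rng.multivariate_normal(np.zeros(d), corr, size=n_new))
    new = np.empty((n_new, d))
    for j in range(d):
        new[:, j] = np.interp(u_new[:, j], cdfs[j], particles[orders[j], j])
    return new
```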
NASA Astrophysics Data System (ADS)
Yankee, S. J.; Pletka, B. J.
1993-09-01
Splats of hydroxylapatite (HA) and alumina were obtained via plasma spraying using systematically varied combinations of plasma velocity and temperature, which were achieved by altering the primary plasma gas flow rate and plasma gas composition. Particle size was also varied in the case of alumina. Splat spreading was quantified via computer-aided image analysis as a function of processing variations. A comparison of the predicted splat dimensions from a model developed by Madejski with experimental observations of HA and alumina splats was performed. The model tended to underestimate the HA splat sizes, suggesting that evaporation of smaller particles occurred under the chosen experimental conditions, and to overestimate the observed alumina splat dimensions. Based on this latter result and on the surface appearance of the substrates, incomplete melting appeared to take place in all but the smaller alumina particles. Analysis of the spreading data as a function of the processing variations indicated that the particle size as well as the plasma temperature and velocity influenced the extent of particle melting. Based on these data and other considerations, a physical model was developed that described the degree of particle melting in terms of material and processing parameters. The physical model correctly predicted the relative splat spreading behavior of HA and alumina, assuming that spreading was directly linked to the extent of particle melting.
Particle based plasma simulation for an ion engine discharge chamber
NASA Astrophysics Data System (ADS)
Mahalingam, Sudhakar
Design of the next generation of ion engines can benefit from detailed computer simulations of the plasma in the discharge chamber. In this work a complete particle-based approach has been taken to model the discharge chamber plasma. This is the first time that simplifying continuum assumptions on the particle motion have not been made in a discharge chamber model. Because of the long mean free paths of the particles in the discharge chamber, continuum models are questionable. The PIC-MCC model developed in this work tracks the following particles: neutrals, singly charged ions, doubly charged ions, secondary electrons, and primary electrons. The trajectories of these particles are determined using the Newton-Lorentz equation of motion, including the effects of magnetic and electric fields. Particle collisions are determined using an MCC statistical technique. A large number of collision processes and particle-wall interactions are included in the model. The magnetic fields produced by the permanent magnets are determined using Maxwell's equations. The electric fields are determined using an approximate input electric field coupled with a dynamic determination of the electric fields caused by the charged particles. In this work, inclusion of the dynamic electric field calculation is made possible by using an inflated plasma permittivity value in the Poisson solver. This allows dynamic electric field calculation with minimal computational requirements in terms of both computer memory and run time. In addition, a number of other numerical procedures, such as parallel processing, have been implemented to shorten the computational time. The primary results are those modeling the discharge chamber of NASA's NSTAR ion engine at its full operating power. Convergence of numerical results such as the total number of particles inside the discharge chamber, the average energy of the plasma particles, the discharge current, the beam current and the beam efficiency is obtained. Steady-state results for the particle number density distributions and particle loss rates to the walls are presented. Comparisons of numerical results with experimental measurements such as currents and particle number density distributions are made. Results from a parametric study and from an alternative magnetic field design are also given.
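The particle-push step referred to above, integrating the Newton-Lorentz equation in combined electric and magnetic fields, is commonly handled with a Boris-type scheme. The sketch below is a generic single-particle version of that scheme, not the code described in the dissertation.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance one charged particle by dt using the Boris scheme.

    x, v : position and velocity (3-vectors)
    E, B : electric and magnetic fields at the particle (3-vectors)
    q, m : charge and mass

    Half electric kick, magnetic rotation, half electric kick, then drift.
    """
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half acceleration
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotated velocity
    v_new = v_plus + qmdt2 * E               # second half acceleration
    x_new = x + v_new * dt
    return x_new, v_new
```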
NASA Astrophysics Data System (ADS)
Chandramouli, Bharadwaj; Kamens, Richard M.
Decamethylcyclopentasiloxane (D5) and decamethyltetrasiloxane (MD2M) were injected into a smog chamber containing fine Arizona road dust particles (95% of surface area <2.6 μm) and an urban smog atmosphere in the daytime. A coupled photochemical reaction and gas-particle partitioning scheme was implemented to simulate the formation and gas-particle partitioning of hydroxyl oxidation products of D5 and MD2M. This scheme incorporated the reactions of D5 and MD2M into an existing urban smog chemical mechanism (Carbon Bond IV) and partitioned the products between the gas and particle phases by treating gas-particle partitioning as a kinetic process and specifying an uptake and off-gassing rate. The photochemical model PKSS was used to simulate this set of reactions. A Langmuirian partitioning model was used to convert the measured and estimated mass-based partitioning coefficients (KP) to a molar or volume-based form. The model simulations indicated that >99% of the product silanols formed in the gas phase partition immediately to the particle phase, and the experimental data agreed with the model predictions. One product, D4TOH, was observed and confirmed for the D5 reaction, and this system was modeled successfully. Experimental data were inadequate for the MD2M reaction products, and it is likely that more than one product formed. The model set up a framework into which more reaction and partitioning steps can be easily added.
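The kinetic treatment of gas-particle partitioning described above can be sketched as a pair of coupled rate equations with an uptake rate and an off-gassing rate. The rate constants and initial condition below are placeholders, not values from the chamber experiments.

```python
from scipy.integrate import solve_ivp

def partitioning_rhs(t, y, k_on=1.0e-3, k_off=2.0e-4):
    """dC_gas/dt and dC_particle/dt for kinetic gas-particle partitioning.

    y = [C_gas, C_particle]; k_on is the uptake rate (1/s) and k_off the
    off-gassing rate (1/s). At steady state C_particle/C_gas = k_on/k_off,
    which plays the role of a partitioning coefficient.
    """
    c_gas, c_part = y
    uptake = k_on * c_gas
    offgas = k_off * c_part
    return [-uptake + offgas, uptake - offgas]

# Example: start with all material in the gas phase and integrate 2 hours
sol = solve_ivp(partitioning_rhs, (0.0, 7200.0), [1.0, 0.0], max_step=60.0)
```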
Glynne-Jones, Peter; Mishra, Puja P; Boltryk, Rosemary J; Hill, Martyn
2013-04-01
A finite element based method is presented for calculating the acoustic radiation force on arbitrarily shaped elastic and fluid particles. Importantly for future applications, this development will permit the modeling of acoustic forces on complex structures such as biological cells, and the interactions between them and other bodies. The model is based on a non-viscous approximation, allowing the results from an efficient, numerical, linear scattering model to provide the basis for the second-order forces. Simulation times are of the order of a few seconds for an axi-symmetric structure. The model is verified against a range of existing analytical solutions (typical accuracy better than 0.1%), including those for cylinders, elastic spheres that are of significant size compared to the acoustic wavelength, and spheroidal particles.
Modeling & processing of ceramic and polymer precursor ceramic matrix composite materials
NASA Astrophysics Data System (ADS)
Wang, Xiaolin
Synthesis and processing of novel materials with various advanced approaches have attracted much attention from engineers and scientists for the past thirty years. Many advanced materials display a number of exceptional properties and can be produced with different novel processing techniques. For example, AlN is a promising candidate for electronic, optical and opto-electronic applications due to its high thermal conductivity, high electrical resistivity, high acoustic wave velocity and large band gap. Large bulk AlN crystals can be produced by sublimation of AlN powder. Novel nanostructured multicomponent refractory metal-based ceramics (carbides, borides and nitrides) exhibit exceptional mechanical, thermal and chemical properties, and can be easily produced by pyrolysis of suitable preceramic precursors mixed with metal particles. The objective of this work is to study sublimation and synthesis of AlN powder, and synthesis of SiC-based metal ceramics. For AlN sublimation crystal growth, we will focus on modeling the processes in the powder source that significantly affect the sublimation growth as a whole. To understand the powder porosity evolution and vapor transport during powder sublimation, the interplay between vapor transport and powder sublimation will be studied. A physics-based computational model will be developed considering powder sublimation and porosity evolution. Based on the proposed model, the effect of a central hole in the powder on the sublimation rate is studied and the result is compared to the case of powder without a hole. The effect of hole size on the sublimation rate will be studied. The effects of initial porosity, particle size and driving force on the sublimation rate are also studied. Moreover, the optimal growth condition for large-diameter crystal quality and high growth rate will be determined. For synthesis of SiC-based metal ceramics, we will focus on developing a multi-scale process model to describe the dynamic behavior of filler particle reactions and microstructure evolution at the microscale, as well as transient fluid flow, heat transfer, and species transport at the macroscale. The model comprises (i) a microscale model and (ii) a macroscale transport model, and aims to provide optimal conditions for the fabrication process of the ceramics. The porous media macroscale model for SiC-based metal-ceramic materials processing will be developed to understand thermal polymer pyrolysis, chemical reaction of active fillers and transport phenomena in the porous media. The macroscale model will include heat and mass transfer, curing, pyrolysis, chemical reaction and crystallization in a mixture of preceramic polymers and submicron/nano-sized metal particles of uranium, zirconium, niobium, or hafnium. The effects of heating rate, sample size, and size and volume ratio of the metal particles on the reaction rate and product uniformity will be studied. The microscale model will be developed for modeling the synthesis of the SiC matrix and metal particles. The macroscale model provides thermal boundary conditions to the microscale model. The microscale model applies to repetitive units in the porous structure and describes mass transport, composition changes and motion of metal particles. The unit cell is the representative unit of the source material, and it consists of several metal particles, SiC matrix and other components produced from the synthesis process. The reactions between different components and the microstructure evolution of the product will be considered. The effects of heating rate and metal particle size on species uniformity and microstructure are investigated.
Weighted Flow Algorithms (WFA) for stochastic particle coagulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVille, R.E.L., E-mail: rdeville@illinois.edu; Riemer, N., E-mail: nriemer@illinois.edu; West, M., E-mail: mwest@illinois.edu
2011-09-20
Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.
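As a schematic illustration of how power-law weights enter number- and mass-based estimates built from weighted computational particles, consider the sketch below. The exponent, reference diameter, and density are illustrative assumptions; this is not the PartMC-MOSAIC implementation of the Weighted Flow Algorithms.

```python
import numpy as np

def powerlaw_weight(diameter, alpha=-2.0, d_ref=1.0e-7):
    """Weight w(D) = (D / d_ref)**alpha attached to a computational particle.

    A negative alpha makes each large computational particle represent
    fewer physical particles, concentrating statistical resolution there.
    """
    return (diameter / d_ref) ** alpha

def weighted_number_and_mass(diameters, rho=1000.0, alpha=-2.0):
    """Estimate total physical number and mass from weighted particles.

    Each computational particle of diameter D represents w(D) physical
    particles, so totals are weight-summed over the population.
    """
    w = powerlaw_weight(diameters, alpha)
    number = np.sum(w)
    mass = np.sum(w * rho * np.pi / 6.0 * diameters**3)
    return number, mass
```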
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodshire, Anna L.; Lawler, Michael J.; Zhao, Jun
New-particle formation (NPF) is a significant source of aerosol particles into the atmosphere. However, these particles are initially too small to have climatic importance and must grow, primarily through net uptake of low-volatility species, from diameters ∼ 1 to 30–100 nm in order to potentially impact climate. There are currently uncertainties in the physical and chemical processes associated with the growth of these freshly formed particles that lead to uncertainties in aerosol-climate modeling. Four main pathways for new-particle growth have been identified: condensation of sulfuric-acid vapor (and associated bases when available), condensation of organic vapors, uptake of organic acids through acid–base chemistry in the particle phase, and accretion of organic molecules in the particle phase to create a lower-volatility compound that then contributes to the aerosol mass. The relative importance of each pathway is uncertain and is the focus of this work. The 2013 New Particle Formation Study (NPFS) measurement campaign took place at the DOE Southern Great Plains (SGP) facility in Lamont, Oklahoma, during spring 2013. Measured gas- and particle-phase compositions during these new-particle growth events suggest three distinct growth pathways: (1) growth by primarily organics, (2) growth by primarily sulfuric acid and ammonia, and (3) growth by primarily sulfuric acid and associated bases and organics. To supplement the measurements, we used the particle growth model MABNAG (Model for Acid–Base chemistry in NAnoparticle Growth) to gain further insight into the growth processes on these 3 days at SGP. MABNAG simulates growth from (1) sulfuric-acid condensation (and subsequent salt formation with ammonia or amines), (2) near-irreversible condensation from nonreactive extremely low-volatility organic compounds (ELVOCs), and (3) organic-acid condensation and subsequent salt formation with ammonia or amines. MABNAG is able to corroborate the observed differing growth pathways, while also predicting that ELVOCs contribute more to growth than organic salt formation. However, most MABNAG model simulations tend to underpredict the observed growth rates between 10 and 20 nm in diameter; this underprediction may come from neglecting the contributions to growth from semi-to-low-volatility species or accretion reactions. Our results suggest that in addition to sulfuric acid, ELVOCs are also very important for growth in this rural setting. We discuss the limitations of our study that arise from not accounting for semi- and low-volatility organics, as well as nitrogen-containing species beyond ammonia and amines in the model. Quantitatively understanding the overall budget, evolution, and thermodynamic properties of lower-volatility organics in the atmosphere will be essential for improving global aerosol models.
Thomas, Cory; Lu, Xinyu; Todd, Andrew; Raval, Yash; Tzeng, Tzuen-Rong; Song, Yongxin; Wang, Junsheng; Li, Dongqing; Xuan, Xiangchun
2017-01-01
The separation of particles and cells from a uniform mixture has been extensively studied as a necessity in many chemical and biomedical engineering and research fields. This work demonstrates a continuous charge-based separation of fluorescent and plain spherical polystyrene particles of comparable sizes in a ψ-shaped microchannel via the wall-induced electrical lift. The effects of both the direct current electric field in the main branch and the electric field ratio between the inlet branches for the sheath fluid and the particle mixture are investigated for this electrokinetic particle separation. A theoretical model based on a Lagrangian tracking method is also developed to understand the particle transport in the microchannel and simulate the parametric effects on particle separation. Moreover, the demonstrated charge-based separation is applied to a mixture of yeast cells and polystyrene particles with similar sizes. Good separation efficiency and purity are achieved for both the cells and the particles.
One-to-one encapsulation based on alternating droplet generation
NASA Astrophysics Data System (ADS)
Hirama, Hirotada; Torii, Toru
2015-10-01
This paper reports the preparation of encapsulated particles as models of cells using an alternating droplet generation encapsulation method in which the number of particles in a droplet is controlled by a microchannel to achieve one-to-one encapsulation. Using a microchannel in which wettability is treated locally, the fluorescent particles used as models of cells were successfully encapsulated in uniform water-in-oil-in-water (W/O/W) emulsion droplets. Furthermore, 20% of the particle-containing droplets contained one particle. Additionally, when a surfactant with the appropriate properties was used, the fluorescent particles within each inner aqueous droplet were enclosed in the merged droplet by spontaneous droplet coalescence. This one-to-one encapsulation method based on alternating droplet generation could be used for a variety of applications, such as high-throughput single-cell assays, gene transfection into cells or one-to-one cell fusion.
Inertial particle dynamics in large artery flows - Implications for modeling arterial embolisms.
Mukherjee, Debanjan; Shadden, Shawn C
2017-02-08
The complexity of inertial particle dynamics through the swirling, chaotic flow structures characteristic of pulsatile large-artery hemodynamics poses significant challenges for the predictive understanding of the transport of such particles. This is specifically crucial for arterial embolisms, where knowledge of embolus transport to major vascular beds helps in disease diagnosis and surgical planning. Using a computational framework built upon image-based CFD and discrete particle dynamics modeling, a multi-parameter sampling-based study was conducted on embolic particle dynamics and transport. The results highlighted the strong influence of material properties, embolus size, release instant, and embolus source on embolus distribution to the cerebral, renal and mesenteric, and ilio-femoral vasculature beds. The study also isolated the importance of shear-gradient lift and elastohydrodynamic contact in affecting embolic particle transport. Near-wall particle re-suspension due to lift alters aortogenic embolic particle dynamics significantly as compared to cardiogenic emboli. The observations collectively indicated the complex interplay of particle inertia, fluid-particle density ratio, and wall collisions with chaotic flow structures, which renders the overall motion of the particles non-trivially dispersive in nature.
Predictability of the Lagrangian Motion in the Upper Ocean
NASA Astrophysics Data System (ADS)
Piterbarg, L. I.; Griffa, A.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.
2001-12-01
The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well-known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters, enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and the Adriatic Sea showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
NASA Astrophysics Data System (ADS)
Ovaysi, S.; Piri, M.
2009-12-01
We present a three-dimensional fully dynamic parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input, where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique. We modify this technique in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account viscous pressure drop in addition to gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in the neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions, starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors. An excellent linear scalability is obtained for sample B, which has a more uniform distribution of solid particles, leading to superior load balancing. The model is then used to simulate fluid flow directly in REV-size three-dimensional x-ray images of a naturally occurring sandstone. We analyze the quality and consistency of the predicted flow behavior and calculate absolute permeability, which compares well with the network modeling and Lattice-Boltzmann permeabilities available in the literature for the same sandstone. We show that the model conserves mass very well and is computationally stable even in very narrow fluid conduits. The transient and steady-state fluid flow patterns are presented, as well as the steady-state flow rates used to compute absolute permeability. Furthermore, we discuss the vital role of our adaptive particle resolution scheme in preserving the original pore connectivity of the samples and their narrow channels through splitting and merging of fluid particles.
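For readers unfamiliar with MPS, the sketch below evaluates the particle number density with the standard MPS weight function. It illustrates only the generic discretization, not the authors' modified or parallel scheme, and the cutoff radius is an arbitrary placeholder.

```python
import numpy as np

def mps_weight(r, r_e):
    """Standard MPS kernel: w(r) = r_e/r - 1 for 0 < r < r_e, else 0."""
    r_safe = np.maximum(r, 1e-12)
    return np.where((r > 0.0) & (r < r_e), r_e / r_safe - 1.0, 0.0)

def particle_number_density(positions, r_e=0.1):
    """Kernel-weighted neighbor count for every particle.

    In MPS-type methods incompressibility is enforced by driving this
    number density back to its initial value through the semi-implicit
    pressure step; here we only evaluate the density itself.
    """
    n = np.zeros(len(positions))
    for i, xi in enumerate(positions):
        r = np.linalg.norm(positions - xi, axis=1)
        n[i] = mps_weight(r, r_e).sum()   # self term is zero since w(0) = 0
    return n
```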
Students' Visualisation of Chemical Reactions--Insights into the Particle Model and the Atomic Model
ERIC Educational Resources Information Center
Cheng, Maurice M. W.
2018-01-01
This paper reports on an interview study of 18 Grade 10-12 students' model-based reasoning of a chemical reaction: the reaction of magnesium and oxygen at the submicro level. It has been proposed that chemical reactions can be conceptualised using two models: (i) the "particle model," in which a reaction is regarded as the simple…
Steady-State Ion Beam Modeling with MICHELLE
NASA Astrophysics Data System (ADS)
Petillo, John
2003-10-01
There is a need to efficiently model ion beam physics for ion implantation, chemical vapor deposition, and ion thrusters. Common to all is the need for three-dimensional (3D) simulation of volumetric ion sources, ion acceleration, and optics, with the ability to model charge exchange of the ion beam with a background neutral gas. Two pieces of physics stand out as significant: the modeling of the volumetric source and charge exchange. In the MICHELLE code, the method for modeling the plasma sheath in ion sources assumes that the electron distribution function is a Maxwellian function of electrostatic potential over electron temperature. Charge exchange is the process by which a neutral background gas atom exchanges an electron with a "fast" charged particle streaming through it. An efficient method for capturing this is essential, and the model presented is based on semi-empirical collision cross-section functions. This appears to be the first steady-state 3D algorithm of its type to contain multiple generations of charge exchange, work with multiple species and multiple charge-state beam/source particles simultaneously, take into account the self-consistent space-charge effects, and track the subsequent fast neutral particles. The solution used by MICHELLE is to combine finite element analysis with particle-in-cell (PIC) methods. The basic physics model is based on the equilibrium steady-state application of the electrostatic PIC approximation employing a conformal computational mesh. The foundation stems from the same basic model introduced in codes such as EGUN. Here, Poisson's equation is used to self-consistently include the effects of space charge on the fields, and the relativistic Lorentz equation is used to integrate the particle trajectories through those fields. The presentation will consider the complexity of modeling ion thrusters.
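A minimal 1D sketch of the self-consistent field idea described above: electrons follow a Boltzmann (Maxwellian) relation in the potential while ions contribute a fixed space-charge density. The grid, densities, temperature, and the simple fixed-point iteration are illustrative assumptions standing in for the code's finite-element solver.

```python
import numpy as np

def solve_poisson_boltzmann(n_ion, Te_eV=2.0, L=0.01, n_iter=2000, relax=0.1):
    """Iteratively solve d2phi/dx2 = -e/eps0 * (n_i - n0*exp(phi/Te)) in 1D.

    n_ion : array of ion density on a uniform grid (1/m^3)
    Te_eV : electron temperature in eV (phi is expressed in volts)
    Boundary condition phi = 0 at both walls.
    """
    e, eps0 = 1.602e-19, 8.854e-12
    nx = len(n_ion)
    dx = L / (nx - 1)
    n0 = n_ion.mean()                 # reference electron density
    phi = np.zeros(nx)
    for _ in range(n_iter):
        n_e = n0 * np.exp(np.clip(phi / Te_eV, -50, 50))
        rho = e * (n_ion - n_e)
        # Jacobi-style relaxation of the discrete Poisson equation
        phi_new = phi.copy()
        phi_new[1:-1] = 0.5 * (phi[2:] + phi[:-2] + dx**2 * rho[1:-1] / eps0)
        phi = (1 - relax) * phi + relax * phi_new
    return phi
```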
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.; Knudsen, D. J.
2016-12-01
Changes in D-region ionization caused by energetic particle precipitation are monitored by the Array for Broadband Observations of VLF/ELF Emissions (ABOVE), a network of receivers deployed across Western Canada. The observed amplitudes and phases of subionospherically propagating VLF signals from distant artificial transmitters depend sensitively on the free electron population created by precipitation of energetic charged particles. Those include both primary (electrons, protons and heavier ions) and secondary (cascades of ionized particles and electromagnetic radiation) components. We have designed and implemented a full-scale model to predict the received VLF signals based on first-principles charged particle transport calculations coupled to the Long Wavelength Propagation Capability (LWPC) software. Calculations of ionization rates and free electron densities are based on MCNP-6 (a general-purpose Monte Carlo N-Particle code), taking advantage of its capability for coupled neutron/photon/electron transport and its novel library of cross sections for low-energy electron and photon interactions with matter. Cosmic-ray calculations of background ionization are based on source spectra obtained both from direct PAMELA cosmic-ray spectrum measurements and from the recently implemented MCNP 6 galactic cosmic-ray source, scaled using our (Calgary) neutron monitor measurements. Conversion from calculated fluxes (MCNP F4 tallies) to ionization rates for low-energy electrons is based on the total ionization cross sections for oxygen and nitrogen molecules from the National Institute of Standards and Technology. We use our model to explore the complexity of the physical processes affecting VLF propagation.
Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Jian; Gan, Yang
2018-04-01
The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for the multi-objective optimization of independent micro-grids.
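A minimal single-objective particle swarm optimization loop is sketched below to make the underlying update rule concrete. The paper's multi-objective, Pareto-archive extensions and its specific improvements are not reproduced here, and all coefficients are conventional illustrative choices.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Basic particle swarm optimization of a scalar objective.

    bounds : (d, 2) array of lower/upper limits per decision variable
    Returns the best position found and its objective value.
    """
    rng = np.random.default_rng() if rng is None else rng
    bounds = np.asarray(bounds, dtype=float)
    d = len(bounds)
    x = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, d))
    v = np.zeros((n_particles, d))
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, bounds[:, 0], bounds[:, 1])          # position update
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()
```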
Linking snowflake microstructure to multi-frequency radar observations
NASA Astrophysics Data System (ADS)
Leinonen, J.; Moisseev, D.; Nousiainen, T.
2013-04-01
Spherical or spheroidal particle shape models are commonly used to calculate numerically the radar backscattering properties of aggregate snowflakes. A more complicated and computationally intensive approach is to use detailed models of snowflake structure together with numerical scattering models that can operate on arbitrary particle shapes. Recent studies have shown that there can be significant differences between the results of these approaches. In this paper, an analytical model, based on the Rayleigh-Gans scattering theory, is formulated to explain this discrepancy in terms of the effect of discrete ice crystals that constitute the snowflake. The ice crystals cause small-scale inhomogeneities whose effects can be understood through the density autocorrelation function of the particle mass, which the Rayleigh-Gans theory connects to the function that gives the radar reflectivity as a function of frequency. The derived model is a weighted sum of two Gaussian functions. A term that corresponds to the average shape of the particle, similar to that given by the spheroidal shape model, dominates at low frequencies. At high frequencies, that term vanishes and is gradually replaced by the effect of the ice crystal monomers. The autocorrelation-based description of snowflake microstructure appears to be sufficient for multi-frequency radar studies. The link between multi-frequency radar observations and the particle microstructure can thus be used to infer particle properties from the observations.
NASA Astrophysics Data System (ADS)
Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott
2017-12-01
Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely resolved (e.g., molecular dynamics) and coarse-grained (e.g., continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 084115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics. An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
Influence of Ice Cloud Microphysics on Imager-Based Estimates of Earth's Radiation Budget
NASA Astrophysics Data System (ADS)
Loeb, N. G.; Kato, S.; Minnis, P.; Yang, P.; Sun-Mack, S.; Rose, F. G.; Hong, G.; Ham, S. H.
2016-12-01
A central objective of the Clouds and the Earth's Radiant Energy System (CERES) is to produce a long-term global climate data record of Earth's radiation budget from the TOA down to the surface along with the associated atmospheric and surface properties that influence it. CERES relies on a number of data sources, including broadband radiometers measuring incoming and reflected solar radiation and OLR, high-resolution spectral imagers, meteorological, aerosol and ozone assimilation data, and snow/sea-ice maps based on microwave radiometer data. While the TOA radiation budget is largely determined directly from accurate broadband radiometer measurements, the surface radiation budget is derived indirectly through radiative transfer model calculations initialized using imager-based cloud and aerosol retrievals and meteorological assimilation data. Because ice cloud particles exhibit a wide range of shapes, sizes and habits that cannot be independently retrieved a priori from passive visible/infrared imager measurements, assumptions about the scattering properties of ice clouds are necessary in order to retrieve ice cloud optical properties (e.g., optical depth) from imager radiances and to compute broadband radiative fluxes. This presentation will examine how the choice of an ice cloud particle model impacts computed shortwave (SW) radiative fluxes at the top-of-atmosphere (TOA) and surface. The ice cloud particle models considered correspond to those from prior, current and future CERES data product versions. During the CERES Edition2 (and Edition3) processing, ice cloud particles were assumed to be smooth hexagonal columns. In the Edition4, roughened hexagonal columns are assumed. The CERES team is now working on implementing in a future version an ice cloud particle model comprised of a two-habit ice cloud model consisting of roughened hexagonal columns and aggregates of roughened columnar elements. In each case, we use the same ice particle model in both the imager-based cloud retrievals (inverse problem) and the computed radiative fluxes (forward calculation). In addition to comparing radiative fluxes using the different ice cloud particle models, we also compare instantaneous TOA flux calculations with those observed by the CERES instrument.
Discrete bivariate population balance modelling of heteroaggregation processes.
Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai
2009-08-15
Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.
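To make the discrete population balance concrete, here is a minimal one-dimensional (size-only) Smoluchowski coagulation step with a constant kernel. The bivariate (size and zeta-potential) property space and the DLVO-dependent kernels of the paper would replace the constant beta; all values are illustrative.

```python
import numpy as np

def coagulation_step(N, beta=1e-15, dt=1.0):
    """One explicit Euler step of the discrete Smoluchowski equation.

    N[k] is the number concentration of clusters containing k+1 primary
    particles. dN_k/dt = 0.5*sum_{i+j=k} beta*N_i*N_j - N_k*sum_j beta*N_j.
    Mass leaving the largest tracked bin is simply lost in this sketch.
    """
    n_bins = len(N)
    gain = np.zeros(n_bins)
    for k in range(n_bins):
        for i in range(k):
            j = k - 1 - i          # pairs of sizes (i+1) and (j+1) summing to k+1
            gain[k] += 0.5 * beta * N[i] * N[j]
    loss = N * beta * N.sum()
    return N + dt * (gain - loss)
```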
Mesoscale Particle-Based Model of Electrophoresis
Giera, Brian; Zepeda-Ruiz, Luis A.; Pascall, Andrew J.; ...
2015-07-31
Here, we develop and evaluate a semi-empirical particle-based model of electrophoresis using extensive mesoscale simulations. We parameterize the model using only measurable quantities from a broad set of colloidal suspensions with properties that span the experimentally relevant regime. With sufficient sampling, simulated diffusivities and electrophoretic velocities match predictions of the ubiquitous Stokes-Einstein and Henry equations, respectively. This agreement holds for non-polar and aqueous solvents or ionic liquid colloidal suspensions under a wide range of applied electric fields.
Reactive multi-particle collision dynamics with reactive boundary conditions
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Rohlf, Katrin
2018-07-01
In the present study, an off-lattice particle-based method called the reactive multi-particle collision (RMPC) dynamics is extended to model reaction-diffusion systems with reactive boundary conditions in which the a priori diffusion coefficient of the particles needs to be maintained throughout the simulation. To this end, the authors have made use of so-called bath particles whose purpose is only to ensure proper diffusion of the main particles in the system. In order to model partial adsorption by a reactive boundary in the RMPC, the probability of a particle being adsorbed, once it hits the boundary, is calculated by drawing an analogy between the RMPC and Brownian dynamics. The main advantages of the RMPC compared to other molecule-based methods are its lower computational cost as well as conservation of mass, energy and momentum in the collision and free-streaming steps. The proposed approach is tested on three reaction-diffusion systems and very good agreement with the solutions to their corresponding partial differential equations is observed.
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.
1998-01-01
The transport properties of galactic cosmic rays (GCR) in the atmosphere, material structures, and human body (self-shielding) are of interest in risk assessment for supersonic and subsonic aircraft and for space travel in low-Earth orbit and on interplanetary missions. Nuclear reactions, such as knockout and fragmentation, present large modifications of particle type and energies of the galactic cosmic rays in penetrating materials. We make an assessment of the current nuclear reaction models and improvements in these models for developing required transport code databases. A new fragmentation database (QMSFRG) based on microscopic models is compared to the NUCFRG2 model, and implications for shield assessment are made using the HZETRN radiation transport code. For deep penetration problems, the build-up of light particles, such as nucleons, light clusters and mesons from nuclear reactions in conjunction with the absorption of the heavy ions, leads to the dominance of the charge Z = 0, 1, and 2 hadrons in the exposures at large penetration depths. Light particles are produced through nuclear or cluster knockout and in evaporation events with characteristically distinct spectra which play unique roles in the build-up of secondary radiation in shielding. We describe models of light particle production in nucleon and heavy ion induced reactions and make an assessment of the importance of light particle multiplicity and spectral parameters in these exposures.
Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.
Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D
2017-02-01
We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many particle limit in presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
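A minimal sketch of the follow-the-leader discretization may help fix ideas: each vehicle carries the Lagrangian ARZ attribute w = v + p(ρ), the discrete density is the mass carried by a particle divided by the headway to the vehicle ahead, and the velocity follows v = w − p(ρ). The Python below uses an illustrative power-law pressure and toy initial data; it is a sketch of the generic scheme, not the authors' construction or convergence setup.

```python
import numpy as np

def ftl_arz_step(x, w, dt, ell, gamma=2.0, v_lead=1.0):
    """One explicit Euler step of a follow-the-leader approximation of ARZ.

    x    : vehicle positions, sorted increasingly
    w    : Lagrangian ARZ attribute w_i = v_i + p(rho_i), conserved along particles
    ell  : length (mass) carried by each particle
    Discretization: rho_i = ell / (x_{i+1} - x_i), v_i = w_i - p(rho_i), p(rho) = rho**gamma
    """
    rho = ell / np.diff(x)                      # densities for all but the leader
    v = np.empty_like(x)
    v[:-1] = w[:-1] - rho ** gamma              # followers
    v[-1] = v_lead                              # leading vehicle moves freely
    return x + dt * v

# toy run: 50 vehicles initially packed behind a free leader
x = np.linspace(0.0, 5.0, 50)
w = np.full(50, 1.5)
for _ in range(200):
    x = ftl_arz_step(x, w, dt=0.01, ell=0.1)
print(x[:5], x[-1])
```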
ReaDDy - A Software for Particle-Based Reaction-Diffusion Dynamics in Crowded Cellular Environments
Schöneberg, Johannes; Noé, Frank
2013-01-01
We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as a sphere, or as a more complex geometry such as a domain structure or polymer chain. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but little-detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that are different from Brownian dynamics. PMID:24040218
NASA Technical Reports Server (NTRS)
Wittenberger, J. D.; Behrendt, D. R.
1973-01-01
Diffusional creep in a polycrystalline alloy containing second-phase particles can disrupt the particle morphology. For alloys which depend on the particle distribution for strength, changes in the particle morphology can affect the mechanical properties. Recent observations of diffusional creep in alloys containing soluble particles (gamma-prime strengthened Ni base alloys) and inert particles have been reexamined in light of the basic mechanisms of diffusional creep, and a generalized model of this effect is proposed. The model indicates that diffusional creep will generally result in particle-free regions in the vicinity of grain boundaries serving as net vacancy sources. The factors which control the changes in second-phase morphology have been identified, and methods of reducing the effects of diffusional creep are suggested.
NASA Astrophysics Data System (ADS)
Gorokhovski, Mikhael; Zamansky, Rémi
2018-03-01
Consistently with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales for the simulation of the drag force on particles in high Reynolds number flows. In this paper, we decompose the instantaneous particle acceleration into its systematic and residual parts. The first part is given by the steady-drag force obtained from the large-scale energy-containing motions, explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. In our paper, the residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled from a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model which is given by a random walk on the unit sphere. We propose specific models for particles with diameters smaller and larger than the Kolmogorov scale. In the case of the small particles, the model is assessed by comparison with direct numerical simulation (DNS). Results showed that by introducing this modeling, the particle acceleration statistics from DNS are predicted fairly well, in contrast with the standard LES approach. For particles bigger than the Kolmogorov scale, we propose a fluctuating particle response time, based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and a variance dependence consistent with experiments.
New Predictions of the Jovian Aurora: Location, Latitudinal Width, and Intensity
NASA Technical Reports Server (NTRS)
Tsurutani, B. T.; Arballo, J. K.; Ho, C. M.; Lin, N. G.; Kellogg, P. J.; Cornileau-Wehrlin, N.; Krupp, N.
1995-01-01
A model/theory for the Jovian aurora is formed based on a similar model for the dayside aurora at Earth and recent Ulysses field and particle measurements at Jupiter. Items discussed are plasma boundary layer, wave-particle resonant interactions, and the model's prediction of the aurora's location, latitudinal width, and intensity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, C.; Potts, I.; Reeks, M. W., E-mail: mike.reeks@ncl.ac.uk
We present a simple stochastic quadrant model for calculating the transport and deposition of heavy particles in a fully developed turbulent boundary layer based on the statistics of wall-normal fluid velocity fluctuations obtained from a fully developed channel flow. Individual particles are tracked through the boundary layer via their interactions with a succession of random eddies found in each of the quadrants of the fluid Reynolds shear stress domain in a homogeneous Markov chain process. In this way, we are able to account directly for the influence of ejection and sweeping events as others have done but without resorting to the use of adjustable parameters. Deposition rates predicted by the model for a wide range of heavy particles compare well with benchmark experimental measurements. In addition, deposition rates are compared with those obtained from continuous random walk models and Langevin-equation-based ejection and sweep models, which give noticeably lower deposition rates. Various statistics related to the particle near-wall behavior are also presented. Finally, we consider the limitations of using the model to calculate deposition in more complex flows where the near-wall turbulence may be significantly different.
Multiple new-particle growth pathways observed at the US DOE Southern Great Plains field site
Hodshire, Anna L.; Lawler, Michael J.; Zhao, Jun; ...
2016-07-28
New-particle formation (NPF) is a significant source of aerosol particles into the atmosphere. However, these particles are initially too small to have climatic importance and must grow, primarily through net uptake of low-volatility species, from diameters ∼ 1 to 30–100 nm in order to potentially impact climate. There are currently uncertainties in the physical and chemical processes associated with the growth of these freshly formed particles that lead to uncertainties in aerosol-climate modeling. Four main pathways for new-particle growth have been identified: condensation of sulfuric-acid vapor (and associated bases when available), condensation of organic vapors, uptake of organic acids through acid–base chemistry in the particle phase, and accretion of organic molecules in the particle phase to create a lower-volatility compound that then contributes to the aerosol mass. The relative importance of each pathway is uncertain and is the focus of this work. The 2013 New Particle Formation Study (NPFS) measurement campaign took place at the DOE Southern Great Plains (SGP) facility in Lamont, Oklahoma, during spring 2013. Measured gas- and particle-phase compositions during these new-particle growth events suggest three distinct growth pathways: (1) growth by primarily organics, (2) growth by primarily sulfuric acid and ammonia, and (3) growth by primarily sulfuric acid and associated bases and organics. To supplement the measurements, we used the particle growth model MABNAG (Model for Acid–Base chemistry in NAnoparticle Growth) to gain further insight into the growth processes on these 3 days at SGP. MABNAG simulates growth from (1) sulfuric-acid condensation (and subsequent salt formation with ammonia or amines), (2) near-irreversible condensation from nonreactive extremely low-volatility organic compounds (ELVOCs), and (3) organic-acid condensation and subsequent salt formation with ammonia or amines. MABNAG is able to corroborate the observed differing growth pathways, while also predicting that ELVOCs contribute more to growth than organic salt formation. However, most MABNAG model simulations tend to underpredict the observed growth rates between 10 and 20 nm in diameter; this underprediction may come from neglecting the contributions to growth from semi-to-low-volatility species or accretion reactions. Our results suggest that in addition to sulfuric acid, ELVOCs are also very important for growth in this rural setting. We discuss the limitations of our study that arise from not accounting for semi- and low-volatility organics, as well as nitrogen-containing species beyond ammonia and amines in the model. Quantitatively understanding the overall budget, evolution, and thermodynamic properties of lower-volatility organics in the atmosphere will be essential for improving global aerosol models.
Landsmann, Steve; Maegli, Alexandra E; Trottmann, Matthias; Battaglia, Corsin; Weidenkaff, Anke; Pokrant, Simone
2015-10-26
Semiconductor powders are perfectly suited for the scalable fabrication of particle-based photoelectrodes, which can be used to split water using the sun as a renewable energy source. This systematic study is focused on variation of the electrode design using LaTiO2N as a model system. We present the influence of particle morphology on charge separation and transport properties combined with post-treatment procedures, such as necking and size-dependent co-catalyst loading. Five rules are proposed to guide the design of high-performance particle-based photoanodes by adding or varying several process steps. We also specify how much efficiency improvement can be achieved using each of the steps. For example, implementation of a connectivity network and surface area enhancement leads to a thirty-fold improvement in efficiency, and co-catalyst loading achieves an improvement in efficiency by a factor of seven. Some of these guidelines can be adapted to non-particle-based photoelectrodes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model
NASA Astrophysics Data System (ADS)
Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi
2018-02-01
Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
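The transport step of such a Monte Carlo code amounts to integrating stochastic differential equations for each pseudo-particle. The sketch below is a generic Euler–Maruyama integration with constant radial and cross-field (longitudinal) diffusion coefficients in arbitrary units; the coefficient values, geometry and field description are placeholders for illustration, not those of the iPATH model.

```python
import numpy as np

rng = np.random.default_rng(0)

def transport_step(r, phi, dt, v_sw=400.0, kappa_rr=1e-3, kappa_pp=1e-4):
    """Euler-Maruyama step for pseudo-particles advected outward at speed v_sw
    while diffusing radially (kappa_rr) and across the field in longitude (kappa_pp).
    Units are arbitrary; coefficients are illustrative only."""
    r_new = r + v_sw * dt + np.sqrt(2.0 * kappa_rr * dt) * rng.standard_normal(r.shape)
    phi_new = phi + np.sqrt(2.0 * kappa_pp * dt) / r * rng.standard_normal(r.shape)
    return r_new, phi_new

# release 10^4 pseudo-particles near the shock and follow them outward
r = np.full(10_000, 0.1)           # heliocentric distance
phi = np.zeros(10_000)             # longitude (radians)
for _ in range(1000):
    r, phi = transport_step(r, phi, dt=1e-5)
print(r.mean(), np.degrees(phi.std()))
```

Time-intensity profiles at a given observer would then be accumulated by counting pseudo-particles crossing that observer's radial distance and longitude bin.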
Multidimensional Multiphysics Simulation of TRISO Particle Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Hales; R. L. Williamson; S. R. Novascone
2013-11-01
Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the ease of coupling to lower-length-scale simulations make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.
Engelhardt, Lucas; Röhm, Martina; Mavoungou, Chrystelle; Schindowski, Katharina; Schafmeister, Annette; Simon, Ulrich
2016-06-01
Aerosol particle deposition in the human nasal cavity is of high interest, in particular for intranasal central nervous system (CNS) drug delivery via the olfactory cleft. The objective of this study was the development and comparison of a numerical and an experimental model to investigate various parameters for olfactory particle deposition within the complex anatomical nasal geometry. Based on a standardized nasal cavity, a computational fluid and particle dynamics (CFPD) model was developed that enables the variation and optimization of different parameters, and was validated by in vitro experiments using a rapid-prototyped human nose model. For various flow rates (5 to 40 l/min) and particle sizes (1 to 10 μm), the airflow velocities, the calculated particle airflow patterns and the particle deposition correlated very well with the experiment. Particle deposition was investigated numerically by varying particle sizes at constant flow rate and vice versa, assuming the particle size distribution of the nebulizer used. The developed CFPD model could be directly translated to the in vitro results. Hence, it can be applied for parameter screening and will contribute to the improvement of aerosol particle deposition at the olfactory cleft for CNS drug delivery, in particular for biopharmaceuticals.
Modeling and Simulation of Cardiogenic Embolic Particle Transport to the Brain
NASA Astrophysics Data System (ADS)
Mukherjee, Debanjan; Jani, Neel; Shadden, Shawn C.
2015-11-01
Emboli are aggregates of cells, proteins, or fatty material, which travel along arteries distal to the point of their origin, and can potentially block blood flow to the brain, causing stroke. This is a prominent mechanism of stroke, accounting for about a third of all cases, with the heart being a prominent source of these emboli. This work presents our investigations towards developing numerical simulation frameworks for modeling the transport of embolic particles originating from the heart along the major arteries supplying the brain. The simulations are based on combining discrete particle method with image based computational fluid dynamics. Simulations of unsteady, pulsatile hemodynamics, and embolic particle transport within patient-specific geometries, with physiological boundary conditions, are presented. The analysis is focused on elucidating the distribution of particles, transport of particles in the head across the major cerebral arteries connected at the Circle of Willis, the role of hemodynamic variables on the particle trajectories, and the effect of considering one-way vs. two-way coupling methods for the particle-fluid momentum exchange. These investigations are aimed at advancing our understanding of embolic stroke using computational fluid dynamics techniques. This research was supported by the American Heart Association grant titled ``Embolic Stroke: Anatomic and Physiologic Insights from Image-Based CFD.''
NASA Astrophysics Data System (ADS)
Vecil, Francesco; Lafitte, Pauline; Rosado Linares, Jesús
2013-10-01
We study at particle and kinetic level a collective behavior model based on three phenomena: self-propulsion, friction (Rayleigh effect) and an attractive/repulsive (Morse) potential rescaled so that the total mass of the system remains constant independently of the number of particles N. In the first part of the paper, we introduce the particle model: the agents are numbered and described by their position and velocity. We identify five parameters that govern the possible asymptotic states for this system (clumps, spheres, dispersion, mills, rigid-body rotation, flocks) and perform a numerical analysis on the 3D setting. Then, in the second part of the paper, we describe the kinetic system derived as the limit from the particle model as N tends to infinity; we propose, in 1D, a numerical scheme for the simulations, and perform a numerical analysis devoted to trying to recover asymptotically patterns similar to those emerging for the equivalent particle systems, when particles originally evolved on a circle.
Rengasamy, Samy; Miller, Adam; Eimer, Benjamin C
2011-01-01
N95 particulate filtering facepiece respirators are certified by measuring penetration levels photometrically with a presumed severe case test method using charge neutralized NaCl aerosols at 85 L/min. However, penetration values obtained by photometric methods have not been compared with count-based methods using contemporary respirators composed of electrostatic filter media and challenged with both generated and ambient aerosols. To better understand the effects of key test parameters (e.g., particle charge, detection method), initial penetration levels for five N95 model filtering facepiece respirators were measured using NaCl aerosols with the aerosol challenge and test equipment employed in the NIOSH respirator certification method (photometric) and compared with an ultrafine condensation particle counter method (count based) for the same NaCl aerosols as well as for ambient room air particles. Penetrations using the NIOSH test method were several-fold less than the penetrations obtained by the ultrafine condensation particle counter for NaCl aerosols as well as for room particles indicating that penetration measurement based on particle counting offers a more difficult challenge than the photometric method, which lacks sensitivity for particles < 100 nm. All five N95 models showed the most penetrating particle size around 50 nm for room air particles with or without charge neutralization, and at 200 nm for singly charged NaCl monodisperse particles. Room air with fewer charged particles and an overwhelming number of neutral particles contributed to the most penetrating particle size in the 50 nm range, indicating that the charge state for the majority of test particles determines the MPPS. Data suggest that the NIOSH respirator certification protocol employing the photometric method may not be a more challenging aerosol test method. Filter penetrations can vary among workplaces with different particle size distributions, which suggests the need for the development of new or revised "more challenging" aerosol test methods for NIOSH certification of respirators.
Bayesian approach to MSD-based analysis of particle motion in live cells.
Monnier, Nilah; Guo, Syuan-Ming; Mori, Masashi; He, Jun; Lénárt, Péter; Bathe, Mark
2012-08-08
Quantitative tracking of particle motion using live-cell imaging is a powerful approach to understanding the mechanism of transport of biological molecules, organelles, and cells. However, inferring complex stochastic motion models from single-particle trajectories in an objective manner is nontrivial due to noise from sampling limitations and biological heterogeneity. Here, we present a systematic Bayesian approach to multiple-hypothesis testing of a general set of competing motion models based on particle mean-square displacements that automatically classifies particle motion, properly accounting for sampling limitations and correlated noise while appropriately penalizing model complexity according to Occam's Razor to avoid over-fitting. We test the procedure rigorously using simulated trajectories for which the underlying physical process is known, demonstrating that it chooses the simplest physical model that explains the observed data. Further, we show that computed model probabilities provide a reliability test for the downstream biological interpretation of associated parameter values. We subsequently illustrate the broad utility of the approach by applying it to disparate biological systems including experimental particle trajectories from chromosomes, kinetochores, and membrane receptors undergoing a variety of complex motions. This automated and objective Bayesian framework easily scales to large numbers of particle trajectories, making it ideal for classifying the complex motion of large numbers of single molecules and cells from high-throughput screens, as well as single-cell-, tissue-, and organism-level studies. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
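A stripped-down version of this kind of MSD-based model comparison can be sketched as follows: compute the time-averaged MSD of a trajectory, fit competing motion models, and score them with an information criterion. The Python below uses BIC as a crude stand-in for the full Bayesian evidence and therefore ignores the correlated-noise treatment described above; the trajectory, models and parameter values are synthetic illustrations.

```python
import numpy as np
from scipy.optimize import curve_fit

def msd(track):
    """Time-averaged mean-square displacement of a (T, d) trajectory."""
    T = len(track)
    lags = np.arange(1, T // 4)
    return lags, np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1))
                           for l in lags])

def bic(n, k, rss):
    """Bayesian information criterion for a least-squares fit with k parameters."""
    return n * np.log(rss / n) + k * np.log(n)

# hypothetical 2-D trajectory: diffusion (D) plus a weak directed flow (v)
rng = np.random.default_rng(1)
dt, D, v = 0.1, 0.05, 0.2
steps = rng.normal(0, np.sqrt(2 * D * dt), (500, 2)) + v * dt * np.array([1.0, 0.0])
track = np.cumsum(steps, axis=0)

lags, m = msd(track)
t = lags * dt
diff = lambda t, D: 4 * D * t                       # pure diffusion, 2-D
flow = lambda t, D, v: 4 * D * t + (v * t) ** 2     # diffusion + directed flow
for name, model, k in [("diffusion", diff, 1), ("diffusion+flow", flow, 2)]:
    p, _ = curve_fit(model, t, m)
    rss = np.sum((m - model(t, *p)) ** 2)
    print(name, p, "BIC:", round(bic(len(t), k, rss), 1))
```

The lower BIC identifies the simpler model that still explains the observed MSD, mirroring the Occam's-Razor behaviour described above.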
Internet Based Simulations of Debris Dispersion of Shuttle Launch
NASA Technical Reports Server (NTRS)
Bardina, Jorge; Thirumalainambi, Rajkumar
2004-01-01
Because debris dispersion is heterogeneous and interrelated with various factors, 3D graphics combined with physical models are useful in understanding the complexity of launch and range operations. Modeling and simulation in this area mainly focuses on orbital dynamics and range safety concepts, including destruct limits, telemetry and tracking, and population risk. Particle explosion modeling is the process of simulating an explosion by breaking the rocket into many pieces. The particles are scattered, their motion governed by the laws of physics, until they eventually come to rest. The size of the footprint indicates the type of explosion and the distribution of the particles. The shuttle launch and range operations in this paper are discussed based on the operations of the Kennedy Space Center, Florida, USA. Java 3D graphics provides geometric and visual content with suitable modeling behaviors of Shuttle launches.
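The particle-explosion idea can be illustrated with a very small ballistic sketch: fragments receive random breakup velocities and are integrated to ground impact to obtain a footprint. The Python below neglects aerodynamic drag and uses made-up altitude, breakup-velocity and wind numbers, so it is only a qualitative illustration of how footprint size reflects the breakup conditions, not the simulation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
g = 9.81

def debris_footprint(n_pieces, h0, v_break, wind=(0.0, 0.0)):
    """Ballistic ground footprint of a breakup at altitude h0 [m].

    Each fragment gets an isotropic random velocity with magnitude up to
    v_break [m/s] plus a horizontal wind drift; air drag is neglected, so
    the result is only a rough indication of the dispersion."""
    v = rng.normal(size=(n_pieces, 3))
    v *= (v_break * rng.random(n_pieces) / np.linalg.norm(v, axis=1))[:, None]
    v[:, :2] += wind
    # time to reach the ground: h0 + vz*t - g*t^2/2 = 0 (positive root)
    t = (v[:, 2] + np.sqrt(v[:, 2] ** 2 + 2 * g * h0)) / g
    return v[:, :2] * t[:, None]                   # horizontal impact coordinates

xy = debris_footprint(5000, h0=2000.0, v_break=150.0, wind=(10.0, 0.0))
print("footprint radius ~", np.percentile(np.linalg.norm(xy, axis=1), 95), "m")
```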
NASA Astrophysics Data System (ADS)
Liu, Haitao
The objective of the present study is to investigate damage mechanisms and thermal residual stresses of composites, and to establish frameworks to model particle-reinforced metal matrix composites with particle-matrix interfacial debonding, particle cracking or thermal residual stresses. An evolutionary interfacial debonding model is proposed for composites with spheroidal particles. The construction of the equivalent stiffness is based on the fact that when debonding occurs in a certain direction, the load-transfer ability is lost in that direction. By using this equivalent method, the interfacial debonding problem can be converted into a composite problem with perfectly bonded inclusions. Considering that interfacial debonding is a progressive process in which the debonding area increases in proportion to external loading, a progressive interfacial debonding model is proposed. In this model, the relation between external loading and the debonding area is established using a normal-stress-controlled debonding criterion. Furthermore, an equivalent orthotropic stiffness tensor is constructed based on the debonding areas. This model is able to study composites with randomly distributed spherical particles. The double-inclusion theory is recalled to model particle cracking problems. Cracks inside particles are treated as penny-shaped particles with zero stiffness. The disturbed stress field due to the existence of a double-inclusion is expressed explicitly. Finally, a thermal mismatch eigenstrain is introduced to simulate the inconsistent expansions of the matrix and the particles due to the difference in their coefficients of thermal expansion. Micromechanical stress and strain fields are calculated due to the combination of applied external loads and the prescribed thermal mismatch eigenstrains. For all of the above models, ensemble-volume averaging procedures are employed to derive the effective yield function of the composites. Numerical simulations are performed to analyze the effects of various parameters, and good agreement between the model predictions and experimental results is obtained in several cases. It should be mentioned that all of the expressions in the frameworks are derived explicitly, and these analytical results are easy to adopt in other related investigations.
NASA Astrophysics Data System (ADS)
Kang, D.; Apel, W. D.; Arteaga-Velazquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schroder, F.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.
2013-02-01
KASCADE-Grande is a large detector array for observations of the energy spectrum as well as the chemical composition of cosmic ray air showers up to primary energies of 1 EeV. The multi-detector arrangement allows measurement of the electromagnetic and muonic components for individual air showers. In this analysis, the reconstruction of the all-particle energy spectrum is based on the size spectra of the charged particle component. The energy is calibrated by using Monte Carlo simulations performed with CORSIKA and the high-energy interaction models QGSJet, EPOS and SIBYLL. In all cases FLUKA has been used as the low-energy interaction model. In this contribution the resulting spectra obtained with the different hadronic interaction models will be compared and discussed.
Research on mining truck vibration control based on particle damping
NASA Astrophysics Data System (ADS)
Liming, Song; Wangqiang, Xiao; Zeguang, Li; Haiquan, Guo; Zhe, Yang
2018-03-01
Mining truck driving comfort has attracted increasing attention. As the terminal of the vibration transfer path, the cab is an important target for mining truck vibration control. In this paper, based on particle damping technology and its application characteristics, particle damping was successfully applied to the driver's seat base of a mining truck through discrete element modeling, coupled DEM and FEM simulation and analysis, laboratory test verification and in-truck testing. Cab vibration was reduced significantly, and an applied research approach and method for particle damping technology in mining truck vibration control are provided.
Price Formation Based on Particle-Cluster Aggregation
NASA Astrophysics Data System (ADS)
Wang, Shijun; Zhang, Changshui
In the present work, we propose a microscopic model of financial markets based on particle-cluster aggregation on a two-dimensional small-world information network in order to simulate the dynamics of the stock markets. "Stylized facts" of the financial market time series, such as fat-tail distribution of returns, volatility clustering and multifractality, are observed in the model. The results of the model agree with empirical data taken from historical records of the daily closures of the NYSE composite index.
Boss, Emmanuel; Slade, Wayne; Hill, Paul
2009-05-25
Marine aggregates, agglomerations of particles and dissolved materials, are an important particulate pool in aquatic environments, but their optical properties are not well understood. To improve understanding of the optical properties of aggregates, two related studies are presented. In the first, an in situ manipulation experiment is described, in which the beam attenuation of undisturbed and sheared suspensions is compared. Results show that in the sheared treatment bulk particle size decreases and beam attenuation increases, consistent with the hypothesis that a significant fraction of mass in suspension is contained in fragile aggregates. Interestingly, the magnitude of increase in beam attenuation is less than expected if the aggregates are modeled as solid spheres. Motivated by this result, a second study is presented, in which marine aggregates are modeled to assess how the beam attenuation of aggregates differs from that of their constituent particles and from solid particles of the same mass. The model used is based on that of Latimer [Appl. Opt. 24, 3231 (1985)] and mass specific attenuation is compared with that based on homogeneous and solid particles, the standard model for aquatic particles. In the modeling we use recent research relating size and solid fraction of aquatic aggregates. In contrast with Mie theory, this model provides a rather size-insensitive mass specific attenuation for most relevant sizes. This insensitivity is consistent with the observations that mass specific beam-attenuation of marine particles is in the range 0.2–0.6 m²/g despite large variability in size distribution and composition across varied aquatic environments.
Modeling magnetic field amplification in nonlinear diffusive shock acceleration
NASA Astrophysics Data System (ADS)
Vladimirov, Andrey
2009-02-01
This research was motivated by the recent observations indicating very strong magnetic fields at some supernova remnant shocks, which suggests in-situ generation of magnetic turbulence. The dissertation presents a numerical model of collisionless shocks with strong amplification of stochastic magnetic fields, self-consistently coupled to efficient shock acceleration of charged particles. Based on a Monte Carlo simulation of particle transport and acceleration in nonlinear shocks, the model describes magnetic field amplification using the state-of-the-art analytic models of instabilities in magnetized plasmas in the presence of non-thermal particle streaming. The results help one understand the complex nonlinear connections between the thermal plasma, the accelerated particles and the stochastic magnetic fields in strong collisionless shocks. Also, predictions regarding the efficiency of particle acceleration and magnetic field amplification, the impact of magnetic field amplification on the maximum energy of accelerated particles, and the compression and heating of the thermal plasma by the shocks are presented. Particle distribution functions and turbulence spectra derived with this model can be used to calculate the emission of observable nonthermal radiation.
Notter, Dominic A
2015-09-01
Particulate matter (PM) causes severe damage to human health globally. Airborne PM is a mixture of solid and liquid droplets suspended in air. It consists of organic and inorganic components, and the particles of concern range in size from a few nanometers to approximately 10μm. The complexity of PM is considered to be the reason for the poor understanding of PM and may also be the reason why PM in environmental impact assessment is poorly defined. Currently, life cycle impact assessment is unable to differentiate highly toxic soot particles from relatively harmless sea salt. The aim of this article is to present a new impact assessment for PM where the impact of PM is modeled based on particle physico-chemical properties. With the new method, 2781 characterization factors that account for particle mass, particle number concentration, particle size, chemical composition and solubility were calculated. Because particle sizes vary over four orders of magnitudes, a sound assessment of PM requires that the exposure model includes deposition of particles in the lungs and that the fate model includes coagulation as a removal mechanism for ultrafine particles. The effects model combines effects from particle size, solubility and chemical composition. The first results from case studies suggest that PM that stems from emissions generally assumed to be highly toxic (e.g. biomass combustion and fossil fuel combustion) might lead to results that are similar compared with an assessment of PM using established methods. However, if harmless PM emissions are emitted, established methods enormously overestimate the damage. The new impact assessment allows a high resolution of the damage allocatable to different size fractions or chemical components. This feature supports a more efficient optimization of processes and products when combating air pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Stock, Eduardo Velasco; da Silva, Roberto; Fernandes, H. A.
2017-07-01
In this paper, we propose a stochastic model which describes two species of particles moving in counterflow. The model generalizes the theoretical framework that describes the transport in random systems by taking into account two different scenarios: particles can work as mobile obstacles, whereas particles of one species move in the opposite direction to the particles of the other species, or particles of a given species work as fixed obstacles remaining in their places during the time evolution. We conduct a detailed study of the statistics of particle crossing times, as well as the effects of the lateral transitions on the time required for the system to reach a state of complete geographic separation of species. The spatial effects of jamming are also studied by looking into the deformation of the concentration of particles in the two-dimensional corridor. Finally, we observe in our study the formation of patterns of lanes which reach the steady state regardless of the initial conditions used for the evolution. A similar result is also observed in real experiments involving charged colloid motion and simulations of pedestrian dynamics based on Langevin equations, when periodic boundary conditions are considered (particles counterflow in a ring symmetry). The results obtained through Monte Carlo simulations and numerical integrations are in good agreement with each other. However, unlike previous studies, the dynamics considered in this work is not Newton-based, and therefore even artificial situations of self-propelled objects can be studied within this first-principles modeling.
Multiscale Micromechanical Modeling of Polymer/Clay Nanocomposites and the Effective Clay Particle
NASA Astrophysics Data System (ADS)
Sheng, Nuo; Boyce, Mary C.; Parks, David M.; Manovitch, Oleg; Rutledge, Gregory C.; Lee, Hojun; McKinley, Gareth H.
2003-03-01
Polymer/clay nanocomposites have been observed to exhibit enhanced mechanical properties at low weight fractions (Wp) of clay. Continuum-based composite modeling reveals that the enhanced properties are strongly dependent on particular features of the second-phase "particles"; in particular, the particle volume fraction (fp), the particle aspect ratio (L/t), and the ratio of particle mechanical properties to those of the matrix. However, these important aspects of as-processed nanoclay composites have yet to be consistently and accurately defined. A multiscale modeling strategy was developed to account for the hierarchical morphology of the nanocomposite: at a lengthscale of thousands of microns, the structure is one of high aspect ratio particles within a matrix; at the lengthscale of microns, the clay particle structure is either (a) exfoliated clay sheets of nanometer level thickness or (b) stacks of parallel clay sheets separated from one another by interlayer galleries of nanometer level height. Here, quantitative structural parameters extracted from XRD patterns and TEM micrographs are used to determine geometric features of the as-processed clay "particles", including L/t and the ratio of fp to Wp. These geometric features, together with estimates of silicate lamina stiffness obtained from molecular dynamics simulations, provide a basis for modeling effective mechanical properties of the clay particle. The structure-based predictions of the macroscopic elastic modulus of the nanocomposite as a function of clay weight fraction are in excellent agreement with experimental data. The adopted methodology offers promise for study of related properties in polymer/clay nanocomposites.
The structure of particle-laden jets and nonevaporating sprays
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Zhang, Q. F.; Faeth, G. M.
1983-01-01
Mean and fluctuating gas velocities, liquid mass fluxes and drop sizes were measured in nonevaporating sprays. These results, as well as existing measurements in solid particle-laden jets, were used to evaluate models of these processes. The following models were considered: (1) a locally homogeneous flow (LHF) model, where slip between the phases was neglected; (2) a deterministic separated flow (DSF) model, where slip was considered but effects of particle dispersion by turbulence were ignored; and (3) a stochastic separated flow (SSF) model, where effects of interphase slip and turbulent dispersion were considered using random-walk computations for particle motion. The LHF and DSF models did not provide very satisfactory predictions over the present data base. In contrast, the SSF model performed reasonably well - including conditions in nonevaporating sprays where enhanced dispersion of particles by turbulence caused the spray to spread more rapidly than single-phase jets for comparable conditions. While these results are encouraging, uncertainties in initial conditions limit the reliability of the evaluation. Current work is seeking to eliminate this deficiency.
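The SSF (eddy-interaction) idea can be sketched compactly: the particle relaxes towards the local mean gas velocity plus a random eddy fluctuation that is redrawn whenever the eddy lifetime expires. The Python below is a one-dimensional toy with illustrative turbulence parameters and a simple Stokes response time; it is a generic random-walk sketch, not the original code.

```python
import numpy as np

rng = np.random.default_rng(3)

def ssf_track(u_mean, k, eps, tau_p, dt, n_steps, c_mu=0.09):
    """Stochastic separated flow (eddy-interaction) tracking of one particle.

    u_mean : mean gas velocity [m/s]
    k, eps : turbulence kinetic energy [m^2/s^2] and dissipation rate [m^2/s^3]
    tau_p  : particle response time [s]
    A new eddy fluctuation u' ~ N(0, 2k/3) is drawn whenever the current eddy
    lifetime expires; the particle relaxes towards u_mean + u' via Stokes drag."""
    sigma = np.sqrt(2.0 * k / 3.0)
    t_eddy = c_mu ** 0.75 * k ** 1.5 / eps / sigma   # eddy length / sigma ~ lifetime
    v, x, u_fluc, t_in_eddy = u_mean, 0.0, sigma * rng.standard_normal(), 0.0
    xs = []
    for _ in range(n_steps):
        if t_in_eddy > t_eddy:
            u_fluc, t_in_eddy = sigma * rng.standard_normal(), 0.0
        v += dt * (u_mean + u_fluc - v) / tau_p      # linear (Stokes) drag relaxation
        x += dt * v
        t_in_eddy += dt
        xs.append(x)
    return np.array(xs)

x = ssf_track(u_mean=10.0, k=1.5, eps=30.0, tau_p=5e-3, dt=1e-4, n_steps=5000)
print("distance travelled:", x[-1])
```

Averaging many such trajectories reproduces the enhanced turbulent dispersion that the deterministic separated flow model misses.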
Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.
2007-01-01
Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
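A minimal sketch of this framework, under the typical assumptions listed above (spherical particles, cylindrical pores with volume equal to the associated particle volume times the void ratio, capillary relation between pore radius and matric pressure, and a fixed pore length equal to the particle diameter), is given below. The particle-size classes and surface tension value are illustrative, and ψ is reported as a pressure head; this is not the authors' exact algorithm.

```python
import numpy as np

def retention_from_psd(radii, mass_frac, void_ratio,
                       sigma=0.072, rho_w=1000.0, g=9.81):
    """Discrete water-retention curve from a particle-size distribution.

    Assumes spherical particles, cylindrical pores with volume =
    void_ratio * particle volume and length equal to the particle diameter,
    and the capillary law psi = 2*sigma / (rho_w * g * r_pore) (pressure head).
    Returns (psi from finest to coarsest pores, corresponding water content)."""
    r = np.asarray(radii, float)
    f = np.asarray(mass_frac, float) / np.sum(mass_frac)
    v_particle = 4.0 / 3.0 * np.pi * r ** 3
    r_pore = np.sqrt(void_ratio * v_particle / (np.pi * 2.0 * r))  # V = pi*rp^2*(2R)
    psi = 2.0 * sigma / (rho_w * g * r_pore)        # drainage head of each class [m]
    order = np.argsort(psi)[::-1]                   # finest pores (highest psi) first
    porosity = void_ratio / (1.0 + void_ratio)
    theta = porosity * np.cumsum(f[order])          # content while class is still filled
    return psi[order], theta

# hypothetical four-class soil: particle radii [m] and mass fractions
psi, theta = retention_from_psd([1e-3, 1e-4, 1e-5, 1e-6], [0.4, 0.3, 0.2, 0.1], 0.6)
print(np.c_[psi, theta])
```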
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements to include the correct error probability density functions (pdfs) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all of this information, where the weight of each particle is computed based on the specific models derived. The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement in horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918
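The weight-update step that such error modelling feeds into can be sketched as follows. The Python below is a generic sequential-importance-resampling filter for 2-D dead reckoning in which the measurement-error pdf is a heavy-tailed Student-t, standing in for fitted non-Gaussian error models; the motion model, noise levels and measurement values are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def pf_step(particles, weights, control, meas, meas_pdf, q_std=0.05):
    """One sequential-importance-resampling step for 2-D dead reckoning.

    particles : (N, 2) position hypotheses
    control   : measured displacement from IMU / visual odometry (2,)
    meas      : absolute position fix (2,)
    meas_pdf  : callable giving the density of the measurement-error magnitude
    """
    # propagate with the motion measurement plus process noise
    particles = particles + control + rng.normal(0.0, q_std, particles.shape)
    # weight by the (possibly heavy-tailed) measurement-error density
    weights = weights * meas_pdf(np.linalg.norm(particles - meas, axis=1))
    weights /= weights.sum()
    # multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# heavy-tailed error model (Student-t) as a stand-in for the fitted pdfs
error_pdf = lambda r: stats.t.pdf(r, df=3, scale=0.5)
N = 2000
particles = rng.normal(0.0, 1.0, (N, 2))
weights = np.full(N, 1.0 / N)
particles, weights = pf_step(particles, weights,
                             control=np.array([0.4, 0.1]),
                             meas=np.array([0.5, 0.0]),
                             meas_pdf=error_pdf)
print("position estimate:", np.average(particles, axis=0, weights=weights))
```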
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by two orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
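For comparison, the EKF recursion itself is only a handful of algebraic updates per measurement, which is where the large cost reduction comes from. The sketch below is a scalar EKF for a random-walk parameter observed through a nonlinear function; the state, measurement model and noise levels are toy values for illustration, not the reduced-order vocal fold model.

```python
import numpy as np

def ekf_step(x, P, z, h, h_jac, Q=1e-4, R=1e-2):
    """Extended Kalman filter step for a scalar random-walk parameter.

    x, P     : state estimate and its variance
    z        : new measurement
    h, h_jac : nonlinear measurement function and its derivative
    """
    # predict: random-walk state model x_k = x_{k-1} + w, w ~ N(0, Q)
    P = P + Q
    # update: linearize h about the predicted state
    H = h_jac(x)
    S = H * P * H + R
    K = P * H / S
    x = x + K * (z - h(x))
    P = (1.0 - K * H) * P
    return x, P

# toy tracking of a slowly varying parameter observed through h(x) = x**2
rng = np.random.default_rng(0)
truth, x, P = 1.0, 0.5, 1.0
for _ in range(50):
    truth += rng.normal(0, 1e-2)
    z = truth ** 2 + rng.normal(0, 0.1)
    x, P = ekf_step(x, P, z, h=lambda v: v ** 2, h_jac=lambda v: 2 * v)
print("estimate:", x, "truth:", truth)
```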
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on random selection among nearest-neighbor particles and deterministic selection of the nearest-neighbor particle have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effects of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
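The nearest-neighbour selection that the new schemes are compared against can be sketched in a few lines: pick the first particle of a pair at random within the cell, take its nearest neighbour as the partner, then accept the collision with the usual probability proportional to relative speed (hard-sphere). The Python below is such a generic sketch with toy particle data; it does not implement the time-spacing or movement-direction schemes proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def select_pairs_nearest(pos, n_pairs):
    """Candidate pairs: first particle chosen at random, partner = nearest neighbour."""
    pairs = []
    for _ in range(n_pairs):
        i = rng.integers(len(pos))
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf                              # exclude self
        pairs.append((i, int(np.argmin(d))))
    return pairs

def collide(vel, pairs, cr_max):
    """Accept each candidate with probability cr / cr_max (hard-sphere NTC-style test),
    then assign isotropic post-collision velocities for equal-mass particles."""
    for i, j in pairs:
        cr = np.linalg.norm(vel[i] - vel[j])
        if rng.random() < cr / cr_max:
            vcm = 0.5 * (vel[i] + vel[j])
            cos_t = 2.0 * rng.random() - 1.0       # isotropic scattering direction
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            phi = 2.0 * np.pi * rng.random()
            vr = cr * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            vel[i], vel[j] = vcm + 0.5 * vr, vcm - 0.5 * vr
    return vel

# toy cell: 100 particles, positions in a 1 mm cube, thermal-like velocities
pos = rng.random((100, 3)) * 1e-3
vel = rng.normal(0.0, 300.0, (100, 3))
vel = collide(vel, select_pairs_nearest(pos, n_pairs=20), cr_max=2000.0)
print(vel.mean(axis=0))
```

Momentum and energy are conserved per accepted collision because the centre-of-mass velocity and the relative speed are preserved.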
NASA Astrophysics Data System (ADS)
Islam, Mohammad S.; Saha, Suvash C.; Sauret, Emilie; Gu, Y. T.; Molla, Md Mamun
2017-06-01
Diesel exhaust particulate matter (DEPM) is a complex mixture of gases and fine particles that contains more than 40 toxic air pollutants including benzene, formaldehyde, and nitrogen oxides. Exposure of the human lung airway to DEPM during inhalation causes severe health hazards, including diverse pulmonary diseases. This paper studies DEPM transport and deposition in the upper three generations of realistic lung airways. A 3-D digital airway bifurcation model is constructed from the computerized tomography (CT) scan data of a healthy adult man. The Euler-Lagrange approach is used to solve the continuum and disperse phases of the calculation. Locally averaged Navier-Stokes equations are solved to calculate the transport of the continuum phase. A Lagrangian-based Discrete Phase Model (DPM) is used to investigate the particle transport and deposition in the current anatomical model. The effects of size-specific monodispersed particles on deposition are extensively investigated during different breathing patterns. The numerical results illustrate that particle diameter and breathing pattern have a substantial impact on particle transport and deposition in the tracheobronchial airways. The present realistic bifurcation model also reveals a new deposition hot spot, which could advance the understanding of targeted therapeutic drug delivery to specific positions in the respiratory airways.
NASA Technical Reports Server (NTRS)
Lee, I. Y.; Haenel, G.; Pruppacher, H. R.
1980-01-01
The time variation in size of aerosol particles growing by condensation is studied numerically by means of an air parcel model which allows entrainment of air and aerosol particles. Particles of four types of aerosols typically occurring in atmospheric air masses were considered. The present model circumvents any assumption about the size distribution and chemical composition of the aerosol particles by basing the aerosol particle growth on actually observed size distributions and on observed amounts of water taken up under equilibrium by a deposit of the aerosol particles. Characteristic differences in the drop size distribution, liquid water content and supersaturation were found for the clouds which evolved from the four aerosol types considered.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
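A compact sketch of PSO with maximin-based leader selection is given below: each particle's maximin fitness is its worst-case advantage over the rest of the swarm taken over all objectives, so negative values flag non-dominated particles and the smallest value identifies the leader. The Python uses a toy bi-objective test function and deliberately simplified personal-best bookkeeping; it is not the tomographic objective functions or the exact update rules of the approach described above.

```python
import numpy as np

rng = np.random.default_rng(5)

def maximin(F):
    """Maximin fitness for an objective matrix F of shape (n_particles, n_objectives):
    fit_i = max over rivals j of (min over objectives of F_i - F_j)."""
    fit = np.empty(len(F))
    for i in range(len(F)):
        others = np.delete(F, i, axis=0)
        fit[i] = np.max(np.min(F[i] - others, axis=1))
    return fit

def pso_maximin(objectives, dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-2.0, 2.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_fit = x.copy(), np.full(n_particles, np.inf)
    for _ in range(iters):
        F = np.column_stack([f(x) for f in objectives])
        fit = maximin(F)
        improved = fit < pbest_fit                    # simplified personal-best update
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        leader = pbest[np.argmin(pbest_fit)]          # currently most successful particle
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (leader - x)
        x = x + v
    return pbest

# toy bi-objective problem (Schaffer): f1 = x^2, f2 = (x - 2)^2
front = pso_maximin([lambda x: (x ** 2).sum(axis=1),
                     lambda x: ((x - 2.0) ** 2).sum(axis=1)], dim=1)
print(front[np.argsort(front[:, 0])][:5].ravel())
```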
2009-07-05
proton; PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and Heavy ... The transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input ... dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA
Models of filter-based particle light absorption measurements
NASA Astrophysics Data System (ADS)
Hamasha, Khadeejeh M.
Light absorption by aerosols is very important in the visible, near-UV, and near-IR regions of the electromagnetic spectrum. Aerosol particles in the atmosphere have a great influence on the flux of solar energy, and also impact health in a negative sense when they are breathed into the lungs. Aerosol absorption measurements are usually performed by filter-based methods that are derived from the change in light transmission through a filter where particles have been deposited. These methods suffer from interference between light-absorbing and light-scattering aerosol components. The Aethalometer is the most commonly used filter-based instrument for aerosol light absorption measurement. This dissertation describes a new understanding of aerosol light absorption obtained by the filter method. The theory uses a multiple scattering model for the combination of filter and particle optics. The theory is evaluated using Aethalometer data from laboratory and ambient measurements in comparison with photoacoustic measurements of aerosol light absorption. Two models were developed to calculate aerosol light absorption coefficients from the Aethalometer data, and were compared to the in-situ aerosol light absorption coefficients. The first is an approximate model and the second is a "full" model. In the approximate model two extreme cases of aerosol optics were used to develop a model-based calibration scheme for the 7-wavelength Aethalometer. These cases include those of very strong scattering aerosols (ammonium sulfate sample) and very absorbing aerosols (kerosene soot sample). In the strong multiple scattering limit, light absorption is shown to scale with the square root of the total absorption optical depth rather than linearly with optical depth, as is commonly assumed with Beer's law. Two-stream radiative transfer theory was used to develop the full model to calculate the aerosol light absorption coefficients from the Aethalometer data. This comprehensive model allows for studying very general cases of particles of various sizes embedded in arbitrary filter media. Application of this model to the Reno Aerosol Optics Study (laboratory data) shows that the aerosol light absorption coefficients are about half of the Aethalometer attenuation coefficients, and there is a reasonable agreement between the model-calculated absorption coefficients at 521 nm and the measured photoacoustic absorption coefficients at 532 nm. For ambient data obtained during the Las Vegas study, the model absorption coefficients at 521 nm are larger than the photoacoustic coefficients at 532 nm. Use of the 2-stream model shows that particle penetration depth into the filter has a strong influence on the interpretation of filter-based aerosol light absorption measurements. This is a likely explanation for the difference found between model results for filter-based aerosol light absorption and those from photoacoustic measurements for ambient and laboratory aerosols.
NASA Astrophysics Data System (ADS)
Laptev, A. G.; Basharov, M. M.
2018-05-01
The problem of modeling turbulent transfer of finely dispersed particles in liquids has been considered. An approach is used in which the transport of particles is represented as a variant of the diffusion process with a coefficient of turbulent transfer to the wall. Differential equations of transfer are written for different cases, and a solution of the cell model is obtained for calculating the separation efficiency in a channel. Based on the theory of turbulent particle transfer and the boundary-layer model, an expression has been obtained for calculating the rate of turbulent deposition of finely dispersed particles. The application of this expression to determining the efficiency of physical coagulation of emulsions in different channels and on the surface of random packings is shown.
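A small sketch of how such a deposition-velocity expression is typically used: a plug-flow/cell-model-type efficiency of the form E = 1 - exp(-u_t A / Q), used here only as a generic stand-in for the paper's cell-model solution; all parameter values are hypothetical.

```python
import math

def separation_efficiency(u_t, area, flow_rate):
    """Cell-model-style estimate E = 1 - exp(-u_t * A / Q) (assumed form).

    u_t       : turbulent deposition velocity of the particles, m/s
    area      : deposition surface area of the channel, m^2
    flow_rate : volumetric flow rate through the channel, m^3/s
    """
    return 1.0 - math.exp(-u_t * area / flow_rate)

# Hypothetical channel: 2 m^2 of wall surface, 0.01 m^3/s throughput,
# turbulent deposition velocity of 1 mm/s.
print(f"E = {separation_efficiency(1.0e-3, 2.0, 0.01):.2%}")
```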
Representative volume element model of lithium-ion battery electrodes based on X-ray nano-tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashkooli, Ali Ghorbani; Amirfazli, Amir; Farhad, Siamak
2017-01-28
In this work, a new model that keeps all major advantages of the single-particle model of lithium-ion batteries (LIBs) and includes the three-dimensional structure of the electrode was developed. Unlike the single spherical particle, this model considers a small volume element of an electrode, called the Representative Volume Element (RVE), which represents the real electrode structure. The advantages of using the RVE as the model geometry were demonstrated for a typical LIB electrode consisting of nano-particle LiFePO4 (LFP) active material. The three-dimensional morphology of the LFP electrode was reconstructed using synchrotron X-ray nano-computed tomography at the Advanced Photon Source of Argonne National Laboratory. A 27 μm³ cube from the reconstructed structure was chosen as the RVE for the simulations. The model was employed to predict the voltage curve of a half-cell during galvanostatic operation and was validated with experimental data. The simulation results showed that the distribution of lithium inside the electrode microstructure is very different from the results obtained with the single-particle model. The range of lithium concentration is found to be much greater, successfully illustrating the effect of microstructure heterogeneity.
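For orientation, the core of the single-particle picture that the RVE model generalizes is Fickian solid-state diffusion in one spherical active-material particle under a constant surface flux. The finite-volume sketch below illustrates that building block only; radius, diffusivity and flux values are illustrative placeholders, not fitted LFP parameters.

```python
import numpy as np

R = 50e-9            # particle radius, m (illustrative)
D = 1e-16            # solid-phase diffusivity, m^2/s (illustrative)
j_surf = -1e-6       # surface molar flux, mol/(m^2 s); negative = insertion
c0 = 1000.0          # initial lithium concentration, mol/m^3

N = 50
r_edges = np.linspace(0.0, R, N + 1)            # spherical shell boundaries
r_cent = 0.5 * (r_edges[:-1] + r_edges[1:])
vol = 4.0 / 3.0 * np.pi * (r_edges[1:]**3 - r_edges[:-1]**3)
area = 4.0 * np.pi * r_edges[1:-1]**2           # interior face areas

c = np.full(N, c0)
dt = 0.2 * (R / N)**2 / D                       # stable explicit step

for _ in range(20000):
    # Fickian molar flow across interior shell faces (outward positive)
    flux = -D * (c[1:] - c[:-1]) / np.diff(r_cent) * area
    dc = np.zeros(N)
    dc[:-1] -= flux / vol[:-1]
    dc[1:] += flux / vol[1:]
    # imposed galvanostatic flux at the particle surface (lithiation)
    dc[-1] -= j_surf * 4.0 * np.pi * R**2 / vol[-1]
    c += dt * dc

print(f"centre c = {c[0]:.0f} mol/m^3, surface c = {c[-1]:.0f} mol/m^3")
```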
Wake-Driven Dynamics of Finite-Sized Buoyant Spheres in Turbulence
NASA Astrophysics Data System (ADS)
Mathai, Varghese; Prakash, Vivek N.; Brons, Jon; Sun, Chao; Lohse, Detlef
2015-09-01
Particles suspended in turbulent flows are affected by the turbulence and at the same time act back on the flow. The resulting coupling can give rise to rich variability in their dynamics. Here we report experimental results from an investigation of finite-sized buoyant spheres in turbulence. We find that even a marginal reduction in the particle's density from that of the fluid can result in strong modification of its dynamics. In contrast to classical spatial filtering arguments and predictions of particle models, we find that the particle acceleration variance increases with size. We trace this reversed trend back to the growing contribution from wake-induced forces, unaccounted for in current particle models in turbulence. Our findings highlight the need for improved multiphysics based models that account for particle wake effects for a faithful representation of buoyant-sphere dynamics in turbulence.
NASA Astrophysics Data System (ADS)
Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.
2015-11-01
Dense compressible particle-laden flow, which has a complex nature, exists in various engineering applications. A shock wave impacting a particle cloud is a canonical problem for investigating this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud by the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress that capture the energy contained in vortical structures, so that volume-averaged models with point particles can be simulated accurately. However, the previous work used the Euler equations, which makes the prediction of vorticity generation and propagation inaccurate. In this work, a fully resolved two-dimensional (2D) simulation using the compressible Navier-Stokes equations, with a volume penalization method to model the particles, has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model is created for the unclosed terms based upon these 2D results. The 1D model uses a two-phase simple low-dissipation AUSM scheme (TSLAU) coupled with the compressible two-phase kinetic energy equation.
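A minimal 1-D sketch of the Brinkman-type volume penalization idea mentioned above: inside a solid mask a stiff term drives the fluid velocity to the solid velocity. This is viscous Burgers flow, not the compressible Navier-Stokes solver of the study, and the grid, viscosity and penalization parameter are illustrative choices only.

```python
import numpy as np

nx, L = 400, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
nu = 1e-3                                        # kinematic viscosity
eta = 1e-4                                       # penalization parameter (small => stiff)
chi = ((x > 0.45) & (x < 0.55)).astype(float)    # solid "particle" mask
u_solid = 0.0

u = 1.0 + 0.1 * np.sin(2 * np.pi * x)            # initial velocity field (positive)
dt = 0.2 * min(dx, dx**2 / nu, eta)              # respect advection/diffusion/penalty limits

for _ in range(2000):
    dudx = (u - np.roll(u, 1)) / dx              # upwind derivative (u > 0)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    # penalization term -(chi/eta)*(u - u_solid) enforces the obstacle
    u = u + dt * (-u * dudx + nu * d2udx2 - chi / eta * (u - u_solid))

print("max velocity inside obstacle ~", u[chi > 0].max())
```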
Modeling and experiments of the adhesion force distribution between particles and a surface.
You, Siming; Wan, Man Pun
2014-06-17
Due to the roughness of real surfaces, the adhesion force between particles and the surface on which they are deposited exhibits a statistical distribution. Despite the importance of the adhesion force distribution in a variety of applications, the current understanding of how to model it is still limited. In this work, an adhesion force distribution model was proposed based on integrating the root-mean-square (RMS) roughness distribution (i.e., the variation of RMS roughness over the surface with location) into recently proposed mean adhesion force models. The integration was accomplished by statistical analysis and Monte Carlo simulation. A series of centrifuge experiments were conducted to measure the adhesion force distributions between polystyrene particles (146.1 ± 1.99 μm) and various substrates (stainless steel, aluminum and plastic). The proposed model was validated against the measured adhesion force distributions from this work and from a previous study. Based on the proposed model, the effect of the RMS roughness distribution on the adhesion force distribution of particles on a rough surface was explored, showing that both the median and the standard deviation of the adhesion force distribution can be affected by the RMS roughness distribution. The proposed model can predict both van der Waals and capillary force distributions and considers the multiscale roughness feature, greatly extending the current capability of adhesion force distribution prediction.
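A Monte Carlo sketch of the propagation idea described above: sample the local RMS roughness from a distribution and push each sample through a mean-adhesion-force expression to obtain an adhesion-force distribution. The roughness-damped van der Waals form and all parameter values below are generic stand-ins, not the authors' models.

```python
import numpy as np

rng = np.random.default_rng(0)

A_H = 1.0e-19        # Hamaker constant, J (assumed)
R = 73.0e-6          # particle radius (~146 um diameter), m
z0 = 4.0e-10         # minimum separation distance, m (assumed)

def adhesion_force(rms):
    """Generic roughness-damped van der Waals adhesion force (assumed form)."""
    return A_H * R / (6.0 * z0**2) / (1.0 + rms / z0) ** 2

# Location-to-location variation of RMS roughness on the substrate,
# modelled here as log-normal (an assumption).
rms_samples = rng.lognormal(mean=np.log(20e-9), sigma=0.5, size=100_000)
forces = adhesion_force(rms_samples)

print(f"median adhesion force: {np.median(forces) * 1e9:.2f} nN")
print(f"std of adhesion force: {np.std(forces) * 1e9:.2f} nN")
```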
Meso-Scale Modeling of Spall in a Heterogeneous Two-Phase Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, Harry Keo
2008-07-11
The influence of the heterogeneous second-phase particle structure and applied loading conditions on the ductile spall response of a model two-phase material was investigated. Quantitative metallography, three-dimensional (3D) meso-scale simulations (MSS), and small-scale spall experiments provided the foundation for this study. Nodular ductile iron (NDI) was selected as the model two-phase material because it contains a large and readily identifiable second-phase particle population. Second-phase particles serve as the primary void nucleation sites in NDI and are, therefore, central to its ductile spall response. A mathematical model was developed for the NDI second-phase volume fraction that accounted for the non-uniform particle size and spacing distributions within the framework of a length-scale-dependent Gaussian probability distribution function (PDF). This model was based on novel multiscale sampling measurements. A methodology was also developed for the computer generation of representative particle structures based on their mathematical description, enabling 3D MSS. MSS were used to investigate the effects of second-phase particle volume fraction and particle size, loading conditions, and the physical domain size of the simulation on the ductile spall response of a model two-phase material. MSS results reinforce existing model predictions, where the spall strength metric (SSM) decreases logarithmically with increasing particle volume fraction. While SSM predictions are nearly independent of applied load conditions at lower loading rates, which is consistent with previous studies, loading dependencies are observed at higher loading rates. There is also a logarithmic decrease in SSM with increasing (initial) void size. A model was developed to account for the effects of loading rate, particle size, matrix sound speed, and, in the NDI-specific case, the probabilistic particle volume fraction model. Small-scale spall experiments were designed and executed for the purpose of validating closely coupled 3D MSS. While the spall strength is nearly independent of specimen thickness, the fragment morphology varies widely. Detailed MSS demonstrate that the interactions between the tensile release waves are altered by specimen thickness and that these interactions are primarily responsible for fragment formation. MSS also provided insights on the regional amplification of damage, which enables the development of predictive void evolution models.
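A sketch of the structure-generation step described above: draw a local second-phase volume fraction from a length-scale-dependent Gaussian PDF, then fill a cubic domain with non-overlapping spherical particles by random sequential addition until that fraction is reached. All numbers are illustrative, not the NDI measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 100.0                        # domain edge length, um (illustrative)
mean_vf, std_vf = 0.11, 0.02     # assumed Gaussian PDF at this length scale
target_vf = float(np.clip(rng.normal(mean_vf, std_vf), 0.0, 0.3))

r_mean, r_std = 15.0, 3.0        # particle (nodule) radius distribution, um (assumed)
centers, radii = [], []
placed_volume = 0.0

while placed_volume / L**3 < target_vf:
    r = max(rng.normal(r_mean, r_std), 1.0)
    c = rng.uniform(r, L - r, size=3)
    # reject overlapping candidates (hard-sphere constraint)
    if all(np.linalg.norm(c - c2) > r + r2 for c2, r2 in zip(centers, radii)):
        centers.append(c)
        radii.append(r)
        placed_volume += 4.0 / 3.0 * np.pi * r**3

print(f"target volume fraction : {target_vf:.3f}")
print(f"placed {len(radii)} particles, achieved {placed_volume / L**3:.3f}")
```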
ERIC Educational Resources Information Center
Ehrman, Sheryl H.; Castellanos, Patricia; Dwivedi, Vivek; Diemer, R. Bertrum
2007-01-01
A particle technology design problem incorporating population balance modeling was developed and assigned to senior and first-year graduate students in a Particle Science and Technology course. The problem focused on particle collection, with a pipeline agglomerator, cyclone, and baghouse comprising the collection system. The problem was developed…
Airflow and Particle Transport Through Human Airways: A Systematic Review
NASA Astrophysics Data System (ADS)
Kharat, S. B.; Deoghare, A. B.; Pandey, K. M.
2017-08-01
This paper reviews the relevant literature on two-phase analysis of air and particle flow through human airways. Emphasis is placed on elaborating the steps involved in two-phase analysis: geometric modelling methods and mathematical models. The first part describes the various approaches followed for constructing an airway model upon which analyses are conducted. Two broad categories of geometric modelling, viz. simplified modelling and accurate modelling using medical scans, are discussed briefly, covering the ease and limitations of simplified models and examples of CT-based models. The later part of the review briefly describes the different mathematical models implemented by researchers. Mathematical models used for the air and particle phases are elaborated separately.
Gonzalez Torres, Maria Jose; Henniger, Jürgen
2018-01-01
In order to expand the Monte Carlo transport program AMOS to particle therapy applications, an ion module is being developed in the radiation physics group (ASP) at TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of the elastic scattering is based on the Binary Collision Approximation and that of the inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic model developed in the group. The model uses probability density functions to sample the occurrence of a nuclear reaction given the initial energy of the projectile particle, as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached, and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model are addressed in this presentation, together with results of the simulation of a proton beam colliding with tissue.
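A toy sketch of the sampling scheme described above, not the AMOS code: decide probabilistically whether a nuclear reaction occurs for a proton history, sample the energy at which it occurs from a PDF, and slow the proton down until that energy is reached. The reaction probability, the PDF shape and the stopping power are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

E0 = 150.0                       # initial proton energy, MeV (illustrative)

def toy_stopping_power(E):
    """Very rough Bethe-like 1/E trend, MeV/mm (illustrative only)."""
    return 0.3 + 40.0 / max(E, 1.0)

def transport_one_proton():
    reacts = rng.random() < 0.2                     # assumed total reaction probability
    # assumed PDF: reaction energy uniform between 10 MeV and E0
    E_react = rng.uniform(10.0, E0) if reacts else 0.0
    E, depth, step = E0, 0.0, 0.1                   # step length in mm
    while E > max(E_react, 1.0):                    # transport until reaction energy
        E -= toy_stopping_power(E) * step
        depth += step
    return reacts, E_react, depth

for i in range(3):
    reacted, E_r, d = transport_one_proton()
    tag = f"nuclear reaction at {E_r:5.1f} MeV" if reacted else "stopped (no reaction)"
    print(f"history {i}: depth = {d:6.1f} mm, {tag}")
```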
Evolution of Particle Size Distributions in Fragmentation Over Time
NASA Astrophysics Data System (ADS)
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship, to a final comminuted powder. Models for the fragmentation of particles have been developed separately in two main disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986), the latter based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws but with no time dependence. Bird et al. (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs in discrete steps: during each fragmentation event, the particles repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to fragment further. We have identified this process as equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or in discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model with the evolution of particle size distributions associated with episodic and continuous fragmentation, and how the evolution of some popular fractals may be represented using this approach. References: Charalambous, C. A., and Pike, W. T. (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., and Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth, 91(B2), 1921-1926.
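A discrete sketch of the repeated-fracture picture: at each fragmentation event every particle fails with some probability and splits into eight pieces of half its linear size, while survivors keep their size; the number of events plays the role of a discrete maturation index. The fracture probability and the splitting rule are illustrative assumptions, not the paper's closed-form model.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

p_frac = 0.7            # probability that a particle fractures in one event (assumed)
n_events = 6            # number of discrete maturation steps
population = Counter({0: 1})    # key = number of halvings, value = particle count

for _ in range(n_events):
    new_pop = Counter()
    for level, count in population.items():
        broken = rng.binomial(count, p_frac)
        new_pop[level] += count - broken        # survivors keep their size
        new_pop[level + 1] += 8 * broken        # 8 half-size children per fracture
    population = new_pop

for level in sorted(population):
    size = 1.0 / 2**level                       # size relative to the initial block
    print(f"relative size {size:7.4f}: {population[level]} particles")
```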
Investigation of Particle Deposition in Internal Cooling Cavities of a Nozzle Guide Vane
NASA Astrophysics Data System (ADS)
Casaday, Brian Patrick
Experimental and computational studies were conducted regarding particle deposition in the internal film cooling cavities of nozzle guide vanes. An experimental facility was fabricated to simulate particle deposition on an impingement liner and the upstream surface of a nozzle guide vane wall. The facility supplied particle-laden flow at temperatures up to 1000°F (540°C) to a simplified impingement cooling test section. The heated flow passed through a perforated impingement plate and impacted on a heated flat wall. The particle-laden impingement jets resulted in the buildup of deposit cones associated with individual impingement jets. The deposit growth rate increased with increasing temperature and decreasing impinging velocity. For some low flow rates or high flow temperatures, the deposit cone heights spanned the entire gap between the impingement plate and the wall, and grew through the impingement holes. For high flow rates, deposit structures were removed by shear forces from the flow. At low temperatures, deposit formed not only as individual cones but also as ridges located at the mid-planes between impinging jets. A computational model was developed to predict the deposit buildup seen in the experiments. The test section geometry and fluid flow from the experiment were replicated computationally and an Eulerian-Lagrangian particle tracking technique was employed. Several particle sticking models were employed and tested for adequacy. Sticking models that accurately predicted locations and rates in external deposition experiments failed to predict certain structures or rates seen in internal applications. A geometry adaptation technique was employed and its effect on deposition prediction was discussed. A new computational sticking model was developed that predicts deposition rates based on the local wall shear. The growth patterns were compared to experiments under different operating conditions. Of all the sticking models employed, the model based on wall shear, in conjunction with geometry adaptation, proved to be the most accurate in predicting the forms of deposit growth. It was the only model that predicted the changing deposition trends based on flow temperature or Reynolds number, and it is recommended for further investigation and application in the modeling of deposition in internal cooling cavities.
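A sketch of a shear-dependent sticking rule of the kind described above: when a tracked particle impacts the wall, it sticks with a probability that decays with the local wall shear stress. The exponential form and the critical shear value are assumptions, not the dissertation's correlation.

```python
import numpy as np

rng = np.random.default_rng(5)

tau_crit = 2.0    # Pa, hypothetical critical wall shear stress

def sticks(tau_wall):
    """Sample a stick/bounce outcome from an assumed shear-dependent probability."""
    p_stick = np.exp(-tau_wall / tau_crit)
    return rng.random() < p_stick

# Hypothetical impact events: (particle id, local wall shear stress in Pa)
impacts = [(0, 0.2), (1, 1.0), (2, 3.5), (3, 8.0)]
for pid, tau_w in impacts:
    outcome = "deposits" if sticks(tau_w) else "bounces / is sheared off"
    print(f"particle {pid}: tau_w = {tau_w:4.1f} Pa -> {outcome}")
```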
NASA Astrophysics Data System (ADS)
Stevens, R. G.; Lonsdale, C. L.; Brock, C. A.; Reed, M. K.; Crawford, J. H.; Holloway, J. S.; Ryerson, T. B.; Huey, L. G.; Nowak, J. B.; Pierce, J. R.
2012-04-01
New-particle formation in the plumes of coal-fired power plants and other anthropogenic sulphur sources may be an important source of particles in the atmosphere. It remains unclear, however, how best to reproduce this formation in global and regional aerosol models with grid-box lengths that are 10s of kilometres and larger. The predictive power of these models is thus limited by the resultant uncertainties in aerosol size distributions. In this presentation, we focus on sub-grid sulphate aerosol processes within coal-fired power plant plumes: the sub-grid oxidation of SO2 with condensation of H2SO4 onto newly-formed and pre-existing particles. Based on the results of the System for Atmospheric Modelling (SAM), a Large-Eddy Simulation/Cloud-Resolving Model (LES/CRM) with online TwO Moment Aerosol Sectional (TOMAS) microphysics, we develop a computationally efficient, but physically based, parameterization that predicts the characteristics of aerosol formed within coal-fired power plant plumes based on parameters commonly available in global and regional-scale models. Given large-scale mean meteorological parameters, emissions from the power plant, mean background condensation sink, and the desired distance from the source, the parameterization will predict the fraction of the emitted SO2 that is oxidized to H2SO4, the fraction of that H2SO4 that forms new particles instead of condensing onto preexisting particles, the median diameter of the newly-formed particles, and the number of newly-formed particles per kilogram SO2 emitted. We perform a sensitivity analysis of these characteristics of the aerosol size distribution to the meteorological parameters, the condensation sink, and the emissions. In general, new-particle formation and growth is greatly reduced during polluted conditions due to the large preexisting aerosol surface area for H2SO4 condensation and particle coagulation. The new-particle formation and growth rates are also a strong function of the amount of sunlight and NOx since both control OH concentrations. Decreases in NOx emissions without simultaneous decreases in SO2 emissions increase new-particle formation and growth due to increased oxidation of SO2. The parameterization we describe here should allow for more accurate predictions of aerosol size distributions and a greater confidence in the effects of aerosols in climate and health studies.
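An interface-only sketch of how a host model might call a sub-grid plume parameterization of the kind described above, with the four predicted aerosol characteristics bundled into one return value. The internal relations below are crude monotone placeholders, not the fitted parameterization derived from the SAM/TOMAS simulations.

```python
from dataclasses import dataclass

@dataclass
class PlumeAerosol:
    frac_so2_oxidized: float           # fraction of emitted SO2 oxidized to H2SO4
    frac_h2so4_to_new_particles: float # fraction of that H2SO4 forming new particles
    median_diameter_nm: float          # median diameter of the newly formed particles
    number_per_kg_so2: float           # newly formed particles per kg SO2 emitted

def plume_parameterization(wind_speed, oh_concentration, background_cond_sink,
                           distance_from_source, so2_emission_rate):
    """Placeholder relations showing plausible monotone trends only (assumptions);
    so2_emission_rate is accepted but unused by this toy version."""
    transit_time = distance_from_source / max(wind_speed, 0.1)      # plume age, s
    frac_ox = min(1.0, 1.0e-11 * oh_concentration * transit_time)   # more OH / age -> more oxidation
    frac_new = 1.0 / (1.0 + 200.0 * background_cond_sink)           # polluted air suppresses nucleation
    d_median = 5.0 + 40.0 * frac_ox                                 # nm; particles grow with oxidation
    n_per_kg = 1.0e16 * frac_ox * frac_new
    return PlumeAerosol(frac_ox, frac_new, d_median, n_per_kg)

print(plume_parameterization(wind_speed=5.0, oh_concentration=2.0e6,
                             background_cond_sink=0.005,
                             distance_from_source=50.0e3, so2_emission_rate=1.0))
```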
Contagion Shocks in One Dimension
NASA Astrophysics Data System (ADS)
Bertozzi, Andrea L.; Rosado, Jesus; Short, Martin B.; Wang, Li
2015-02-01
We consider an agent-based model of emotional contagion coupled with motion in one dimension that has recently been studied in the computer science community. The model involves movement with a speed proportional to a "fear" variable that undergoes a temporal consensus averaging based on distance to other agents. We study the effect of Riemann initial data for this problem, leading to shock dynamics that are studied both within the agent-based model as well as in a continuum limit. We examine the behavior of the model under distinguished limits as the characteristic contagion interaction distance and the interaction timescale both approach zero. The limiting behavior is related to a classical model for pressureless gas dynamics with "sticky" particles. In comparison, we observe a threshold for the interaction distance vs. interaction timescale that produce qualitatively different behavior for the system - in one case particle paths do not cross and there is a natural Eulerian limit involving nonlocal interactions and in the other case particle paths can cross and one may consider only a kinetic model in the continuum limit.
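A minimal sketch of the agent-based rules summarized above: agents move with speed equal to their "fear" variable, and fear relaxes toward the local average over neighbours within an interaction distance R on a timescale tau, starting from Riemann-type initial data. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = np.sort(rng.uniform(0.0, 10.0, n))     # agent positions in 1-D
q = np.where(x < 5.0, 1.0, 0.2)            # Riemann initial fear data

R, tau, dt = 0.3, 0.5, 0.01                # interaction distance, consensus timescale, step
for _ in range(500):
    # temporal consensus averaging of fear over nearby agents
    dist = np.abs(x[:, None] - x[None, :])
    mask = dist < R
    local_mean = (mask * q[None, :]).sum(axis=1) / mask.sum(axis=1)
    q += dt / tau * (local_mean - q)
    # movement with speed proportional to fear
    x += dt * q

print(f"fear range after run: {q.min():.2f} .. {q.max():.2f}")
```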
Mass transfer effect of the stalk contraction-relaxation cycle of Vorticella convallaria
NASA Astrophysics Data System (ADS)
Zhou, Jiazhong; Admiraal, David; Ryu, Sangjin
2014-11-01
Vorticella convallaria is a genus of protozoa living in freshwater. Its stalk contracts and coils, pulling the cell body towards the substrate at remarkable speed, and then relaxes to its extended state much more slowly than the contraction. However, the reason for Vorticella's stalk contraction is still unknown. It is presumed that the water flow induced by the stalk contraction-relaxation cycle may augment mass transfer near the substrate. We investigated this hypothesis using an experimental model with particle tracking velocimetry and a computational fluid dynamics model. In both approaches, Vorticella was modeled as a solid sphere translating perpendicular to a solid surface in water. After having been validated against the experimental model and verified by a grid convergence index test, the computational model simulated water flow during the cycle based on the measured time course of stalk length changes of Vorticella. Based on the simulated flow field, we calculated trajectories of particles near the model Vorticella, and then evaluated the mass transfer effect of Vorticella's stalk contraction based on the particles' motion. We acknowledge support from the Laymann Seed Grant of the University of Nebraska-Lincoln.
A DMA-train for precision measurement of sub-10 nm aerosol dynamics
NASA Astrophysics Data System (ADS)
Stolzenburg, Dominik; Steiner, Gerhard; Winkler, Paul M.
2017-05-01
Measurements of aerosol dynamics in the sub-10 nm size range are crucially important for quantifying the impact of new particle formation onto the global budget of cloud condensation nuclei. Here we present the development and characterization of a differential mobility analyzer train (DMA-train), operating six DMAs in parallel for high-time-resolution particle-size-distribution measurements below 10 nm. The DMAs are operated at six different but fixed voltages and hence sizes, together with six state-of-the-art condensation particle counters (CPCs). Two Airmodus A10 particle size magnifiers (PSM) are used for channels below 2.5 nm while sizes above 2.5 nm are detected by TSI 3776 butanol-based or TSI 3788 water-based CPCs. We report the transfer functions and characteristics of six identical Grimm S-DMAs as well as the calibration of a butanol-based TSI model 3776 CPC, a water-based TSI model 3788 CPC and an Airmodus A10 PSM. We find cutoff diameters similar to those reported in the literature. The performance of the DMA-train is tested with a rapidly changing aerosol of a tungsten oxide particle generator during warmup. Additionally we report a measurement of new particle formation taken during a nucleation event in the CLOUD chamber experiment at CERN. We find that the DMA-train is able to bridge the gap between currently well-established measurement techniques in the cluster-particle transition regime, providing high time resolution and accurate size information of neutral and charged particles even at atmospheric particle concentrations.
NASA Astrophysics Data System (ADS)
Liu, D.; Fu, X.; Liu, X.
2016-12-01
In nature, granular materials exist widely in water bodies. Understanding the fundamentals of solid-liquid two-phase flow, such as turbulent sediment-laden flow, is important for a wide range of applications. A coupling method combining computational fluid dynamics (CFD) and the discrete element method (DEM) is now widely used for modeling such flows. In this method, when particles are significantly larger than the CFD cells, the fluid field around each particle should be fully resolved. On the other hand, the "unresolved" model is designed for the situation where particles are significantly smaller than the mesh cells. Using the "unresolved" model, large numbers of particles can be simulated simultaneously. However, there is a gap between these two situations when the sizes of the DEM particles and the CFD cells are of the same order of magnitude. In this work, the most commonly used void fraction models are tested with numerical sedimentation experiments. The range of applicability of each model is presented. Based on this, a new void fraction model, i.e., a modified version of the "tri-linear" model, is proposed. Particular attention is paid to the smoothness of the void fraction function in order to avoid numerical instability. The results show good agreement with the experimental data and analytical solutions for both single-particle and group-particle motion, indicating the great potential of the new void fraction model.
NASA Astrophysics Data System (ADS)
Faroughi, S. A.; Huber, C.
2015-12-01
Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets, and of suspensions of spherical solid particles, in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle, to account for both return flow and Smoluchowski effects, and a correction on the mixture rheology to account for nonlocal interactions between particles. The model is then extended to mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particle sizes. Our model is tested against new and published experimental data over a wide range of particle volume fractions and viscosity ratios between the constituents of dispersions. We find excellent agreement between our model and experiments. We also show two significant applications of our model: (1) we demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers, and (2) we provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils. Our model offers greatly improved agreement with the results obtained with direct measurement methods such as laser diffraction.
NASA Astrophysics Data System (ADS)
Zhu, S.; Sartelet, K. N.; Seigneur, C.
2015-06-01
The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.
Mathematical analysis of frontal affinity chromatography in particle and membrane configurations.
Tejeda-Mansir, A; Montesinos, R M; Guzmán, R
2001-10-30
The scaleup and optimization of large-scale affinity-chromatographic operations in the recovery, separation and purification of biochemical components is of major industrial importance. The development of mathematical models to describe affinity-chromatographic processes, and the use of these models in computer programs to predict column performance is an engineering approach that can help to attain these bioprocess engineering tasks successfully. Most affinity-chromatographic separations are operated in the frontal mode, using fixed-bed columns. Purely diffusive and perfusion particles and membrane-based affinity chromatography are among the main commercially available technologies for these separations. For a particular application, a basic understanding of the main similarities and differences between particle and membrane frontal affinity chromatography and how these characteristics are reflected in the transport models is of fundamental relevance. This review presents the basic theoretical considerations used in the development of particle and membrane affinity chromatography models that can be applied in the design and operation of large-scale affinity separations in fixed-bed columns. A transport model for column affinity chromatography that considers column dispersion, particle internal convection, external film resistance, finite kinetic rate, plus macropore and micropore resistances is analyzed as a framework for exploring further the mathematical analysis. Such models provide a general realistic description of almost all practical systems. Specific mathematical models that take into account geometric considerations and transport effects have been developed for both particle and membrane affinity chromatography systems. Some of the most common simplified models, based on linear driving-force (LDF) and equilibrium assumptions, are emphasized. Analytical solutions of the corresponding simplified dimensionless affinity models are presented. Particular methods for estimating the parameters that characterize the mass-transfer and adsorption mechanisms in affinity systems are described.
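A sketch of one of the simplified approaches mentioned above: frontal (breakthrough) loading of a fixed bed with a linear driving-force (LDF) rate law and Langmuir binding, solved with a simple upwind discretization. Parameter values are illustrative and not tied to any specific affinity system.

```python
import numpy as np

L, nz = 0.1, 100                 # column length (m), axial cells
dz = L / nz
eps = 0.4                        # bed void fraction
u = 1.0e-4 / eps                 # interstitial velocity, m/s
qmax, Kd = 10.0, 0.5             # Langmuir capacity (mol/m^3 solid), dissociation const (mol/m^3)
k_ldf = 5.0e-3                   # lumped LDF mass-transfer coefficient, 1/s
c_feed = 1.0                     # feed concentration, mol/m^3

c = np.zeros(nz)                 # mobile-phase concentration
q = np.zeros(nz)                 # adsorbed concentration (per solid volume)
dt = 0.5 * dz / u

for step in range(5000):
    q_eq = qmax * c / (Kd + c)                      # Langmuir equilibrium loading
    dq = k_ldf * (q_eq - q)                         # LDF uptake rate
    c_up = np.concatenate(([c_feed], c[:-1]))       # upwind (inlet-side) values
    c += dt * (-u * (c - c_up) / dz - (1 - eps) / eps * dq)
    q += dt * dq
    c = np.clip(c, 0.0, None)

print(f"outlet/feed concentration after {step * dt / 60:.0f} min: {c[-1] / c_feed:.2f}")
```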
NASA Astrophysics Data System (ADS)
Letu, Husi; Ishimoto, Hiroshi; Riedi, Jerome; Nakajima, Takashi Y.; C.-Labonnote, Laurent; Baran, Anthony J.; Nagao, Takashi M.; Sekiguchi, Miho
2016-09-01
In this study, various ice particle habits are investigated in conjunction with inferring the optical properties of ice clouds for use in the Global Change Observation Mission-Climate (GCOM-C) satellite programme. We develop a database of the single-scattering properties of five ice habit models: plates, columns, droxtals, bullet rosettes, and Voronoi. The database is based on the specification of the Second Generation Global Imager (SGLI) sensor on board the GCOM-C satellite, which is scheduled to be launched in 2017 by the Japan Aerospace Exploration Agency. A combination of the finite-difference time-domain method, the geometric optics integral equation technique, and the geometric optics method is applied to compute the single-scattering properties of the selected ice particle habits at 36 wavelengths, from the visible to the infrared spectral regions. This covers the SGLI channels for the size parameter, defined here as the radius of a volume-equivalent sphere, ranging between 6 and 9000 µm. The database includes the extinction efficiency, absorption efficiency, average geometrical cross section, single-scattering albedo, asymmetry factor, size parameter of a volume-equivalent sphere, maximum distance from the centre of mass, particle volume, and six nonzero elements of the scattering phase matrix. The characteristics of the calculated extinction efficiency, single-scattering albedo, and asymmetry factor of the five ice particle habits are compared. Furthermore, size-integrated bulk scattering properties for the five ice particle habit models are calculated from the single-scattering database and microphysical data. Using the five ice particle habit models, the optical thickness and spherical albedo of ice clouds are retrieved from the Polarization and Directionality of the Earth's Reflectances-3 (POLDER-3) measurements, recorded on board the Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar (PARASOL) satellite. The optimal ice particle habit for retrieving the SGLI ice cloud properties is investigated by adopting the spherical albedo difference (SAD) method. It is found that the SAD is distributed stably as the scattering angle increases for bullet rosettes with an effective diameter (Deff) of 10 µm and for Voronoi particles with Deff values of 10, 60, and 100 µm. It is confirmed that the SAD of small bullet-rosette particles and of all sizes of Voronoi particles has a low angular dependence, indicating that a combination of the bullet-rosette and Voronoi models is sufficient for retrieval of the ice cloud's spherical albedo and optical thickness as effective habit models for the SGLI sensor. Finally, SAD analysis based on the Voronoi habit model with moderate particle size (Deff = 60 µm) is compared with the conventional general habit mixture model, the inhomogeneous hexagonal monocrystal model, the five-plate aggregate model, and the ensemble ice particle model. The Voronoi habit model is found to have an effect similar to that of some conventional models for the retrieval of ice cloud properties from space-borne radiometric observations.
Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite
NASA Astrophysics Data System (ADS)
Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim
2018-03-01
A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic considerations and uses a non-monotonic open-circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn and εp), the initial state of charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin and Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also describes adequately the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.
Permeability model of sintered porous media: analysis and experiments
NASA Astrophysics Data System (ADS)
Flórez Mera, Juan Pablo; Chiamulera, Maria E.; Mantelli, Marcia B. H.
2017-11-01
In this paper, the permeability of porous media fabricated by a copper powder sintering process was modeled and measured, aiming to use the porosity as an input parameter for predicting the permeability of sintered porous media. An expression relating the powder particle mean diameter to the permeability was obtained, based on an elementary porous media cell, which is physically represented by a duct formed by the arrangement of spherical particles in a simple or orthorhombic packing. A circular duct with variable cross section was used to model the fluid flow within the porous media, where the concept of the hydraulic diameter was applied. Thus, the pore is modeled as a converging-diverging duct. The electrical circuit analogy was employed to determine two hydraulic resistances of the cell: one based on the Navier-Stokes equation and one on Darcy's law. The hydraulic resistances are compared with each other and an expression for the permeability as a function of the average particle diameter is obtained. The atomized copper powder was sifted to reduce the size dispersion of the particles. The porosities and permeabilities of sintered media fabricated from powders with particle mean diameters ranging from 20 to 200 microns were measured by means of the image analysis method and an experimental apparatus. The permeability data of a porous medium made of copper powder and saturated with distilled water were used for comparison with the permeability model. Permeability models from the literature, which consider powder particles of equal diameter and include porosity data as an input parameter, were compared with the present model and experimental data. This comparison showed quite good agreement.
A 3-D model of tumor progression based on complex automata driven by particle dynamics.
Wcisło, Rafał; Dzwinel, Witold; Yuen, David A; Dudek, Arkadiusz Z
2009-12-01
The dynamics of a growing tumor involving mechanical remodeling of healthy tissue and vasculature is neglected in most existing tumor models. This is due to the lack of an efficient computational framework allowing for the simulation of mechanical interactions. Meanwhile, precisely these interactions trigger critical changes in tumor growth dynamics and are responsible for its volumetric and directional progression. We describe here a novel 3-D model of tumor growth, which combines particle dynamics with the cellular automata concept. The particles represent both tissue cells and fragments of the vascular network. They interact with their closest neighbors via semi-harmonic central forces simulating the mechanical resistance of the cell walls. The particle dynamics is governed by both the Newtonian laws of motion and the cellular automata rules. These rules can represent the cell life cycle and other biological interactions involving smaller spatio-temporal scales. We show that our complex-automata, particle-based model can reproduce realistic 3-D dynamics of the entire system consisting of the tumor, normal tissue cells, blood vessels and blood flow. It can explain phenomena such as the inward cell motion in avascular tumors, stabilization of tumor growth by external pressure, tumor vascularization due to the process of angiogenesis, trapping of healthy cells by the invading tumor, and the influence of external (boundary) conditions on the direction of tumor progression. We conclude that the particle model can serve as a general framework for designing advanced multiscale models of tumor dynamics and is very competitive with previously presented modeling approaches.
Campbell, Jerry; Franzen, Allison; Van Landingham, Cynthia; Lumpkin, Michael; Crowell, Susan; Meredith, Clive; Loccisano, Anne; Gentry, Robinan; Clewell, Harvey
2016-01-01
Benzo[a]pyrene (BaP) is a by-product of incomplete combustion of fossil fuels and plant/wood products, including tobacco. A physiologically based pharmacokinetic (PBPK) model for BaP in the rat was extended to simulate inhalation exposures to BaP in rats and humans, including particle deposition, dissolution of absorbed BaP, and renal elimination of 3-hydroxybenzo[a]pyrene (3-OH BaP) in humans. The clearance of particle-associated BaP from the lung, based on existing data in rats and dogs, suggests that the process is biphasic. An initial rapid clearance was represented by BaP released from particles, followed by a slower first-order clearance that follows particle kinetics. Parameter values for BaP-particle dissociation were estimated using inhalation data from isolated/ventilated/perfused rat lungs and optimized in the extended inhalation model using available rat data. Simulations of acute inhalation exposures in rats identified specific data needs, including systemic elimination of BaP metabolites, diffusion-limited transfer rates of BaP from lung tissue to blood, and the quantitative role of macrophage-mediated and ciliated clearance mechanisms. The updated BaP model provides very good prediction of the urinary 3-OH BaP concentrations and the relative difference between measured 3-OH BaP in nonsmokers versus smokers. This PBPK model for inhaled BaP is a preliminary tool for quantifying lung BaP dosimetry in rats and humans and was used to prioritize data needs that would provide significant model refinement and robust internal dosimetry capabilities. PMID:27569524
Modeling reactive transport with particle tracking and kernel estimators
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
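A 1-D sketch of the estimation point made above: with a limited number of particles, box counting gives a noisy concentration field, while a Gaussian kernel density estimate recovers a much smoother field closer to the well-mixed solution. The "true" field is a unit-mass Gaussian plume and the bandwidth is chosen by hand, both assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n_particles = 200
particles = rng.normal(0.0, 1.0, n_particles)      # particle positions
mass = 1.0 / n_particles                           # equal mass per particle

edges = np.linspace(-4.0, 4.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
c_true = np.exp(-0.5 * centers**2) / np.sqrt(2.0 * np.pi)

# box-counting (histogram) concentration estimate
counts, _ = np.histogram(particles, bins=edges)
c_box = counts * mass / np.diff(edges)

# Gaussian kernel density estimate of concentration, bandwidth h
h = 0.3
c_kde = mass / (h * np.sqrt(2.0 * np.pi)) * np.exp(
    -0.5 * ((centers[:, None] - particles[None, :]) / h) ** 2).sum(axis=1)

print(f"RMS error, box counting: {np.sqrt(np.mean((c_box - c_true)**2)):.4f}")
print(f"RMS error, Gaussian KDE: {np.sqrt(np.mean((c_kde - c_true)**2)):.4f}")
```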
Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
1999-01-01
Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.
Petsev, Nikolai Dimitrov; Leal, L. Gary; Shell, M. Scott
2017-12-21
Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely resolved (e.g., molecular dynamics) and coarse-grained (e.g., continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 84115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics (SDPD). An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Lewis, Mark
2010-01-01
A new method of treating electronic energy level transitions, as well as linking ionization to electronic energy levels, is proposed following the particle-based chemistry model of Bird. Although the use of electronic energy levels and ionization reactions in DSMC are not new ideas, the current method of selecting which level to transition to, how to reproduce transition rates, and the linking of the electronic energy levels to ionization are, to the authors' knowledge, novel concepts. The resulting equilibrium temperatures are shown to remain constant, and the electronic energy level distributions are shown to reproduce the Boltzmann distribution. The electronic energy level transition rates and ionization rates due to electron impacts are shown to reproduce theoretical and measured rates. The rates due to heavy-particle impacts, while not as favorable as the electron impact rates, compare favorably to values from the literature. Thus, these new extensions to the particle-based chemistry model of Bird provide an accurate method for predicting electronic energy level transition and ionization rates in gases.
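A sketch of the equilibrium check described above: apply a simple detailed-balance (Metropolis-type) transition rule between electronic levels of a toy three-level species and confirm that the long-run level populations reproduce the Boltzmann distribution. The level energies, degeneracies and the Metropolis rule are illustrative stand-ins, not the selection procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

E_over_k = np.array([0.0, 11600.0, 27000.0])   # level energies / k_B, in kelvin (toy values)
g = np.array([1.0, 3.0, 5.0])                  # level degeneracies (toy values)
T = 10000.0                                    # equilibrium temperature, K

def boltzmann_weight(i):
    return g[i] * np.exp(-E_over_k[i] / T)

level = 0
counts = np.zeros(len(g))
for _ in range(200_000):
    proposal = rng.integers(len(g))
    # accept with detailed-balance (Metropolis) probability
    if rng.random() < min(1.0, boltzmann_weight(proposal) / boltzmann_weight(level)):
        level = proposal
    counts[level] += 1

p_sampled = counts / counts.sum()
p_exact = np.array([boltzmann_weight(i) for i in range(len(g))])
p_exact /= p_exact.sum()
for i in range(len(g)):
    print(f"level {i}: sampled {p_sampled[i]:.4f}, Boltzmann {p_exact[i]:.4f}")
```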
Engineering and evaluating drug delivery particles in microfluidic devices.
Björnmalm, Mattias; Yan, Yan; Caruso, Frank
2014-09-28
The development of new and improved particle-based drug delivery systems is underpinned by an enhanced ability to engineer particles with high fidelity and integrity, as well as increased knowledge of their biological performance. Microfluidics can facilitate these processes through the engineering of spatiotemporally highly controlled environments using designed microstructures in combination with physical phenomena present at the microscale. In this review, we discuss microfluidics in the context of addressing key challenges in particle-based drug delivery. We provide an overview of how microfluidic devices can: (i) be employed to engineer particles, by providing highly controlled interfaces, and (ii) be used to establish dynamic in vitro models that mimic in vivo environments for studying the biological behavior of engineered particles. Finally, we discuss how the flexible and modular nature of microfluidic devices provides opportunities to create increasingly realistic models of the in vivo milieu (including multi-cell, multi-tissue and even multi-organ devices), and how ongoing developments toward commercialization of microfluidic tools are opening up new opportunities for the engineering and evaluation of drug delivery particles.
Development of Mouse Lung Deposition Models
2015-07-01
information on deposition of ultrafine particles in the URT of mice either by measurements or theoretical modeling. Comparison of the nasal structure of ... ultrafine particles in rats to be extended to mice. Based on measurements in the nasal casts of rats, Cheng et al. [12] obtained the following ... expression for losses of ultrafine particles in the nasal passages of rats by Brownian diffusion during inhalation and exhalation: η = 1 − exp(−α D^β Q^γ), where D is the particle diffusion coefficient and Q is the flow rate.
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of T-matrix. Sh-matrix method keeps all advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solvable analytically for particles of any shape. This makes Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of a random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulted from light scattering by such particles, studying how they depend on the particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
3-D model-based tracking for UAV indoor localization.
Teulière, Céline; Marchand, Eric; Eck, Laurent
2015-05-01
This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
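A reduced illustration of the filtering step described above, using a scalar state instead of a full 6-DOF camera pose: predict with motion noise, inject a few externally supplied candidate states (playing the role of the multiple-hypothesis edge-tracker poses), weight by an observation likelihood, and resample. All noise levels and the injection fraction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

n_particles = 500
particles = rng.normal(0.0, 1.0, n_particles)       # initial state hypotheses
true_state = 2.0

for step in range(20):
    true_state += 0.1                                # unknown motion of the camera
    particles += 0.1 + rng.normal(0.0, 0.05, n_particles)   # prediction with motion noise

    # inject candidate states from an external hypothesis generator (assumed)
    candidates = true_state + rng.normal(0.0, 0.2, 25)
    particles[:25] = candidates

    # weight particles by likelihood of a noisy observation of the state
    z = true_state + rng.normal(0.0, 0.1)
    w = np.exp(-0.5 * ((particles - z) / 0.1) ** 2)
    w /= w.sum()

    # multinomial resampling
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]

print(f"true state {true_state:.2f}, filter estimate {particles.mean():.2f}")
```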
NASA Astrophysics Data System (ADS)
Ngueleu, Stéphane K.; Grathwohl, Peter; Cirpka, Olaf A.
2013-06-01
Colloidal particles can act as carriers for adsorbing pollutants, such as hydrophobic organic pollutants, and enhance their mobility in the subsurface. In this study, we investigate the influence of colloidal particles on the transport of pesticides through saturated porous media by column experiments. We also investigate the effect of particle size on this transport. The model pesticide is lindane (gamma-hexachlorocyclohexane), a representative hydrophobic insecticide which was banned in 2009 but is still used in many developing countries. The breakthrough curves are analyzed with the help of numerical modeling, in which we examine the minimum model complexity needed to simulate such transport. The transport of lindane without particles can be described by advective-dispersive transport coupled to linear three-site sorption, one site being in local equilibrium and the others undergoing first-order kinetic sorption. In the presence of mobile particles, the total concentration of mobile lindane is increased; that is, lindane is transported not only in aqueous solution but also sorbed onto the smallest, mobile particles. The models developed to simulate separate and associated transport of lindane and the particles reproduced the measurements very well and showed that the adsorption/desorption of lindane to the particles could be expressed by a common first-order rate law, regardless of whether the particles are mobile, attached, or strained.
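A batch-system sketch of the three-site sorption concept mentioned above: one linear site in local equilibrium plus two kinetically limited linear sites, solved by updating the aqueous concentration from the mass balance at each step. This is a generic stand-in with illustrative parameters, not the fitted column model from the study.

```python
V = 1.0e-3            # water volume, m^3 (1 L)
m = 0.1               # sorbent mass, kg (hypothetical)
Kd1, Kd2, Kd3 = 0.5e-3, 1.0e-3, 2.0e-3      # linear distribution coefficients, m^3/kg
k2, k3 = 1.0e-3, 1.0e-4                     # first-order sorption rates, 1/s

C0 = 10.0             # initial aqueous concentration, mg/m^3 (hypothetical)
M_total = V * C0      # total lindane mass in the system, mg
S2 = S3 = 0.0         # kinetic-site loadings, mg/kg

dt, t_end = 10.0, 3600.0 * 24
for _ in range(int(t_end / dt)):
    # equilibrium site adjusts instantaneously: solve the mass balance for C
    C = (M_total - m * (S2 + S3)) / (V + m * Kd1)
    S2 += dt * k2 * (Kd2 * C - S2)
    S3 += dt * k3 * (Kd3 * C - S3)

C = (M_total - m * (S2 + S3)) / (V + m * Kd1)
print(f"aqueous fraction remaining after 1 day: {V * C / M_total:.2f}")
```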
Samadi, Sara; Vaziri, Behrooz Mahmoodzadeh
2017-07-14
Solid extraction using supercritical fluids is a modern technology that has come into vogue because of its considerable advantages. In the present article, a new and comprehensive model is presented for predicting the performance and separation yield of the supercritical extraction process. The model is based on partial differential mass balances. In the proposed model, the solid particles are treated as two classes: (a) particles with intact structure and (b) particles with destructed structure. A distinct mass transfer coefficient is used for extraction from each class of solid particles to express the different extraction regimes and to evaluate the process accurately (an internal mass transfer coefficient for the intact-structure particles and an external mass transfer coefficient for the destructed-structure particles). To evaluate and validate the proposed model, the simulation results were compared with two series of available experimental data for extraction of chamomile extract with supercritical carbon dioxide and showed excellent agreement, indicating the strong predictive capability of the model. The effects of the major parameters of the supercritical extraction process, such as pressure, temperature, supercritical fluid flow rate, and solid particle size, were then evaluated. The model can serve as a strong starting point for scientific and experimental applications. Copyright © 2017 Elsevier B.V. All rights reserved.
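The sketch below is not the article's partial-differential model but a minimal lumped illustration of its central idea: solute in destructed-structure particles is extracted at a fast (externally limited) rate and solute in intact-structure particles at a slow (internally limited) rate. All parameter values are assumptions.

```python
import numpy as np

# Assumed illustrative parameters
f_destructed = 0.4     # fraction of solute held in destructed-structure particles
k_ext = 5e-3           # fast, external-film-limited extraction rate [1/s]
k_int = 2e-4           # slow, intraparticle-diffusion-limited rate [1/s]

def extraction_yield(t):
    """Cumulative yield, normalized to the initial extractable solute."""
    remaining = (f_destructed * np.exp(-k_ext * t)
                 + (1 - f_destructed) * np.exp(-k_int * t))
    return 1.0 - remaining

for t in (600, 1800, 3600):
    print(f"t = {t:5d} s   yield = {extraction_yield(t):.2f}")
```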
Kinetic model for the mechanical response of suspensions of sponge-like particles.
Hütter, Markus; Faber, Timo J; Wyss, Hans M
2012-01-01
A dynamic two-scale model is developed that describes the stationary and transient mechanical behavior of concentrated suspensions made of highly porous particles. In particular, we are interested in particles that not only deform elastically, but also can swell or shrink by taking up or expelling the viscous solvent from their interior, leading to rate-dependent deformability of the particles. The fine level of the model describes the evolution of particle centers and their current sizes, while the shapes are at present not taken into account. The versatility of the model permits inclusion of density- and temperature-dependent particle interactions, and hydrodynamic interactions, as well as the incorporation of insight into the mechanisms of swelling and shrinking. The coarse level of the model is given in terms of macroscopic hydrodynamics. The two levels are mutually coupled, since the flow changes the particle configuration, while in turn the configuration gives rise to stress contributions that eventually determine the macroscopic mechanical properties of the suspension. Using a thermodynamic procedure for the model development, it is demonstrated that the driving forces for position change and for size change are derived from the same potential energy. The model is translated into a form that is suitable for particle-based Brownian dynamics simulations for performing rheological tests. Various possibilities for connection with experiments, e.g. rheological and structural, are discussed.
2010-09-01
estimation of total exposure at any toxicological endpoint in the body. This effort is a significant contribution as it highlights future research needs...rigorous modeling of the nanoparticle transport by including physico-chemical properties of engineered particles. Similarly, toxicological dose-response...exposure risks as compared to larger sized particles of the same material. Although the toxicology of a base material may be thoroughly defined, the
The research of breaking rock with liquid-solid two-phase jet flow
NASA Astrophysics Data System (ADS)
Cheng, X. Z.; Ren, F. S.; Fang, T. C.
2018-03-01
Particle impact drilling is an efficient way of breaking rock, used mainly in deep and ultra-deep drilling. Based on Hertz contact theory and Newton's second law, a differential equation for a particle impacting rock was established, and from the analysis of the impact the penetration depth of the particles into the rock was obtained. A mathematical model was also established for the effect of water impact on crack growth. The results show that when the water jet speed exceeds 40 m/s, the rock stability coefficient exceeds 1.0 and rock fracture appears. Experiments on a particle impact drilling facility were carried out; analysis of the cuttings and of the crack sizes, examined by scanning electron microscopy, was consistent with the theoretical calculations, verifying the validity of the model.
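As a worked illustration of the Hertz-plus-Newton energy balance described above (not the paper's exact equations), the maximum elastic indentation follows from equating the particle's kinetic energy to the work done against the Hertzian contact force; the material values below are assumptions.

```python
import numpy as np

def hertz_max_indentation(m, v, R, E1, nu1, E2, nu2):
    """Maximum indentation from (1/2) m v^2 = (8/15) E* sqrt(R) delta^(5/2),
    with the Hertz contact force F = (4/3) E* sqrt(R) delta^(3/2)."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    return (15.0 * m * v**2 / (16.0 * E_star * np.sqrt(R))) ** 0.4

# Assumed values: 1 mm steel particle at 150 m/s striking rock
rho_p, R, v = 7800.0, 0.5e-3, 150.0
m = rho_p * 4.0 / 3.0 * np.pi * R**3
delta = hertz_max_indentation(m, v, R, E1=210e9, nu1=0.3, E2=40e9, nu2=0.25)
print(f"maximum elastic indentation: {delta*1e6:.1f} micrometres")
```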
Li, Pan; Asokanathan, Catpagavalli; Liu, Fang; Khaing, Kyi Kyi; Kmiec, Dorota; Wei, Xiaoqing; Song, Bing; Xing, Dorothy; Kong, Deling
2016-11-20
Poly(lactic-co-glycolic acid) (PLGA) based nano/micro particles were investigated as a potential vaccine platform for pertussis antigen. Presentation of pertussis toxoid as nano/micro particles (NP/MP) gave similar antigen-specific IgG responses in mice compared to soluble antigen. Notably, in cell line based assays, it was found that PLGA based nano/micro particles enhanced the phagocytosis of fluorescent antigen-nano/micro particles by J774.2 murine monocyte/macrophage cells compared to soluble antigen. More importantly, mice immunised with the antigen-nano/micro particles showed significantly increased antigen-specific secretion of IFN-γ and IL-17 in splenocytes after in vitro re-stimulation with heat-killed Bordetella pertussis, indicating the induction of a Th1/Th17 response. Also, presentation of pertussis antigen in a NP/MP formulation is able to provide protection against respiratory infection in a murine model. Thus, the NP/MP formulation may provide an alternative to conventional acellular vaccines to achieve a more balanced Th1/Th2 immune response. Copyright © 2016 Elsevier B.V. All rights reserved.
Single-particle dynamics of the Anderson model: a local moment approach
NASA Astrophysics Data System (ADS)
Glossop, Matthew T.; Logan, David E.
2002-07-01
A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.
Actin-based propulsion of a microswimmer.
Leshansky, A M
2006-07-01
A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.
Electrokinetic Aggregation of Colloidal Particles on Electrodes
NASA Astrophysics Data System (ADS)
Anderson, John L.; Solomentsev, Yuri E.; Guelcher, Scott A.
1999-11-01
Colloidal particles deposited on an electrode have been observed to attract each other and form clusters in the presence of an applied electric field. This aggregation is important to the formation of dense monolayer films during electrophoretic deposition processes. Under dc fields two particles attract each other over a length scale comparable to the particle size, and the velocity of approach between two particles is proportional to the applied electric field and the particles' zeta potential. We have developed a theory for particle aggregation based on electroosmotic flow about each deposited particle. Experimental results for the relative motion of two particles are in good quantitative agreement with the theory. Our recent experiments with ac fields also show attraction between particles that is roughly proportional to the rms electric field but inversely proportional to the frequency. We discuss here a model based on electrokinetic processes that can account for some of the observations in ac fields.
Oakes, Jessica M; Marsden, Alison L; Grandmont, Celine; Shadden, Shawn C; Darquenne, Chantal; Vignon-Clementel, Irene E
2014-04-01
Image-based in silico modeling tools provide detailed velocity and particle deposition data. However, care must be taken when prescribing boundary conditions to model lung physiology in health or disease, such as in emphysema. In this study, the respiratory resistance and compliance were obtained by solving an inverse problem for a 0D global model based on healthy and emphysematous rat experimental data. Multi-scale CFD simulations were performed by solving the 3D Navier-Stokes equations in an MRI-derived rat geometry coupled to a 0D model. Particles with 0.95 μm diameter were tracked and their distribution in the lung was assessed. Seven 3D-0D simulations were performed: healthy, homogeneous, and five heterogeneous emphysema cases. Compliance (C) was significantly higher (p = 0.04) in the emphysematous rats (C = 0.37 ± 0.14 cm(3)/cmH2O) compared to the healthy rats (C = 0.25 ± 0.04 cm(3)/cmH2O), while the resistance remained unchanged (p = 0.83). There were increases in airflow, particle deposition in the 3D model, and particle delivery to the diseased regions for the heterogeneous cases compared to the homogeneous cases. The results highlight the importance of multi-scale numerical simulations to study airflow and particle distribution in healthy and diseased lungs. The effects of particle size and gravity were also studied. Once available, these in silico predictions may be compared to experimental deposition data.
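A minimal sketch of the kind of inverse problem used for the 0D global model: fitting a single-compartment resistance-compliance relation P = R·Q + V/C to pressure-flow-volume data by linear least squares. The synthetic breath data and parameter values are assumptions, not the rat measurements of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ventilation data (assumed, for illustration only)
t = np.linspace(0, 1.0, 200)                 # one breath [s]
Q = 2.0 * np.sin(2 * np.pi * t)              # flow [cm^3/s]
V = np.cumsum(Q) * (t[1] - t[0])             # volume [cm^3]
R_true, C_true = 0.2, 0.3                    # cmH2O.s/cm^3, cm^3/cmH2O
P = R_true * Q + V / C_true + rng.normal(0, 0.01, t.size)   # airway pressure

# Least-squares fit of P = R*Q + (1/C)*V
A = np.column_stack([Q, V])
(R_fit, invC_fit), *_ = np.linalg.lstsq(A, P, rcond=None)
print(f"R = {R_fit:.3f} cmH2O.s/cm^3,  C = {1/invC_fit:.3f} cm^3/cmH2O")
```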
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reactions and of particle size changes due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
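A minimal sketch of an implicit update for Stokes-drag particle trajectory integration of the general kind referred to above; it is not the FDNS-3DEL scheme itself, and the parameter values are assumptions.

```python
import numpy as np

def advance_particle(x, v, u_gas, tau_p, dt):
    """One implicit Euler step of dv/dt = (u_gas - v)/tau_p, then advect position.
    The implicit form remains stable even when dt >> tau_p."""
    v_new = (v + dt / tau_p * u_gas) / (1.0 + dt / tau_p)
    x_new = x + dt * v_new
    return x_new, v_new

# Assumed demo: particle released at rest in a uniform 10 m/s gas stream
x, v = np.zeros(3), np.zeros(3)
u_gas = np.array([10.0, 0.0, 0.0])
tau_p, dt = 1e-3, 5e-3            # particle response time and (deliberately large) time step [s]
for _ in range(100):
    x, v = advance_particle(x, v, u_gas, tau_p, dt)
print("particle velocity after 0.5 s:", v)
```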
SPH modelling of depth-limited turbulent open channel flows over rough boundaries.
Kazemi, Ehsan; Nichols, Andrew; Tait, Simon; Shao, Songdong
2017-01-10
A numerical model based on the smoothed particle hydrodynamics method is developed to simulate depth-limited turbulent open channel flows over hydraulically rough beds. The 2D Lagrangian form of the Navier-Stokes equations is solved, in which a drag-based formulation is used based on an effective roughness zone near the bed to account for the roughness effect of bed spheres and an improved sub-particle-scale model is applied to account for the effect of turbulence. The sub-particle-scale model is constructed based on the mixing-length assumption rather than the standard Smagorinsky approach to compute the eddy-viscosity. A robust in/out-flow boundary technique is also proposed to achieve stable uniform flow conditions at the inlet and outlet boundaries where the flow characteristics are unknown. The model is applied to simulate uniform open channel flows over a rough bed composed of regular spheres and validated by experimental velocity data. To investigate the influence of the bed roughness on different flow conditions, data from 12 experimental tests with different bed slopes and uniform water depths are simulated, and a good agreement has been observed between the model and experimental results of the streamwise velocity and turbulent shear stress. This shows that both the roughness effect and flow turbulence should be addressed in order to simulate the correct mechanisms of turbulent flow over a rough bed boundary and that the presented smoothed particle hydrodynamics model accomplishes this successfully. © 2016 The Authors International Journal for Numerical Methods in Fluids Published by John Wiley & Sons Ltd.
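A minimal sketch of a mixing-length eddy-viscosity evaluation of the type the sub-particle-scale model is built on (as opposed to a Smagorinsky closure); the log-law velocity profile and constants are assumptions used purely for illustration.

```python
import numpy as np

kappa, z0, u_star = 0.41, 1e-3, 0.05   # von Karman constant, roughness length, friction velocity (assumed)
z = np.linspace(2e-3, 0.1, 50)         # heights above the effective roughness zone [m]

u = (u_star / kappa) * np.log(z / z0)  # assumed log-law mean velocity profile
dudz = np.gradient(u, z)

l_m = kappa * z                        # mixing length
nu_t = l_m**2 * np.abs(dudz)           # eddy viscosity: nu_t = l_m^2 |du/dz|
tau_turb = 1000.0 * nu_t * dudz        # turbulent shear stress, rho = 1000 kg/m^3

print("eddy viscosity near the bed [m^2/s]:", nu_t[:3])
```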
Coarse-graining to the meso and continuum scales with molecular-dynamics-like models
NASA Astrophysics Data System (ADS)
Plimpton, Steve
Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.
NASA Astrophysics Data System (ADS)
Khadilkar, Aditi B.
The utility of fluidized bed reactors for combustion and gasification can be enhanced if operational issues such as agglomeration are mitigated. The monetary and efficiency losses could be avoided through a mechanistic understanding of the agglomeration process and prediction of operational conditions that promote agglomeration. Pilot-scale experimentation prior to operation for each specific condition can be cumbersome and expensive. So the development of a mathematical model would aid predictions. With this motivation, the study comprised the following model development stages: 1) development of an agglomeration modeling methodology based on binary particle collisions, 2) study of heterogeneities in ash chemical composition and gaseous atmosphere, 3) computation of a distribution of particle collision frequencies based on granular physics for a poly-disperse particle size distribution, 4) combining the ash chemistry and granular physics inputs to obtain agglomerate growth probabilities and 5) validation of the modeling methodology. The modeling methodology consisted of testing every binary particle collision in the system for sticking, based on the extent of dissipation of the particles' kinetic energy through viscous dissipation by slag-liquid (molten ash) covering the particles. In the modeling methodology developed in this study, thermodynamic equilibrium calculations are used to estimate the amount of slag-liquid in the system, and the changes in particle collision frequencies are accounted for by continuously tracking the number density of the various particle sizes. In this study, the heterogeneities in chemical composition of fuel ash were studied by separating the bulk fuel into particle classes that are rich in specific minerals. FactSage simulations were performed on two bituminous coals and an anthracite to understand the effect of particle-level heterogeneities on agglomeration. The mineral matter behavior of these constituent classes was studied. Each particle class undergoes distinct transformations of mineral matter at fluidized bed operating temperatures, as determined by using high temperature X-ray diffraction, thermo-mechanical analysis and scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM-EDX). For the incorporation of a particle size distribution, bottom ash from an operating plant was divided into four size intervals and the system granular temperatures and dynamic bed height were computed using MFIX, a CFD simulation software package. The kinetic theory of granular flow was used to obtain a distribution of binary collision frequencies for the entire particle size distribution. With this distribution of collision frequencies, which is computed based on hydrodynamics and granular physics of the poly-disperse system, as the particles grow, defluidize and decrease in number, the collision frequency also decreases. Under the conditions studied, the growth rate in the latter half of the run decreased to almost 1/5th the initial rate, with this decrease in collision frequency. This interdependent effect of chemistry and physics-based parameters, at the particle level, was used to predict the agglomerate growth probabilities of Pittsburgh No. 8, Illinois No. 6 and Skidmore anthracite coals in this study, to illustrate the utility of the modeling methodology. The study also showed that agglomerate growth probability significantly increased above 15 to 20 wt. % slag. It was limited by ash chemistry at levels below this amount.
Ash agglomerates were generated in a laboratory-scale fluidized bed combustor at Penn State to support the proposed agglomerate growth mechanism. This study also attempted to gain a mechanistic understanding of agglomerate growth with particle-level initiation occurring at the relatively low operating temperatures of about 950 °C, found in some fluidized beds. The results of this study indicated that, for the materials examined, agglomerate growth in fluidized bed combustors and gasifiers is initiated at the particle-level by low-melting components rich in iron- and calcium-based minerals. Although the bulk ash chemical composition does not indicate potential for agglomeration, study of particle-level heterogeneities revealed that agglomeration can begin at lower temperatures than the fluidized bed operating temperatures of 850 °C. After initiation at the particle-level, more slag is observed to form from alumino-silicate components at about 50 to 100 °C higher temperatures caused by changes in the system, and agglomerate growth propagates in the bed. A post-mortem study of ash agglomerates using SEM-EDX helped to identify stages of agglomerate growth. Additionally, the modeling methodology developed was used to simulate agglomerate growth in a laboratory-scale fluidized bed combustor firing palm shells (biomass), reported in the literature. A comparison of the defluidization time obtained by simulations to the experimental values reported in the case-study was made for the different operating conditions studied. This indicated that although the simulation results were comparable to those reported in the case study, modifications such as inclusion of heat transfer calculations to determine particle temperature resulting from carbon conversion would improve the predictive capabilities. (Abstract shortened by ProQuest.).
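To make the viscous-dissipation sticking test described in this abstract concrete, the sketch below uses an Ennis-type viscous Stokes number criterion for a slag-coated binary collision. This is an illustrative stand-in rather than the thesis' own formulation; the prefactor convention and all parameter values are assumptions.

```python
import numpy as np

def sticks(d_p, rho_p, v_rel, mu_slag, h0, ha, e_restitution=0.9):
    """Ennis-type criterion: a slag-coated collision dissipates enough kinetic
    energy to stick when the viscous Stokes number falls below a critical value.
    Prefactor convention assumed here: St_v = m* v_rel / (6 pi mu a^2), m* = m/2."""
    a = d_p / 2.0
    m = rho_p * np.pi / 6.0 * d_p**3
    St_v = (m / 2.0) * v_rel / (6.0 * np.pi * mu_slag * a**2)
    St_crit = (1.0 + 1.0 / e_restitution) * np.log(h0 / ha)
    return St_v < St_crit, St_v, St_crit

# Assumed values: 500 micron ash particles, 10 Pa.s slag layer, 0.5 m/s impact
stick, St_v, St_c = sticks(d_p=500e-6, rho_p=2500.0, v_rel=0.5,
                           mu_slag=10.0, h0=10e-6, ha=0.1e-6)
print(f"St_v = {St_v:.3f}, St_crit = {St_c:.2f}, collision sticks: {stick}")
```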
Optimizing micromixer design for enhancing dielectrophoretic microconcentrator performance.
Lee, Hsu-Yi; Voldman, Joel
2007-03-01
We present an investigation into optimizing micromixer design for enhancing dielectrophoretic (DEP) microconcentrator performance. DEP-based microconcentrators use the dielectrophoretic force to collect particles on electrodes. Because the DEP force generated by electrodes decays rapidly away from the electrodes, DEP-based microconcentrators are only effective at capturing particles from a limited cross section of the input liquid stream. Adding a mixer can circulate the input liquid, increasing the probability that particles will drift near the electrodes for capture. Because mixers for DEP-based microconcentrators aim to circulate particles, rather than mix two species, design specifications for such mixers may be significantly different from those for conventional mixers. Here we investigated the performance of patterned-groove micromixers on particle trapping efficiency in DEP-based microconcentrators numerically and experimentally. We used modeling software to simulate the particle motion due to various forces on the particle (DEP, hydrodynamic, etc.), allowing us to predict trapping efficiency. We also conducted trapping experiments and measured the capture efficiency of different micromixer configurations, including the slanted groove, staggered herringbone, and herringbone mixers. Finally, we used these analyses to illustrate the design principles of mixers for DEP-based concentrators.
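A minimal sketch of the time-averaged dielectrophoretic force on a spherical particle, which is the force that trapping simulations of this kind balance against hydrodynamic drag. This is the textbook point-dipole expression, given here as an assumption rather than the specific force model used by the authors.

```python
import numpy as np

def dep_force(r, eps_m, eps_p, sigma_m, sigma_p, omega, grad_E2):
    """Time-averaged DEP force on a sphere of radius r:
    F = 2 pi eps_m r^3 Re[K(omega)] grad(|E_rms|^2),
    with the Clausius-Mossotti factor K = (eps_p* - eps_m*)/(eps_p* + 2 eps_m*)."""
    eps0 = 8.854e-12
    eps_p_c = eps_p * eps0 - 1j * sigma_p / omega   # complex permittivities
    eps_m_c = eps_m * eps0 - 1j * sigma_m / omega
    K = (eps_p_c - eps_m_c) / (eps_p_c + 2.0 * eps_m_c)
    return 2.0 * np.pi * eps_m * eps0 * r**3 * K.real * grad_E2

# Assumed values: 5 micron bead in aqueous medium, 1 MHz field
F = dep_force(r=2.5e-6, eps_m=78.0, eps_p=2.5, sigma_m=1e-3, sigma_p=1e-4,
              omega=2 * np.pi * 1e6, grad_E2=1e13)   # grad|E|^2 in V^2/m^3
print(f"DEP force = {F:.2e} N (negative sign: repelled from high-field regions)")
```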
Compressibility Effects on Particle-Fluid Interaction Force for Eulerian-Eulerian Simulations
NASA Astrophysics Data System (ADS)
Akiki, Georges; Francois, Marianne; Zhang, Duan
2017-11-01
Particle-fluid interaction forces are essential in modeling multiphase flows. Several models can be found in the literature based on empirical, numerical, and experimental results from various simplified flow conditions. Some of these models also account for finite Mach number effects. Using these models is relatively straightforward with Eulerian-Lagrangian calculations if the model for the total force on particles is used. In Eulerian-Eulerian simulations, however, there are pressure gradient terms in the momentum equation for particles. For low Mach number flows, the pressure gradient force is negligible if the particle density is much greater than that of the fluid. For supersonic flows where a standing shock is present, even for a steady and uniform flow, it is unclear whether the significant pressure-gradient force should be separated out from the particle force model. To answer this conceptual question, we perform single-sphere fully-resolved DNS simulations for a wide range of Mach numbers. We then examine whether the total force obtained from the DNS can be categorized into well-established models, such as the quasi-steady, added-mass, pressure-gradient, and history forces. Work sponsored by Advanced Simulation and Computing (ASC) program of NNSA and LDRD-CNLS of LANL.
Bifurcations: Focal Points of Particle Adhesion in Microvascular Networks
Prabhakarpandian, Balabhaskar; Wang, Yi; Rea-Ramsey, Angela; Sundaram, Shivshankar; Kiani, Mohammad F.; Pant, Kapil
2011-01-01
Objective Particle adhesion in vivo is dependent on the microcirculation environment, which features unique anatomical (bifurcations, tortuosity, cross-sectional changes) and physiological (complex hemodynamics) characteristics. The mechanisms behind these complex phenomena are not well understood. In this study, we used a recently developed in vitro model of microvascular networks, called Synthetic Microvascular Network, for characterizing particle adhesion patterns in the microcirculation. Methods Synthetic microvascular networks were fabricated using soft lithography processes followed by particle adhesion studies using avidin and biotin-conjugated microspheres. Particle adhesion patterns were subsequently analyzed using CFD-based modeling. Results Experimental and modeling studies highlighted the complex and heterogeneous fluid flow patterns encountered by particles in microvascular networks, resulting in a significantly higher propensity for adhesion (>1.5X) near bifurcations compared to the branches of the microvascular networks. Conclusion Bifurcations are the focal points of particle adhesion in microvascular networks. Changing flow patterns and morphology near bifurcations are the primary factors controlling the preferential adhesion of functionalized particles in microvascular networks. Synthetic microvascular networks provide an in vitro framework for understanding particle adhesion. PMID:21418388
TMI-2 upper-core particle bed thermal behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuan, P.
1987-08-01
Models of dryout heat fluxes of particle beds believed to be applicable to the TMI-2 upper-core particle bed are reviewed and developed. A simplified Lipinski model and a model based on flooding are shown to agree with each other and with experiments. These models are applied to the calculation of the dryout heat flux of the TMI-2 upper-core particle bed. The TMI-2 upper-core particle bed is shown to be: (a) coolable, if little heat is transferred to it from the consolidated region below, (b) only marginally coolable, if not uncoolable, before material relocation from the consolidated region, if most of the heat in the consolidated region is transferred to it, and (c) coolable, after the relocation, regardless of heat transfer from the remaining consolidated region. Based on an analogy to quenching experiments, which show that the heat flux during the quench of a particle bed is approximately equal to the dryout heat flux, the time required to quench the TMI-2 upper-core particle bed from 2000 K to the saturation temperature of water during the accident is estimated. The bed was either quenched by 225 min after the initiation of the accident (assuming no heat was transferred to it from the consolidated region) or, at the latest, by 245 min (20 min after molten material relocation to the lower plenum from the consolidated region; assuming most of the heat generated in the consolidated region, both before and after the relocation, was transferred to the particle bed).
Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K
2005-01-01
In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP for nucleon/meson and neutron transport respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for the nucleus-nucleus reactions, and improvement of the models used for calculating total reaction cross sections, and addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn up processes. As a part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions has passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle up to 4 degrees. © 2005 Published by Elsevier Ltd on behalf of COSPAR.
A model of impulsive acceleration and transport of energetic particles in Mercury's magnetosphere
NASA Technical Reports Server (NTRS)
Baker, D. N.; Simpson, J. A.; Eraker, J. H.
1986-01-01
A qualitative model of substorm processes in the Mercury magnetosphere is presented based on Mariner 10 observations obtained in 1974-1975. The model is predicated on close analogies observed with the terrestrial case. Particular emphasis is given to energetic particle phenomena as observed by Mariner on March 29, 1974. The suggestion is supported that energetic particles up to about 500 keV are produced by strong induced electric fields at 3 to about 6 Mercury radii in the Hermean tail in association with substorm neutral line formation. The bursts of energetic particles produced are, in this model, subsequently confined on closed field lines near Mercury and drift adiabatically on quasi-trapped orbits for many tens of seconds. Such gradient and curvature drift of the particles can explain prominent periodicities of 5-10 s seen in the Mariner for greater than 170-keV electron flux profiles.
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
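A minimal sketch of the Exponential Settling Model discussed above: the proportion of released particles still in suspension decays exponentially with transport distance, with the settling length commonly parameterized as lambda = u*h/v_s. The stream parameters below are assumptions.

```python
import numpy as np

def esm_suspended_fraction(x, u, h, v_s):
    """ESM: fraction of particles not yet settled after travelling distance x,
    for mean velocity u, depth h and particle settling velocity v_s."""
    lam = u * h / v_s          # characteristic settling length [m]
    return np.exp(-x / lam)

# Assumed stream: u = 0.3 m/s, depth 0.2 m, FPOM settling velocity 1 mm/s
for x in (10.0, 50.0, 100.0):
    f = esm_suspended_fraction(x, u=0.3, h=0.2, v_s=1e-3)
    print(f"x = {x:5.0f} m   fraction still suspended = {f:.2f}")
```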
Perkins, Elizabeth L; Basu, Saikat; Garcia, Guilherme J M; Buckmire, Robert A; Shah, Rupali N; Kimbell, Julia S
2018-03-01
Objectives Vocal fold granulomas are benign lesions of the larynx commonly caused by gastroesophageal reflux, intubation, and phonotrauma. Current medical therapy includes inhaled corticosteroids to target inflammation that leads to granuloma formation. Particle sizes of commonly prescribed inhalers range from 1 to 4 µm. The study objective was to use computational fluid dynamics to investigate deposition patterns over a range of particle sizes of inhaled corticosteroids targeting the larynx and vocal fold granulomas. Study Design Retrospective, case-specific computational study. Setting Tertiary academic center. Subjects/Methods A 3-dimensional anatomically realistic computational model of a normal adult airway from mouth to trachea was constructed from 3 computed tomography scans. Virtual granulomas of varying sizes and positions along the vocal fold were incorporated into the base model. Assuming steady-state, inspiratory, turbulent airflow at 30 L/min, computational fluid dynamics was used to simulate respiratory transport and deposition of inhaled corticosteroid particles ranging from 1 to 20 µm. Results Laryngeal deposition in the base model peaked for particle sizes of 8 to 10 µm (2.8%-3.5%). Ideal sizes ranged from 6 to 10, 7 to 13, and 7 to 14 µm for small, medium, and large granuloma sizes, respectively. Glottic deposition was maximal at 10.8% for 9-µm-sized particles for the large posterior granuloma, 3 times the normal model (3.5%). Conclusion As the virtual granuloma size increased and the location became more posterior, glottic deposition and ideal particle size generally increased. This preliminary study suggests that inhalers with larger particle sizes, such as fluticasone propionate dry-powder inhaler, may improve laryngeal drug deposition. Most commercially available inhalers have smaller particles than suggested here.
Thermal conduction in particle packs via finite elements
NASA Astrophysics Data System (ADS)
Lechman, Jeremy B.; Yarrington, Cole; Erikson, William; Noble, David R.
2013-06-01
Conductive transport in heterogeneous materials composed of discrete particles is a fundamental problem for a number of applications. While analytical results and rigorous bounds on effective conductivity in mono-sized particle dispersions are well established in the literature, the methods used to arrive at these results often fail when the average size of particle clusters becomes large (i.e., near the percolation transition where particle contact networks dominate the bulk conductivity). Our aim is to develop general, efficient numerical methods that would allow us to explore this behavior and compare to a recent microstructural description of conduction in this regime. To this end, we present a finite element analysis approach to modeling heat transfer in granular media with the goal of predicting effective bulk thermal conductivities of particle-based heterogeneous composites. Our approach is verified against theoretical predictions for random isotropic dispersions of mono-disperse particles at various volume fractions up to close packing. Finally, we present results for the probability distribution of the effective conductivity in particle dispersions generated by Brownian dynamics, and suggest how this might be useful in developing stochastic models of effective properties based on the dynamical process involved in creating heterogeneous dispersions.
Phase space effects on fast ion distribution function modeling in tokamaks
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.
2016-05-01
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
Constraints on Particle Sizes in Saturn's G Ring from Ring Plane Crossing Observations
NASA Astrophysics Data System (ADS)
Throop, H. B.; Esposito, L. W.
1996-09-01
The ring plane crossings in 1995--96 allowed earth-based observations of Saturn's diffuse rings (Nicholson et al., Nature 272, 1996; De Pater et al., Icarus 121, 1996) at a phase angle of alpha ~ 5 deg. We calculate the G ring reflectance for steady state distributions of dust to km-sized bodies from a range of physical models which track the evolution of the G ring from its initial formation following the disruption of a progenitor satellite (Canup & Esposito 1996, Icarus, in press). We model scattering from the ring's small particles using an exact T-matrix method for nonspherical, absorptive particles (Mishchenko et al. 1996, JGR Atmos., in press), large particles using the phase function and spectrum of Europa, and intermediate particles using a linear combination of the small and large limits. Two distinct particle size distributions from the CE96 model fit the observed spectrum. The first is that of a dusty ring, with the majority of ring reflectance in dust particles of relatively shallow power-law size distribution exponent q ~ 2.5. The second has equal reflectances from a) dust in the range q ~ 3.5 -- 6.5 and b) macroscopic bodies > 1 mm. In this second case, the respective slightly blue and red components combine to form the observed relatively flat spectrum. Although light scattering in backscatter is not sufficient to completely constrain the G ring size distribution, the distributions predicted by the CE96 model can explain the earth-based observations.
Predictions of spray combustion interactions
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.
1984-01-01
Mean and fluctuating phase velocities; mean particle mass flux; particle size; and mean gas-phase Reynolds stress, composition and temperature were measured in stationary, turbulent, axisymmetric, and flows which conform to the boundary layer approximations while having well-defined initial and boundary conditions in dilute particle-laden jets, nonevaporating sprays, and evaporating sprays injected into a still air environment. Three models of the processes, typical of current practice, were evaluated. The local homogeneous flow and deterministic separated flow models did not provide very satisfactory predictions over the present data base. In contrast, the stochastic separated flow model generally provided good predictions and appears to be an attractive approach for treating nonlinear interphase transport processes in turbulent flows containing particles (drops).
Ramirez, Samuel A.; Elston, Timothy C.
2018-01-01
Polarity establishment, the spontaneous generation of asymmetric molecular distributions, is a crucial component of many cellular functions. Saccharomyces cerevisiae (yeast) undergoes directed growth during budding and mating, and is an ideal model organism for studying polarization. In yeast and many other cell types, the Rho GTPase Cdc42 is the key molecular player in polarity establishment. During yeast polarization, multiple patches of Cdc42 initially form, then resolve into a single front. Because polarization relies on strong positive feedback, it is likely that the amplification of molecular-level fluctuations underlies the generation of multiple nascent patches. In the absence of spatial cues, these fluctuations may be key to driving polarization. Here we used particle-based simulations to investigate the role of stochastic effects in a Turing-type model of yeast polarity establishment. In the model, reactions take place either between two molecules on the membrane, or between a cytosolic and a membrane-bound molecule. Thus, we developed a computational platform that explicitly simulates molecules at and near the cell membrane, and implicitly handles molecules away from the membrane. To evaluate stochastic effects, we compared particle simulations to deterministic reaction-diffusion equation simulations. Defining macroscopic rate constants that are consistent with the microscopic parameters for this system is challenging, because diffusion occurs in two dimensions and particles exchange between the membrane and cytoplasm. We address this problem by empirically estimating macroscopic rate constants from appropriately designed particle-based simulations. Ultimately, we find that stochastic fluctuations speed polarity establishment and permit polarization in parameter regions predicted to be Turing stable. These effects can operate at Cdc42 abundances expected of yeast cells, and promote polarization on timescales consistent with experimental results. To our knowledge, our work represents the first particle-based simulations of a model for yeast polarization that is based on a Turing mechanism. PMID:29529021
Modelling Students' Visualisation of Chemical Reaction
ERIC Educational Resources Information Center
Cheng, Maurice M. W.; Gilbert, John K.
2017-01-01
This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…
Longest, P Worth; Vinchurkar, Samir
2007-04-01
A number of research studies have employed a wide variety of mesh styles and levels of grid convergence to assess velocity fields and particle deposition patterns in models of branching biological systems. Generating structured meshes based on hexahedral elements requires significant time and effort; however, these meshes are often associated with high quality solutions. Unstructured meshes that employ tetrahedral elements can be constructed much faster but may increase levels of numerical diffusion, especially in tubular flow systems with a primary flow direction. The objective of this study is to better establish the effects of mesh generation techniques and grid convergence on velocity fields and particle deposition patterns in bifurcating respiratory models. In order to achieve this objective, four widely used mesh styles including structured hexahedral, unstructured tetrahedral, flow adaptive tetrahedral, and hybrid grids have been considered for two respiratory airway configurations. Initial particle conditions tested are based on the inlet velocity profile or the local inlet mass flow rate. Accuracy of the simulations has been assessed by comparisons to experimental in vitro data available in the literature for the steady-state velocity field in a single bifurcation model as well as the local particle deposition fraction in a double bifurcation model. Quantitative grid convergence was assessed based on a grid convergence index (GCI), which accounts for the degree of grid refinement. The hexahedral mesh was observed to have GCI values that were an order of magnitude below the unstructured tetrahedral mesh values for all resolutions considered. Moreover, the hexahedral mesh style provided GCI values of approximately 1% and reduced run times by a factor of 3. Based on comparisons to empirical data, it was shown that inlet particle seedings should be consistent with the local inlet mass flow rate. Furthermore, the mesh style was found to have an observable effect on cumulative particle depositions with the hexahedral solution most closely matching empirical results. Future studies are needed to assess other mesh generation options including various forms of the hybrid configuration and unstructured hexahedral meshes.
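A minimal sketch of a Roache-type grid convergence index of the kind used for the quantitative grid-convergence assessment; the three-grid deposition values below are illustrative assumptions, not results from the study.

```python
import numpy as np

def gci(f_coarse, f_fine, r, p, Fs=1.25):
    """Grid convergence index of the fine-grid solution:
    GCI = Fs * |(f_coarse - f_fine)/f_fine| / (r^p - 1)."""
    eps = abs((f_coarse - f_fine) / f_fine)
    return Fs * eps / (r**p - 1.0)

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from three grids (fine f1, medium f2, coarse f3)
    with a constant refinement ratio r."""
    return np.log((f3 - f2) / (f2 - f1)) / np.log(r)

# Assumed example: deposition fractions on three successively refined meshes
f1, f2, f3, r = 0.152, 0.150, 0.143, 2.0
p = observed_order(f1, f2, f3, r)
print(f"observed order p = {p:.2f},  fine-grid GCI = {100*gci(f2, f1, r, p):.2f} %")
```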
An Alternative Proposal for the Graphical Representation of Anticolor Charge
ERIC Educational Resources Information Center
Wiener, Gergried J.; Schmeling, Sascha M.; Hopf, Martin
2017-01-01
We have developed a learning unit based on the Standard Model of particle physics, featuring novel typographic illustrations of elementary particles and particle systems. Since the unit includes antiparticles and systems of antiparticles, a visualization of anticolor charge was required. We propose an alternative to the commonly used…
A Model-Based Prognostics Approach Applied to Pneumatic Valves
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Goebel, Kai
2011-01-01
Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
Properties of forced convection experimental with silicon carbide based nano-fluids
NASA Astrophysics Data System (ADS)
Soanker, Abhinay
With the advent of nanotechnology, many fields of Engineering and Science took a leap to the next level of advancements. The broad scope of nanotechnology initiated many studies of heat transfer and thermal engineering. Nano-fluids are one such technology and can be thought of as engineered colloidal fluids with nano-sized colloidal particles. There are different types of nano-fluids based on the colloidal particles and base fluids. Nano-fluids can primarily be categorized into metallic, ceramic, oxide, magnetic and carbon based. The present work is part of an investigation of the thermal and rheological properties of ceramic-based nano-fluids. An alpha-silicon carbide nano-fluid with a 50-50% by volume ethylene glycol and water mixture as the base fluid was used here. This work is divided into three parts: theoretical modelling of the effective thermal conductivity (ETC) of colloidal fluids, study of the thermal and rheological properties of alpha-SiC nano-fluids, and determination of the heat transfer properties of alpha-SiC nano-fluids. In the first part of this work, a theoretical model for the ETC of static colloidal fluids was formulated based on the particle size, shape (spherical), thermal conductivity of the base fluid and of the colloidal particle, along with the particle distribution pattern in the fluid. A MATLAB program was written to evaluate this model. The model is derived specifically for the least and greatest possible ETC enhancement, and thereby the lower and upper bounds were determined. In addition, the ETC was calculated for a uniform colloidal distribution pattern. The effect of volume concentration on ETC was studied. No effect of particle size was observed for particle sizes below a certain value. Results of this model were compared with the Wiener bounds and the Hashin-Shtrikman bounds. The second part of this work is a study of the thermal and rheological properties of alpha-silicon carbide based nano-fluids. The nano-fluid properties were tested at three different volume concentrations: 0.55%, 1% and 1.6%. Thermal conductivity was measured for the three volume concentrations as a function of temperature. The thermal conductivity enhancement increased with temperature, which may be attributed to increased Brownian motion of the colloidal particles at higher temperatures. Measured thermal conductivity values were compared with results obtained from the theoretical model derived in this work. The effects of temperature and volume concentration on viscosity were also measured and reported. Viscosity increase and its related consequences are important issues for the use of nano-fluids. Extensive measurements of heat transfer and pressure drop for forced convection of nano-fluids in circular pipes were also conducted. Parameters such as the heat transfer coefficient, Nusselt number, pressure drop and a thermal-hydraulic performance factor that takes into account the gains made by the increase in thermal conductivity as well as the penalties related to the increase in pressure drop were evaluated for the laminar and transition flow regimes. No significant improvement in heat transfer (Nusselt number) compared to the base fluid was observed. It was also observed that the thermal-hydraulic performance factor (change in heat transfer/change in pressure drop) was below unity for many flow conditions, indicating poor overall applicability of SiC-based nano-fluids.
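A minimal sketch of the standard two-phase Wiener and Hashin-Shtrikman bounds against which the ETC model of this work is compared; the conductivity values and volume fraction below are assumptions.

```python
def wiener_bounds(k_f, k_p, phi):
    """Series (lower) and parallel (upper) Wiener bounds for a particle
    volume fraction phi in a base fluid of conductivity k_f."""
    lower = 1.0 / (phi / k_p + (1.0 - phi) / k_f)
    upper = phi * k_p + (1.0 - phi) * k_f
    return lower, upper

def hashin_shtrikman_bounds(k_f, k_p, phi):
    """Hashin-Shtrikman bounds for a statistically isotropic two-phase mixture
    (assumes k_p >= k_f)."""
    lower = k_f + phi / (1.0 / (k_p - k_f) + (1.0 - phi) / (3.0 * k_f))
    upper = k_p + (1.0 - phi) / (1.0 / (k_f - k_p) + phi / (3.0 * k_p))
    return lower, upper

# Assumed values: alpha-SiC (~120 W/m.K) in 50:50 EG/water (~0.38 W/m.K), 1.6 vol%
k_f, k_p, phi = 0.38, 120.0, 0.016
print("Wiener bounds:          ", wiener_bounds(k_f, k_p, phi))
print("Hashin-Shtrikman bounds:", hashin_shtrikman_bounds(k_f, k_p, phi))
```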
NASA Astrophysics Data System (ADS)
Barberis, Lucas; Peruani, Fernando
2016-12-01
We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
NASA Astrophysics Data System (ADS)
Lv, Lihui; Liu, Wenqing; Zhang, Tianshu; Chen, Zhenyi; Dong, Yunsheng; Fan, Guangqiang; Xiang, Yan; Yao, Yawei; Yang, Nan; Chu, Baolin; Teng, Man; Shu, Xiaowen
2017-09-01
Fine particles with diameters <2.5 μm (PM2.5) have important direct and indirect effects on human life and activities. However, studies of fine particles have been limited by the lack of monitoring data obtained with multiple fixed-site sampling strategies. Mobile monitoring provides a means for broad measurement of fine particles. In this research, the potential use of mobile lidar to map the distribution and transport of fine particles is discussed. The spatial and temporal distributions of particle extinction, PM2.5 mass concentration and the regional transport flux of fine particles in the planetary boundary layer were investigated using vehicle-based mobile lidar and wind field data from north China. Case studies under different pollution levels in Beijing are presented to evaluate the contribution of regional transport. A vehicle-based mobile lidar system was used to obtain the spatial and temporal distributions of particle extinction along the measurement route. A fixed-point lidar and a particulate matter sampler were operated next to each other at the University of Chinese Academy of Sciences (UCAS) in Beijing to determine the relationship between the particle extinction coefficient and PM2.5 mass concentration. The correlation coefficient (R2) between the particle extinction coefficient and PM2.5 mass concentration was found to be over 0.8 when the relative humidity (RH) was less than 90%. A mesoscale meteorological model, the Weather Research and Forecasting (WRF) model, was used to obtain profiles of the horizontal wind speed, wind direction and relative humidity. The vehicle-based mobile lidar technique was applied to estimate the transport flux based on the PM2.5 profile and the vertical profile of the wind data. This method is applicable when hygroscopic growth can be neglected (relative humidity <90%). The southwest was found to be the main transport pathway into Beijing during the experiments.
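A minimal sketch of how a PM2.5 transport flux can be assembled from a lidar-derived concentration profile and a modeled wind profile, as described above: the flux is the vertical integral of concentration times the wind component normal to the boundary. All profile values below are assumptions.

```python
import numpy as np

# Assumed profiles in the planetary boundary layer (illustration only)
z = np.linspace(100.0, 1500.0, 15)            # height bins [m]
pm25 = 80.0 * np.exp(-z / 800.0)              # PM2.5 [ug/m^3], e.g. converted from extinction
wind_speed = 4.0 + 2.0e-3 * z                 # [m/s], e.g. from WRF
wind_dir = np.full_like(z, 225.0)             # meteorological convention: wind from the southwest

boundary_normal = 45.0                        # boundary faces northeast (assumed)
# component of the flow (direction the wind blows toward) normal to the boundary
v_perp = wind_speed * np.cos(np.radians(wind_dir - 180.0 - boundary_normal))

# Flux per unit boundary length: integral of c(z) * v_perp(z) dz  [ug m^-1 s^-1]
integrand = pm25 * v_perp
flux = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))
print(f"PM2.5 transport flux: {flux/1e3:.1f} mg per metre of boundary per second")
```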
Modeling particle number concentrations along Interstate 10 in El Paso, Texas
Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias
2014-01-01
Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 103 particles/cc and 1.3 × 105 particles/cc, and averaged 2.5 × 104 particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 1014 particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentrations estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economical aspects of the two modeling techniques used in this study shows that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
Rahimi-Gorji, Mohammad; Gorji, Tahereh B; Gorji-Bandpy, Mofid
2016-07-01
In the present investigation, detailed two-phase flow modeling of airflow, transport and deposition of micro-particles (1-10 µm) in a realistic tracheobronchial airway geometry based on CT scan images under various breathing conditions (i.e. 10-60 l/min) was considered. Lagrangian particle tracking has been used to investigate the particle deposition patterns in a model comprising the mouth up to generation G6 of the tracheobronchial airways. The results demonstrated that during all breathing patterns, the maximum velocity change occurred in the narrow throat region (larynx). Due to the use of a realistic geometry for the simulations, many irregularities and bending deflections exist in the airway model. Thereby, at higher inhalation rates, these areas are prone to vortical effects which tend to entrap the inhaled particles. According to the results, the deposition fraction has a direct relationship with particle aerodynamic diameter (for dp = 1-10 µm). Increasing the inhalation flow rate and particle size will largely increase the inertial force and, consequently, more particle deposition is evident, suggesting that inertial impaction is the dominant deposition mechanism in the tracheobronchial airways. Copyright © 2016 Elsevier Ltd. All rights reserved.
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
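To make the filtering step concrete, the following sketch (not the authors' dynamic procedure) applies an elliptic differential filter, (1 - alpha^2 d^2/dx^2) u_f = u, to a one-dimensional periodic signal and treats the residual as the modeled subgrid-scale velocity; the signal, grid, and filter width alpha are arbitrary choices.

```python
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(20 * x)          # resolved plus small-scale content (synthetic)

alpha = 0.2                                    # nominal filter width (assumed)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers
u_hat = np.fft.fft(u)
u_filt = np.real(np.fft.ifft(u_hat / (1.0 + (alpha * k) ** 2)))  # elliptic differential filter

# The residual approximates the unresolved (subgrid-scale) part of the velocity
u_sgs = u - u_filt
print("rms of modeled subgrid velocity:", np.sqrt(np.mean(u_sgs**2)))
```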
Combined Experimental and Numerical Simulations of Thermal Barrier Coated Turbine Blades Erosion
NASA Technical Reports Server (NTRS)
Hamed, Awate; Tabakoff, Widen; Swar, Rohan; Shin, Dongyun; Woggon, Nthanial; Miller, Robert
2013-01-01
A combined experimental and computational study was conducted to investigate the erosion of thermal barrier coated (TBC) blade surfaces by alumina particles ingestion in a single stage turbine. In the experimental investigation, tests of particle surface interactions were performed in specially designed tunnels to determine the erosion rates and particle restitution characteristics under different impact conditions. The experimental results show that the erosion rates increase with increased impingement angle, impact velocity and temperature. In the computational simulations, an Euler-Lagrangian two stage approach is used in obtaining numerical solutions to the three-dimensional compressible Reynolds Averaged Navier-Stokes equations and the particles equations of motion in each blade passage reference frame. User defined functions (UDF) were developed to represent experimentally-based correlations for particle surface interaction models which were employed in the three-dimensional particle trajectory simulations to determine the particle rebound characteristics after each surface impact. The experimentally based erosion UDF model was used to predict the TBC erosion rates on the turbine blade surfaces based on the computed statistical data of the particles impact locations, velocities and angles relative to the blade surface. Computational results are presented for the predicted TBC blade erosion in a single stage commercial APU turbine, for a NASA designed automotive turbine, and for the NASA turbine scaled for modern rotorcraft operating conditions. The erosion patterns in the turbines are discussed for uniform particle ingestion and for particle ingestion concentrated in the inner and outer 5 percent of the stator blade span representing the flow cooling the combustor liner.
NASA Astrophysics Data System (ADS)
Song, Dongxing; Jin, Hui; Jing, Dengwei; Wang, Xin
2018-03-01
Aggregation and migration of colloidal particles under a thermal gradient widely exist in nature and in many industrial processes. In this study, the dynamic properties of polydisperse colloidal particles in the presence of a thermal gradient were studied by a modified Brownian dynamics model. In addition to the traditional forces on colloidal particles, including the Brownian force, hydrodynamic force, and electrostatic force from other particles, the electrostatic force arising from the asymmetric ionic diffusion layer under a thermal gradient has been considered and introduced into the Brownian dynamics model. The aggregation ratio of particles (R_A), the balance time (t_B) indicating the time threshold beyond which R_A becomes constant, the porosity (P_BA), the fractal dimension (D_f), and the distributions of concentration (DISC) and aggregation (DISA) of the aggregated particles were discussed based on this model. The aggregated structures formed by polydisperse particles are less dense and the particles therein are loosely bonded. They also show considerable compressibility, as increases in concentration and interparticle potential can significantly increase the fractal dimension. The thermal gradient induces two competitive factors leading to a two-stage migration of particles. When t < t_B, the unsynchronized aggregation is dominant and the particles migrate slightly along the thermal gradient. When t > t_B, thermophoresis becomes dominant and the particles migrate against the thermal gradient. The effect of thermophoresis on the aggregate structures was found to be similar to that of increasing the particle concentration. This study demonstrates how the thermal gradient affects the aggregation of monodisperse and polydisperse particles and can guide biomimetics and the precise control of colloidal systems under a thermal gradient. Moreover, the model can be easily extended to other, more complex colloidal systems considering shear, temperature fluctuations, surfactants, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu
Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG) are evaluated for their ability to construct surrogate models with sparse data; i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle laden flows.
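As a rough illustration of Kriging-based surrogate construction, the sketch below fits an ordinary Gaussian-process (Kriging) surrogate to a handful of hypothetical (Mach number, volume fraction) drag samples; it is a stand-in, not an implementation of the DKG or MBKG methods evaluated in the paper, and the data values are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: (Mach number, particle volume fraction) -> drag coefficient
X = np.array([[1.5, 0.05], [1.5, 0.20], [2.5, 0.05], [2.5, 0.20], [3.5, 0.10]])
y = np.array([2.1, 3.4, 2.6, 4.0, 3.1])

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 0.1])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)

# Query the surrogate (with its uncertainty estimate) at a new flow condition
mean, std = gp.predict(np.array([[2.0, 0.15]]), return_std=True)
print(f"predicted drag coefficient: {mean[0]:.2f} +/- {std[0]:.2f}")
```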
Modeling and simulation of the debonding process of composite solid propellants
NASA Astrophysics Data System (ADS)
Feng, Tao; Xu, Jin-sheng; Han, Long; Chen, Xiong
2017-07-01
In order to study the damage evolution law of composite solid propellants, a particle filling algorithm based on molecular dynamics was used to establish a mesoscopic structure model of HTPB (hydroxyl-terminated polybutadiene) propellant. The cohesive element method was employed for the adhesion interface between the AP (ammonium perchlorate) particles and the HTPB matrix, and a bilinear cohesive zone model was used to describe the mechanical response of the interface elements. An inversion analysis method based on the Hooke-Jeeves optimization algorithm was employed to identify the parameters of the cohesive zone model (CZM) of the particle/binder interface. The optimized parameters were then applied in the commercial finite element software ABAQUS to simulate the damage evolution process of the AP particles and HTPB matrix, including crack initiation, development, coalescence and macroscopic cracking. Finally, the simulated stress-strain curve was compared with the experimental curves. The results show that the bilinear cohesive zone model can accurately describe the debonding and fracture process between the AP particles and the HTPB matrix under uniaxial tension loading.
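The bilinear cohesive zone law referenced above can be written as a simple traction-separation function; the sketch below shows a generic form with hypothetical parameter values, not the calibrated particle/binder interface parameters identified in the study.

```python
def bilinear_traction(delta, delta_0, delta_f, t_max):
    """Generic bilinear cohesive traction-separation law.

    delta   : current opening displacement
    delta_0 : opening at peak traction (damage initiation)
    delta_f : opening at complete failure
    t_max   : peak cohesive traction
    """
    if delta <= 0.0:
        return 0.0                      # no opening, no traction
    if delta < delta_0:
        return t_max * delta / delta_0  # linear elastic (rising) branch
    if delta < delta_f:
        # linear softening branch down to zero traction at delta_f
        return t_max * (delta_f - delta) / (delta_f - delta_0)
    return 0.0                          # interface fully debonded

# Example evaluation with placeholder values (units arbitrary)
print(bilinear_traction(0.5, delta_0=0.2, delta_f=1.0, t_max=3.0))
```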
Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code
NASA Astrophysics Data System (ADS)
Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.
2012-12-01
We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982, Fry et al. 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000, Li et al., 2003, 2005, Verkhoglyadova et al. 2009). The HAFv3 provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes the state-of-the-art GPU computation technique to achieve a rapid physics-based numerical description of the interplanetary energetic particles. Together with a fast execution of the HAFv3 model, the coupled code gives us a possibility to nowcast/forecast the interplanetary radiation environment.
NASA Astrophysics Data System (ADS)
Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto
2016-02-01
Flame-assisted spray dryers are widely used for large-scale production of nanoparticles because of their capability. A numerical approach is needed to predict combustion and particle production in scale-up and optimization, given the difficulty and relatively high cost of experimental observation. Computational Fluid Dynamics (CFD) can resolve the momentum, energy and mass transfer, making it more efficient than experiments in terms of time and cost. Here, two turbulence models, k-ɛ and Large Eddy Simulation, were compared and applied to a flame-assisted spray dryer system. The energy source for particle drying was the combustion of LPG (fuel) with air (oxidizer and carrier gas), modelled as non-premixed combustion in the simulation. Silica particles formed from a silica sol precursor were used for the particle modelling. Based on several comparisons, i.e. flame contour, temperature distribution and particle size distribution, the Large Eddy Simulation turbulence model provided the closest agreement with the experimental results.
The prediction of acoustical particle motion using an efficient polynomial curve fit procedure
NASA Technical Reports Server (NTRS)
Marshall, S. E.; Bernhard, R.
1984-01-01
A procedure is examined whereby the acoustic modal parameters, natural frequencies and mode shapes, in the cavities of transportation vehicles are determined experimentally. The acoustic mode shapes are described in terms of the particle motion. The acoustic modal analysis procedure is tailored to existing minicomputer-based spectral analysis systems.
Let’s have a coffee with the Standard Model of particle physics!
NASA Astrophysics Data System (ADS)
Woithe, Julia; Wiener, Gerfried J.; Van der Veken, Frederik F.
2017-05-01
The Standard Model of particle physics is one of the most successful theories in physics and describes the fundamental interactions between elementary particles. It is encoded in a compact description, the so-called ‘Lagrangian’, which even fits on t-shirts and coffee mugs. This mathematical formulation, however, is complex and only rarely makes it into the physics classroom. Therefore, to support high school teachers in their challenging endeavour of introducing particle physics in the classroom, we provide a qualitative explanation of the terms of the Lagrangian and discuss their interpretation based on associated Feynman diagrams.
Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty
NASA Astrophysics Data System (ADS)
Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team
2017-11-01
A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represented the particle drag history, CD(t), using polynomials up to the third order. An analytical model for the continuous particle position history was derived by integrating an equation relating CD(t) to the particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the least-squares sense. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration histories compared to conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.
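The fitting idea can be sketched as follows: assume a low-order polynomial for CD(t), integrate a simplified quasi-steady drag equation of motion to obtain a position history, and least-squares fit it to the eight measured particle locations. All gas and particle properties and the "measured" positions below are synthetic assumptions, not the experiment's values.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rho_g, u_g = 1.2, 300.0          # post-shock gas density [kg/m^3] and velocity [m/s] (assumed)
d_p, rho_p = 100e-6, 2500.0      # particle diameter [m] and density [kg/m^3] (assumed)
A = np.pi * d_p**2 / 4.0         # frontal area
m = rho_p * np.pi * d_p**3 / 6.0 # particle mass

t_obs = np.linspace(0.0, 4e-4, 8)                                    # eight pulse times [s]
x_obs = np.array([0.0, 0.9, 2.1, 3.5, 5.1, 6.9, 8.9, 11.0]) * 1e-3   # synthetic positions [m]

def position_history(coeffs):
    c0, c1, c2 = coeffs
    def rhs(t, y):
        x, v = y
        cd = c0 + c1 * t + c2 * t**2                    # quadratic CD(t) model
        rel = u_g - v
        a = 0.5 * rho_g * cd * A * rel * abs(rel) / m   # quasi-steady drag acceleration
        return [v, a]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [0.0, 0.0], t_eval=t_obs)
    return sol.y[0]

res = least_squares(lambda c: position_history(c) - x_obs, x0=[1.0, 0.0, 0.0])
print("fitted CD(t) coefficients:", res.x)
```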
Artificial neural network based particle size prediction of polymeric nanoparticles.
Youshia, John; Ali, Mohamed Ehab; Lamprecht, Alf
2017-10-01
Particle size of nanoparticles and the respective polydispersity are key factors influencing their biopharmaceutical behavior in a large variety of therapeutic applications. Predicting these attributes would skip many preliminary studies usually required to optimize formulations. The aim was to build a mathematical model capable of predicting the particle size of polymeric nanoparticles produced by a pharmaceutical polymer of choice. Polymer properties controlling the particle size were identified as molecular weight, hydrophobicity and surface activity, and were quantified by measuring polymer viscosity, contact angle and interfacial tension, respectively. A model was built using artificial neural network including these properties as input with particle size and polydispersity index as output. The established model successfully predicted particle size of nanoparticles covering a range of 70-400nm prepared from other polymers. The percentage bias for particle prediction was 2%, 4% and 6%, for the training, validation and testing data, respectively. Polymer surface activity was found to have the highest impact on the particle size followed by viscosity and finally hydrophobicity. Results of this study successfully highlighted polymer properties affecting particle size and confirmed the usefulness of artificial neural networks in predicting the particle size and polydispersity of polymeric nanoparticles. Copyright © 2017 Elsevier B.V. All rights reserved.
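A minimal sketch of the mapping described above, using a small feed-forward network from the three measured polymer descriptors (viscosity, contact angle, interfacial tension) to particle size and polydispersity index; the architecture and the toy training values are assumptions, not the study's network or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training rows: [viscosity Pa.s, contact angle deg, interfacial tension mN/m]
X = np.array([[0.02, 65.0, 8.5],
              [0.05, 72.0, 6.0],
              [0.10, 80.0, 4.5],
              [0.01, 60.0, 10.0]])
# Targets: [particle size nm, polydispersity index]
y = np.array([[120.0, 0.08],
              [180.0, 0.12],
              [260.0, 0.15],
              [90.0, 0.06]])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Predict size and PDI for a new (hypothetical) polymer
print(model.predict(scaler.transform([[0.03, 68.0, 7.0]])))
```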
Single-particle dispersion in stably stratified turbulence
NASA Astrophysics Data System (ADS)
Sujovolsky, N. E.; Mininni, P. D.; Rast, M. P.
2018-03-01
We present models for single-particle dispersion in vertical and horizontal directions of stably stratified flows. The model in the vertical direction is based on the observed Lagrangian spectrum of the vertical velocity, while the model in the horizontal direction is a combination of a continuous-time eddy-constrained random walk process with a contribution to transport from horizontal winds. Transport at times larger than the Lagrangian turnover time is not universal and dependent on these winds. The models yield results in good agreement with direct numerical simulations of stratified turbulence, for which single-particle dispersion differs from the well-studied case of homogeneous and isotropic turbulence.
A Morphological Approach to the Modeling of the Cold Spray Process
NASA Astrophysics Data System (ADS)
Delloro, F.; Jeandin, M.; Jeulin, D.; Proudhon, H.; Faessel, M.; Bianchi, L.; Meillot, E.; Helfen, L.
2017-12-01
A coating buildup model was developed, the aim of which was to simulate the microstructure of a tantalum coating cold sprayed onto a copper substrate. To do so, a fine 3D characterization of the irregular tantalum powder was first carried out using x-ray microtomography and purpose-developed image analysis algorithms. Particles were grouped by shape into seven classes. Afterward, 3D finite element simulations of the impact of the previously observed particles were performed. Finally, the coating buildup model was developed based on the results of the finite element simulations of particle impact. In its first version, this model is limited to 2D.
A model study of aggregates composed of spherical soot monomers with an acentric carbon shell
NASA Astrophysics Data System (ADS)
Luo, Jie; Zhang, Yongming; Zhang, Qixing
2018-01-01
The influence of morphology on the optical properties of soot particles has gained increasing attention. However, few studies have examined how the way primary particles are coated affects these optical properties. To understand this effect, coated soot particles were simulated using an acentric core-shell monomer (ACM) model, generated by randomly displacing the cores of a concentric core-shell monomer (CCM) model. The single scattering properties of the CCM model with identical fractal parameters were first calculated 50 times to evaluate the optical diversity among different realizations of fractal aggregates with identical parameters. The results show that this diversity cannot be eliminated by averaging over ten random realizations. To preserve the fractal characteristics, 10 realizations of each model were generated based on the same 10 parent fractal aggregates, and the results were then averaged over each set of 10 realizations. The single scattering properties of all models were calculated using the numerically exact multiple-sphere T-matrix (MSTM) method. It is found that the single scattering properties of randomly coated soot particles calculated using the ACM model are extremely close to those obtained with the CCM model and with a homogeneous aggregate (HA) model based on Maxwell-Garnett effective medium theory. Our results differ from previous studies; the reason may be that the differences reported previously were caused by fractal characteristics rather than by the coating models. Our findings indicate that how the individual primary particles are coated has little effect on the single scattering properties of soot particles with acentric core-shell monomers. This work provides guidance for scattering model simplification and model selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podestà, M., E-mail: mpodesta@pppl.gov; Gorelenkova, M.; Fredrickson, E. D.
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
Kinetic Models for Topological Nearest-Neighbor Interactions
NASA Astrophysics Data System (ADS)
Blanchet, Adrien; Degond, Pierre
2017-12-01
We consider systems of agents interacting through topological interactions. These have been shown to play an important part in animal and human behavior. Precisely, the system consists of a finite number of particles characterized by their positions and velocities. At random times a randomly chosen particle, the follower, adopts the velocity of its closest neighbor, the leader. We study the limit of a system size going to infinity and, under the assumption of propagation of chaos, show that the limit kinetic equation is a non-standard spatial diffusion equation for the particle distribution function. We also study the case wherein the particles interact with their K closest neighbors and show that the corresponding kinetic equation is the same. Finally, we prove that these models can be seen as a singular limit of the smooth rank-based model previously studied in Blanchet and Degond (J Stat Phys 163:41-60, 2016). The proofs are based on a combinatorial interpretation of the rank as well as some concentration of measure arguments.
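A toy simulation of the interaction rule described above (a randomly chosen follower adopting the velocity of its nearest neighbor at Poisson-distributed random times) might look like the following; the particle number, box size, and interaction rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, dt, T = 200, 10.0, 0.01, 5.0
pos = rng.uniform(0.0, L, size=(N, 2))
vel = rng.normal(0.0, 1.0, size=(N, 2))
rate = 1.0                                   # per-particle interaction rate (assumed)

t = 0.0
while t < T:
    pos = (pos + vel * dt) % L               # free streaming in a periodic box
    n_events = rng.poisson(rate * N * dt)    # interaction events occurring during dt
    for _ in range(n_events):
        i = rng.integers(N)                  # the follower
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                # its closest neighbor, the leader
        vel[i] = vel[j].copy()               # follower adopts the leader's velocity
    t += dt

print("mean speed after relaxation:", np.linalg.norm(vel, axis=1).mean())
```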
Limits on Momentum-Dependent Asymmetric Dark Matter with CRESST-II.
Angloher, G; Bento, A; Bucci, C; Canonica, L; Defay, X; Erb, A; Feilitzsch, F V; Ferreiro Iachellini, N; Gorla, P; Gütlein, A; Hauff, D; Jochum, J; Kiefer, M; Kluck, H; Kraus, H; Lanfranchi, J-C; Loebell, J; Münster, A; Pagliarone, C; Petricca, F; Potzel, W; Pröbst, F; Reindl, F; Schäffner, K; Schieck, J; Schönert, S; Seidel, W; Stodolsky, L; Strandhagen, C; Strauss, R; Tanzke, A; Trinh Thi, H H; Türkoğlu, C; Uffinger, M; Ulrich, A; Usherov, I; Wawoczny, S; Willers, M; Wüstrich, M; Zöller, A
2016-07-08
The usual assumption in direct dark matter searches is to consider only the spin-dependent or spin-independent scattering of dark matter particles. However, especially in models with light dark matter particles of O(GeV/c^2), operators which carry additional powers of the momentum transfer q^2 can become dominant. One such model based on asymmetric dark matter has been invoked to overcome discrepancies in helioseismology, and an indication was found for a particle with a preferred mass of 3 GeV/c^2 and a cross section of 10^-37 cm^2. Recent data from the CRESST-II experiment, which uses cryogenic detectors based on CaWO_4 to search for nuclear recoils induced by dark matter particles, are used to constrain these momentum-dependent models. The low energy threshold of 307 eV for nuclear recoils of the detector used allows us to rule out the proposed best-fit value above.
Ionization of the Earth's Upper Atmosphere in Large Energetic Particle Events
NASA Astrophysics Data System (ADS)
Wolff, E.; Burrows, J.; Kallenrode, M.; von Koenig, M.; Kuenzi, K. F.; Quack, M.
2001-12-01
Energetic charged particles ionize the upper terrestrial atmosphere. So far, the chemical consequences of precipitating particles have been discussed for solar protons with energies up to a few hundred MeV. We present a refined model for the interaction of energetic particles with the atmosphere based on a Monte Carlo simulation. The model includes higher energies and other particle species, such as energetic solar electrons. Results are presented for well-known solar events, such as July 14, 2000, and are extrapolated to extremely large events, such as Carrington's white light flare in 1859, which from ice cores has been identified as the largest impulsive NO3 event in the interval 1561-1994 (McCracken et al., 2001).
Influence of coal slurry particle composition on pipeline hydraulic transportation behavior
NASA Astrophysics Data System (ADS)
Li-an, Zhao; Ronghuan, Cai; Tieli, Wang
2018-02-01
As a new mode of energy transportation, coal pipeline hydraulic transmission can reduce the energy transportation cost and the fly ash pollution associated with conventional coal transportation. In this study, the effects of average velocity, particle size and pumping time on the particle composition of coal particles during hydraulic conveying were investigated by ring tube tests. Meanwhile, the effects of changes in particle composition on the slurry viscosity, transmission resistance and critical sedimentation velocity were studied based on the experimental data. The experimental and theoretical analyses indicate that changes in slurry particle composition lead to changes in the viscosity, resistance and critical velocity of the slurry. Moreover, based on the previous studies, a calculation model for the critical velocity of coal slurry is proposed.
Force fields of charged particles in micro-nanofluidic preconcentration systems
NASA Astrophysics Data System (ADS)
Gong, Lingyan; Ouyang, Wei; Li, Zirui; Han, Jongyoon
2017-12-01
Electrokinetic concentration devices based on the ion concentration polarization (ICP) phenomenon have drawn much attention due to their simple setup, high enrichment factor, and easy integration with many subsequent processes, such as separation, reaction, and extraction. Despite significant progress in the experimental research, fundamental understanding and detailed modeling of these preconcentration systems are still lacking. The mechanism of the electrokinetic trapping of charged particles is currently limited to a force balance analysis between the electric force and the fluid drag force in an over-simplified one-dimensional (1D) model, which misses many signatures of the actual system. This letter studies the particle trapping phenomena that are not explainable in the 1D model through the calculation of the two-dimensional (2D) force fields. The trapping of charged particles is shown to significantly distort the electric field and fluid flow pattern, which in turn leads to the different trapping behaviors of particles of different sizes. The mechanisms behind the protrusions and instability of the focused band, which are important factors determining overall preconcentration efficiency, are revealed through analyzing the rotating fluxes of particles in the vicinity of the ion-selective membrane. The differences in the enrichment factors of differently sized particles are understood through the interplay between the electric force and convective fluid flow. These results provide insights into the electrokinetic concentration effect, which could facilitate the design and optimization of ICP-based preconcentration systems.
Fish Passage though Hydropower Turbines: Simulating Blade Strike using the Discrete Element Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Romero Gomez, Pedro DJ
Among the hazardous hydraulic conditions affecting anadromous and resident fish during their passage through turbine flows, two are believed to cause considerable injury and mortality: collision on moving blades and decompression. Several methods are currently available to evaluate these stressors in installed turbines, i.e. using live fish or autonomous sensor devices, and in reduced-scale physical models, i.e. registering collisions from plastic beads. However, a priori estimates with computational modeling approaches applied early in the process of turbine design can facilitate the development of fish-friendly turbines. In the present study, we evaluated the frequency of blade strike and the nadir pressure environment by modeling potential fish trajectories with the Discrete Element Method (DEM) applied to fish-like composite particles. In the DEM approach, particles are subjected to realistic hydraulic conditions simulated with computational fluid dynamics (CFD), and particle-structure interactions, representing fish collisions with turbine blades, are explicitly recorded and accounted for in the calculation of particle trajectories. We conducted transient CFD simulations by setting the runner in motion and allowing for better turbulence resolution, a modeling improvement over the conventional practice of simulating the system in steady state, which was also done here. While both schemes yielded comparable bulk hydraulic performance, transient conditions exhibited a visual improvement in describing flow variability. We released streamtraces (steady flow solution) and DEM particles (transient solution) at the same location from where sensor fish (SF) have been released in field studies of the modeled turbine unit. The streamtrace-based results showed better agreement with SF data than the DEM-based nadir pressures did, because the former accounted for the turbulent dispersion at the intake while the latter did not. However, the DEM-based strike frequency is more representative of blade-strike probability than the steady solution is, mainly because DEM particles accounted for the full fish length, thus resolving (instead of modeling) the collision event.
Van den Heuvel, Frank
2014-01-01
Purpose: To present a closed formalism for calculating charged particle radiation damage induced in DNA. The formalism is valid for all types of charged particles and, due to its closed nature, is suited to provide fast conversion of dose to DNA damage. Methods: The induction of double strand breaks in DNA strands residing in irradiated cells is quantified using a single particle model. This leads to a proposal to use the cumulative Cauchy distribution to express the mix of high- and low-LET type damage probability generated by a single particle. A microscopic phenomenological Monte Carlo code is used to fit the parameters of the model as a function of kinetic energy, relating them to the damage to a DNA molecule embedded in a cell. The model is applied for four particles: electrons, protons, alpha particles, and carbon ions. A geometric interpretation of this observation using the impact ionization mean free path as a quantifier allows extension of the model to very low energies. Results: The mathematical expression describes the model adequately according to a chi-square test. This applies to all particle types, with an almost perfect fit for protons, while the other particles show some discrepancies at very low energies. An implementation calculating a strict version of the RBE based on complex damage alone is corroborated by experimental data from the measured RBE. The geometric interpretation generates a unique dimensionless parameter for each type of charged particle. In addition, it predicts a distribution of DNA damage which is different from the current models. PMID:25340636
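The functional form mentioned above is simple to evaluate; the sketch below computes a cumulative Cauchy distribution as a smooth switch between low- and high-LET type damage probability versus kinetic energy, with placeholder location and scale parameters rather than the paper's fitted values.

```python
import numpy as np

def cauchy_cdf(E, E0, gamma):
    """Cumulative Cauchy distribution with location E0 and scale gamma."""
    return 0.5 + np.arctan((E - E0) / gamma) / np.pi

# Illustrative kinetic-energy grid [MeV] and placeholder parameters
E = np.logspace(-1, 2, 5)
print(cauchy_cdf(E, E0=1.0, gamma=0.5))
```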
Predicting performance of polymer-bonded Terfenol-D composites under different magnetic fields
NASA Astrophysics Data System (ADS)
Guan, Xinchun; Dong, Xufeng; Ou, Jinping
2009-09-01
Considering the demagnetization effect, a model to calculate the magnetostriction of a single particle under an applied field is first created. Based on the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach to calculate the average magnetostriction of the composites under any applied field, as well as at saturation, is developed by treating the particle magnetostriction as an eigenstrain. The results calculated with this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction, and with decreasing Young's modulus of the matrix. The influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fraction or particle aspect ratio. Experiments were done to verify the effectiveness of the model; the results indicate that the model can only provide approximate results.
A hand tracking algorithm with particle filter and improved GVF snake model
NASA Astrophysics Data System (ADS)
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To address the problem that accurate hand information cannot be obtained by a particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color-adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and obtaining the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm can reduce the root mean square error of hand tracking by 53% and improve the accuracy of hand tracking in the case of complex and moving backgrounds, even with a large range of occlusion.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Interactive Particle Visualization
NASA Astrophysics Data System (ADS)
Gribble, Christiaan P.
Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales. Effective visualizations of the resulting state will communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves. This chapter discusses two approaches to interactive particle visualization that satisfy these goals: one targeting desktop systems equipped with programmable graphics hardware, and the other targeting moderately sized multicore systems using packet-based ray tracing.
Microfluidic devices for modeling cell-cell and particle-cell interactions in the microvasculature
Prabhakarpandian, Balabhaskar; Shen, Ming-Che; Pant, Kapil; Kiani, Mohammad F.
2011-01-01
Cell-fluid and cell-cell interactions are critical components of many physiological and pathological conditions in the microvasculature. Similarly, particle-cell interactions play an important role in targeted delivery of therapeutics to tissue. Development of in vitro fluidic devices to mimic these microcirculatory processes has been a critical step forward in our understanding of the inflammatory process, development of nano-particulate drug carriers, and developing realistic in vitro models of the microvasculature and its surrounding tissue. However, widely used parallel plate flow based devices and assays have a number of important limitations for studying the physiological conditions in vivo. In addition, these devices are resource hungry and time consuming for performing various assays. Recently developed, more realistic, microfluidic based devices have been able to overcome many of these limitations. In this review, an overview of the fluidic devices and their use in studying the effects of shear forces on cell-cell and cell-particle interactions is presented. In addition, use of mathematical models and Computational Fluid Dynamics (CFD) based models for interpreting the complex flow patterns in the microvasculature are highlighted. Finally, the potential of 3D microfluidic devices and imaging for better representing in vivo conditions under which cell-cell and cell-particle interactions take place are discussed. PMID:21763328
NASA Astrophysics Data System (ADS)
Li, He-Ping; Chen, Jian; Guo, Heng; Jiang, Dong-Jun; Zhou, Ming-Sheng; Department of Engineering Physics Team
2017-10-01
Ion extraction from a plasma under an externally applied electric field involves multi-particle and multi-field interactions, and has wide applications in the fields of materials processing, etching, chemical analysis, etc. In order to develop high-efficiency ion extraction methods, it is indispensable to establish a feasible model to understand the non-equilibrium transport of the charged particles and the evolution of the space charge sheath during the extraction process. Most previous studies of the ion extraction process are based on the electron-equilibrium fluid model, which assumes that the electrons are in a thermodynamic equilibrium state. However, this may be misleading because it neglects electron motion during the sheath formation process. In this study, a non-electron-equilibrium model is established to describe the transport of charged particles in a parallel-plate ion extraction process. The numerical results show that the formation of the Child-Langmuir sheath is mainly caused by charge separation. Thus, the sheath shielding effect will be significantly weakened if charge separation is suppressed during the extraction of the charged particles.
NASA Astrophysics Data System (ADS)
Alizadeh Behjani, Mohammadreza; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew
2017-06-01
Segregation of granules is an undesired phenomenon in which particles in a mixture separate from each other based on differences in their physical and chemical properties. It is, therefore, crucial to control the homogeneity of the system by applying appropriate techniques, which requires a fundamental understanding of the underlying mechanisms. In this study, the effect of particle shape and cohesion has been analysed. As a model system prone to segregation, heap formation in a ternary mixture of particles representing the common ingredients of home washing powders, namely spray-dried detergent powder, tetraacetylethylenediamine, and enzyme placebo (as the minor ingredient), is modelled numerically by the Discrete Element Method (DEM) with the aim of investigating the effect of cohesion/adhesion of the minor component on segregation quality. Non-spherical particle shapes are created in DEM using the clumped-sphere method based on their X-ray tomograms. Experimentally, inter-particle adhesion is generated by coating the minor ingredient (enzyme placebo) with Polyethylene Glycol 400 (PEG 400). The JKR theory is used to model the cohesion/adhesion of coated enzyme placebo particles in the simulation. Tests are carried out experimentally and simulated numerically by mixing the placebo particles (uncoated and coated) with the other ingredients and pouring them into a test box. The simulation and experimental results are compared qualitatively and quantitatively. It is found that coating the minor ingredient in the mixture reduces segregation significantly while the change in flowability of the system is negligible.
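For reference, the adhesion scale in JKR-type contact models is set by the pull-off force, F = (3/2) * pi * w * R_eff, with w the work of adhesion and R_eff the effective radius of the contacting spheres; the sketch below evaluates it for illustrative values, not the calibrated parameters of the PEG-coated placebo particles.

```python
import numpy as np

def jkr_pulloff_force(work_of_adhesion, r1, r2):
    """JKR pull-off (separation) force between two spheres: F = 1.5 * pi * w * R_eff."""
    r_eff = r1 * r2 / (r1 + r2)
    return 1.5 * np.pi * work_of_adhesion * r_eff

# e.g. 0.05 J/m^2 work of adhesion, two 0.5 mm granules (illustrative values)
print(jkr_pulloff_force(0.05, 0.5e-3, 0.5e-3), "N")
```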
Lattice-Boltzmann-based simulations of diffusiophoresis of colloids and cells
NASA Astrophysics Data System (ADS)
Kreft Pearce, Jennifer; Castigliego, Joshua
Increasing environmental degradation due to plastic pollutants requires innovative solutions that facilitate the extraction of pollutants without harming local biota. We present results from a lattice-Boltzmann-based Brownian dynamics simulation of diffusiophoresis and the separation of particles within the system. A gradient in viscosity that mimics a concentration gradient in a dissolved polymer allows us to separate various types of particles based on their deformability. As seen in previous experiments, simulated particles with higher deformability react differently to the polymer matrix than those with lower deformability; therefore, the particles can be separated from each other. The system described above was simulated with various concentration gradients as well as various Soret coefficients in order to optimize the separation of the particles. This simulation, in particular, was intended to model an oceanic system in which the particles of interest were motile and nonmotile plankton and microplastics. The separation of the plankton from the microplastics was achieved.
Nonlinear data assimilation using synchronization in a particle filter
NASA Astrophysics Data System (ADS)
Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan
2017-04-01
Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is a particle filter, which provides a representation of the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. The performance can be improved by considering the proposal density freedom. A potential choice of proposal density might come from the synchronisation theory, in which one tries to synchronise the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps growth of instabilities on the synchronisation manifold. When only part of the system is observed synchronization can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed using Lorenz96 model for 20, 100 and 1000-dimension systems. Results show state-averaged synchronisation errors smaller than observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states to the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles, avoiding filter degeneracy by construction. Promising results will be shown on low- and high-dimensional Lorenz96 models, and the pros and cons of these new ideas will be discussed.
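A minimal sketch of the synchronization (one-way coupling via observations) idea on the Lorenz96 model is given below: each ensemble member is nudged toward the observed components by an extra coupling term added to the model equations. This is a simplified illustration with an arbitrary gain and observation setup, not the ensemble time-embedding or implicit equal-weights particle filter described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, F, dt = 20, 8.0, 0.005

def lorenz96(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

truth = rng.normal(size=n)
particles = truth + rng.normal(scale=1.0, size=(50, n))    # 50-member ensemble
K = 0.5                                                    # nudging (synchronization) gain

for step in range(2000):
    truth = truth + lorenz96(truth) * dt
    obs = truth[::2] + rng.normal(scale=0.1, size=n // 2)  # observe every other variable
    for p in particles:
        p += lorenz96(p) * dt
        p[::2] += K * (obs - p[::2]) * dt                  # extra coupling term via observations

err = np.abs(particles.mean(axis=0) - truth).mean()
print("state-averaged synchronization error:", err)
```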
Controlling silk fibroin particle features for drug delivery
Lammel, Andreas; Hu, Xiao; Park, Sang-Hyug; Kaplan, David L.; Scheibel, Thomas
2010-01-01
Silk proteins are a promising material for drug delivery due to their aqueous processability, biocompatibility, and biodegradability. A simple aqueous preparation method for silk fibroin particles with controllable size, secondary structure and zeta potential is reported. The particles were produced by salting out a silk fibroin solution with potassium phosphate. The effect of ionic strength and pH of potassium phosphate solution on the yield and morphology of the particles was determined. Secondary structure and zeta potential of the silk particles could be controlled by pH. Particles produced by salting out with 1.25 M potassium phosphate pH 6 showed a dominating silk II (crystalline) structure whereas particles produced at pH 9 were mainly composed of silk I (less crystalline). The results show that silk I rich particles possess chemical and physical stability and secondary structure which remained unchanged during post treatments even upon exposure to 100% ethanol or methanol. A model is presented to explain the process of particle formation based on intra- and intermolecular interactions of the silk domains, influenced by pH and kosmotrope salts. The reported silk fibroin particles can be loaded with small molecule model drugs, such as alcian blue, rhodamine B, and crystal violet, by simple absorption based on electrostatic interactions. In vitro release of these compounds from the silk particles depends on charge – charge interactions between the compounds and the silk. With crystal violet we demonstrated that the release kinetics are dependent on the secondary structure of the particles. PMID:20219241
Uncertainty in simulated groundwater-quality trends in transient flow
Starn, J. Jeffrey; Bagtzoglou, Amvrossios; Robbins, Gary A.
2013-01-01
In numerical modeling of groundwater flow, the result of a given solution method is affected by the way in which transient flow conditions and geologic heterogeneity are simulated. An algorithm is demonstrated that simulates breakthrough curves at a pumping well by convolution-based particle tracking in a transient flow field for several synthetic basin-scale aquifers. In comparison to grid-based (Eulerian) methods, the particle (Lagrangian) method is better able to capture multimodal breakthrough caused by changes in pumping at the well, although the particle method may be apparently nonlinear because of the discrete nature of particle arrival times. Trial-and-error choice of number of particles and release times can perhaps overcome the apparent nonlinearity. Heterogeneous aquifer properties tend to smooth the effects of transient pumping, making it difficult to separate their effects in parameter estimation. Porosity, a new parameter added for advective transport, can be accurately estimated using both grid-based and particle-based methods, but predictions can be highly uncertain, even in the simple, nonreactive case.
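The convolution idea can be sketched compactly: build a travel-time (impulse-response) distribution from particle arrival times and convolve it with a time-varying source history to obtain a breakthrough curve at the well. The arrival-time distribution and source history below are synthetic placeholders, not results from the synthetic aquifers in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 100.0, 1.0)                               # time [years]

# Impulse response from synthetic particle travel times (lognormal placeholder)
arrivals = rng.lognormal(mean=3.0, sigma=0.5, size=5000)
impulse, _ = np.histogram(arrivals, bins=np.append(t, t[-1] + 1.0), density=True)

# Step change in source concentration, then convolution to get the breakthrough curve
source = np.where(t < 40.0, 1.0, 0.2)
breakthrough = np.convolve(source, impulse)[:t.size] * (t[1] - t[0])
print("peak simulated concentration:", breakthrough.max())
```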
NASA Astrophysics Data System (ADS)
Raitoharju, Matti; Nurminen, Henri; Piché, Robert
2015-12-01
Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
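As a rough illustration of the fusion step (not the paper's proposed linear PDR model), the sketch below runs a generic linear Kalman filter that propagates the position with a dead-reckoned displacement and corrects it with a noisy WLAN fix; the noise levels and measurements are assumed values.

```python
import numpy as np

x = np.zeros(2)               # planar position estimate [m]
P = np.eye(2) * 100.0         # initial covariance
Q = np.eye(2) * 0.05          # process noise (PDR step uncertainty, assumed)
R = np.eye(2) * 9.0           # WLAN fix noise, ~3 m standard deviation (assumed)

def kf_step(x, P, pdr_step, wlan_fix):
    # Predict: propagate the state with the dead-reckoned displacement
    x = x + pdr_step
    P = P + Q
    # Update: correct with the WLAN position measurement
    S = P + R
    K = P @ np.linalg.inv(S)
    x = x + K @ (wlan_fix - x)
    P = (np.eye(2) - K) @ P
    return x, P

x, P = kf_step(x, P, pdr_step=np.array([0.7, 0.1]), wlan_fix=np.array([1.0, 0.0]))
print(x, np.diag(P))
```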
Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha
2016-09-15
Accurate prediction of stormwater quality is essential for developing effective pollution mitigation strategies. The use of models incorporating simplified mathematical replications of pollutant processes is the common practice for determining stormwater quality. However, an inherent process uncertainty arises due to the intrinsic variability associated with pollutant processes, which has neither been comprehensively understood nor well accounted for in uncertainty assessment of stormwater quality modelling. This review provides the context for defining and quantifying the uncertainty associated with pollutant build-up and wash-off on urban impervious surfaces based on the hypothesis that particle size is predominant in influencing process variability. Critical analysis of published research literature brings scientific evidence together in order to establish the fact that particle size changes with time, and different sized particles exhibit distinct behaviour during build-up and wash-off, resulting in process variability. Analysis of the different adsorption behaviour of particles confirmed that the variations in pollutant load and composition are influenced by particle size. Particle behaviour and variations in pollutant load and composition are related due to the strong affinity of pollutants such as heavy metals and hydrocarbons for specific particle size ranges. As such, the temporal variation in particle size is identified as the key to establishing a basis for assessing build-up and wash-off process uncertainty. Therefore, accounting for pollutant build-up and wash-off process variability, which is influenced by particle size, would facilitate the assessment of the uncertainty associated with modelling outcomes. Furthermore, the review identified fundamental knowledge gaps where further research is needed in relation to: (1) the aggregation of particles suspended in the atmosphere during build-up; (2) particle re-suspension during wash-off; (3) pollutant re-adsorption by different particle size fractions; (4) the development of evidence-based techniques for assessing uncertainty; and (5) methods for translating the knowledge acquired from the investigation of process mechanisms at small scale into catchment scale for stormwater quality modelling. Copyright © 2016 Elsevier Ltd. All rights reserved.
LINKING THE CMAQ AND HYSPLIT MODELING SYSTEM INTERFACE PROGRAM AND EXAMPLE APPLICATION
A new software tool has been developed to link the Eulerian-based Community Multiscale Air Quality (CMAQ) modeling system with the Lagrangian-based HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model. Both models require many of the same hourly meteorological...
NASA Astrophysics Data System (ADS)
Manoylov, Anton; Lebon, Bruno; Djambazov, Georgi; Pericleous, Koulis
2017-11-01
The aerospace and automotive industries are seeking advanced materials with low weight yet high strength and durability. Aluminum and magnesium-based metal matrix composites with ceramic micro- and nano-reinforcements promise the desirable properties. However, larger surface-area-to-volume ratio in micro- and especially nanoparticles gives rise to van der Waals and adhesion forces that cause the particles to agglomerate in clusters. Such clusters lead to adverse effects on final properties, no longer acting as dislocation anchors but instead becoming defects. Also, agglomeration causes the particle distribution to become uneven, leading to inconsistent properties. To break up clusters, ultrasonic processing may be used via an immersed sonotrode, or alternatively via electromagnetic vibration. This paper combines a fundamental study of acoustic cavitation in liquid aluminum with a study of the interaction forces causing particles to agglomerate, as well as mechanisms of cluster breakup. A non-linear acoustic cavitation model utilizing pressure waves produced by an immersed horn is presented, and then applied to cavitation in liquid aluminum. Physical quantities related to fluid flow and quantities specific to the cavitation solver are passed to a discrete element method particles model. The coupled system is then used for a detailed study of clusters' breakup by cavitation.
Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Potok, Thomas E
2009-12-01
To better understand insurgent activities and asymmetric warfare, a social adaptive model for modeling multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study using particle swarm modeling, a widely used nonlinear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary requirements for insurgents to efficiently attain their objective.
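For context, the standard particle swarm update rule that underlies the metaphor reads as below; this is the generic optimization form with a placeholder objective, not the report's multi-agent insurgency model.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                        # hypothetical fitness to minimize
    return np.sum(x**2, axis=-1)

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
gbest = pbest[np.argmin(objective(pbest))].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity update: inertia + cognitive (personal best) + social (global best) terms
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    better = objective(pos) < objective(pbest)
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(objective(pbest))].copy()

print("best position found:", gbest)
```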
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
A patient-specific CFD-based study of embolic particle transport for stroke
NASA Astrophysics Data System (ADS)
Mukherjee, Debanjan; Shadden, Shawn C.
2014-11-01
Roughly 1/3 of all strokes are caused by an embolus traveling to a cerebral artery and blocking blood flow in the brain. A detailed understanding of the dynamics of embolic particles within arteries is the basis for this study. Blood flow velocities and emboli trajectories are resolved using a coupled Euler-Lagrange approach. A computer model of the major arteries is extracted from patient image data. Blood is modeled as a Newtonian fluid, discretized using the finite volume method, with physiologically appropriate inflow and outflow boundary conditions. The embolus trajectory is modeled using Lagrangian particle equations accounting for embolus interaction with the blood as well as the vessel wall. Both one-way and two-way fluid-particle coupling are considered, the latter being implemented using momentum sources added to the discretized flow equations. The study determines individual embolus paths up to the arteries supplying the brain, compares the size-dependent distribution of emboli among vessels superior to the aortic arch, and examines the role of fully coupled blood-embolus interaction in modifying both trajectory and distribution when compared with one-way coupling. Specifically, for intermediate particle sizes the model developed will better characterize the risks for embolic stroke. American Heart Association (AHA) Grant: Embolic Stroke: Anatomic and Physiologic Insights from Image-Based CFD.
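A one-way-coupled version of the Lagrangian particle step can be sketched as follows: a spherical embolus is advanced through a prescribed carrier velocity field with quasi-steady Stokes drag. The velocity field, particle properties, and time step are illustrative assumptions; the patient-specific CFD coupling and wall interaction in the study are not reproduced.

```python
import numpy as np

rho_f, mu = 1060.0, 3.5e-3            # blood density [kg/m^3] and viscosity [Pa.s] (assumed)
d_p, rho_p = 1e-3, 1100.0             # embolus diameter [m] and density [kg/m^3] (assumed)
tau_p = rho_p * d_p**2 / (18.0 * mu)  # Stokes response time of the particle

def fluid_velocity(x, t):
    """Hypothetical pulsatile flow, a stand-in for the CFD-resolved blood velocity field."""
    return np.array([0.3 * (1.0 + 0.5 * np.sin(2 * np.pi * t)), 0.0, 0.0])

x = np.zeros(3)                       # particle position
v = np.zeros(3)                       # particle velocity
dt = 1e-4
for step in range(10000):             # integrate 1 s of motion with explicit Euler
    t = step * dt
    u = fluid_velocity(x, t)
    a = (u - v) / tau_p               # quasi-steady drag acceleration
    v = v + a * dt
    x = x + v * dt

print("embolus position after 1 s:", x)
```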
Optimizing parameter of particle damping based on Leidenfrost effect of particle flows
NASA Astrophysics Data System (ADS)
Lei, Xiaofei; Wu, Chengjun; Chen, Peng
2018-05-01
Particle damping (PD) is strongly nonlinear. Under sufficiently vigorous vibration, it provides excellent damping performance, and the particles filling the cavity reach the Leidenfrost state described in particle flow theory. To investigate this phenomenon, the damping effect of PD in this state is examined with a numerical model based on gas-solid flow principles. The numerical model is then extended to study how the Leidenfrost velocity depends on characteristic PD parameters such as particle density, particle diameter, mass packing ratio and diameter-to-length ratio. The results indicate that particle density and mass packing ratio markedly improve the damping performance, whereas particle diameter and diameter-to-length ratio do not; larger mass packing ratios and diameter-to-length ratios also lower the excitation intensity required to reach the Leidenfrost state. To explore engineering applications of this phenomenon, the bound optimization by quadratic approximation (BOBYQA) method is employed to optimize the mass packing ratio of PD so as to minimize the maximum amplitude (MMA) and to minimize the total vibration level (MTVL). Particle damping drastically reduces the vibration amplitude for the MMA objective when the Leidenfrost velocity equals the vibration velocity at maximum amplitude. For MTVL, a larger mass packing ratio is the best option because the particles remain close to the Leidenfrost state over a relatively wide frequency range.
NASA Astrophysics Data System (ADS)
Xinyu-Tan; Duanming-Zhang; Shengqin-Feng; Li, Zhi-hua; Li, Guan; Li, Li; Dan, Liu
2006-05-01
The dynamic characteristics and effects of atoms and particulates ejected from a surface during nanosecond pulsed-laser ablation are very important. In this work, accounting for the inelasticity and non-uniformity of the plasma particles thermally desorbed from a plane surface into vacuum by nanosecond laser ablation, the one-dimensional particle flow is studied with a quasi-molecular dynamics (QMD) simulation. It is assumed that atoms and particulates ejected from the target surface have a Maxwell velocity distribution corresponding to the surface temperature and collide within the ablation plume. The particle mass is continuous and follows a fractal distribution, and the collisions are inelastic. Our results show that inelasticity and non-uniformity strongly affect the dynamic behavior of the particle flow. As the restitution coefficient e decreases and the fractal dimension D increases, the velocity distributions of the plasma particle system deviate from the initial Gaussian distribution. Increasing the dissipated energy ΔE causes the density distribution to cluster and close up toward the center of mass. Predictions of particle behavior based on the proposed fractal and inelastic model agree with experimental observations, which verifies the validity of the present model for the dynamic behavior of pulsed-laser-induced particle flow.
The cohesive law of particle/binder interfaces in solid propellants
NASA Astrophysics Data System (ADS)
Tan, H.
2011-10-01
Solid propellants are treated as composites with high volume fraction of particles embedded in the polymeric binder. A micromechanics model is developed to establish the link between the microscopic behavior of particle/binder interfaces and the macroscopic constitutive information. This model is then used to determine the tension/shearing coupled interface cohesive law of a redesigned solid rocket motor propellant, based on the experimental data of the stress-strain and dilatation-strain curves for the material under slow rate uniaxial tension.
Capturing PM2.5 Emissions from 3D Printing via Nanofiber-based Air Filter.
Rao, Chengchen; Gu, Fu; Zhao, Peng; Sharmin, Nusrat; Gu, Haibing; Fu, Jianzhong
2017-09-04
This study investigated the feasibility of using polycaprolactone (PCL) nanofiber-based air filters to capture PM2.5 particles emitted from fused deposition modeling (FDM) 3D printing. Generation and aggregation of the emitted particles were investigated under different testing environments. The results show that: (1) the PCL nanofiber membranes are capable of capturing particle emissions from 3D printing; (2) relative humidity plays a significant role in the aggregation of the captured particles; (3) generation and aggregation of particles from 3D printing can be divided into four stages: the PM2.5 concentration and particle size increase slowly (first stage), small particles are continuously generated and their concentration increases rapidly (second stage), small particles aggregate into larger particles and the growth in concentration slows down (third stage), and the PM2.5 concentration and aggregate size increase rapidly (fourth stage); and (4) the ultrafine particles, denoted "building units", act as the fundamental components of the aggregated particles. This work has significant implications for controlling particle emissions from 3D printing, which would facilitate the wider application of 3D printing. In addition, this study provides a potential application scenario for nanofiber-based air filters beyond laboratory investigation.
Rethinking the Introduction of Particle Theory: A Substance-Based Framework
ERIC Educational Resources Information Center
Johnson, Philip; Papageorgiou, George
2010-01-01
In response to extensive research exposing students' poor understanding of the particle theory of matter, this article argues that the conceptual framework within which the theory is introduced could be a limiting factor. The standard school particle model is characterized as operating within a "solids, liquids, and gases" framework.…
Neuronal stress following exposure to 56Fe particles and the effects of antioxidant-rich diets
USDA-ARS?s Scientific Manuscript database
Exposing young rats to particles of high energy and charge (HZE particles), a ground-based model for exposure to cosmic rays, enhances indices of oxidative stress and inflammation and disrupts the functioning of neuronal communication in critical regions of the brain. These changes in neuronal funct...
Neuronal stress following exposure to 56Fe particles and the effects of antioxidant-rich diets
USDA-ARS?s Scientific Manuscript database
Exposing young rats to particles of high energy and charge (HZE particles), a ground-based model for exposure to cosmic rays, enhances indices of oxidative stress and inflammation and disrupts the functioning of neuronal communication in critical regions of the brain, similar to those seen in aging....
Internally electrodynamic particle model: Its experimental basis and its predictions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng-Johansson, J. X., E-mail: jxzj@iofpr.or
2010-03-15
The internally electrodynamic (IED) particle model was derived based on overall experimental observations, with the IED process itself being built directly on three experimental facts: (a) electric charges present with all material particles, (b) an accelerated charge generates electromagnetic waves according to Maxwell's equations and Planck energy equation, and (c) source motion produces Doppler effect. A set of well-known basic particle equations and properties become predictable based on first principles solutions for the IED process; several key solutions achieved are outlined, including the de Broglie phase wave, de Broglie relations, Schroedinger equation, mass, Einstein mass-energy relation, Newton's law of gravity, single particle self interference, and electromagnetic radiation and absorption; these equations and properties have long been broadly experimentally validated or demonstrated. A conditioned solution also predicts the Doebner-Goldin equation which emerges to represent a form of long-sought quantum wave equation including gravity. A critical review of the key experiments is given which suggests that the IED process underlies the basic particle equations and properties not just sufficiently but also necessarily.
Internally electrodynamic particle model: Its experimental basis and its predictions
NASA Astrophysics Data System (ADS)
Zheng-Johansson, J. X.
2010-03-01
The internally electrodynamic (IED) particle model was derived based on overall experimental observations, with the IED process itself being built directly on three experimental facts: (a) electric charges present with all material particles, (b) an accelerated charge generates electromagnetic waves according to Maxwell’s equations and Planck energy equation, and (c) source motion produces Doppler effect. A set of well-known basic particle equations and properties become predictable based on first principles solutions for the IED process; several key solutions achieved are outlined, including the de Broglie phase wave, de Broglie relations, Schrödinger equation, mass, Einstein mass-energy relation, Newton’s law of gravity, single particle self interference, and electromagnetic radiation and absorption; these equations and properties have long been broadly experimentally validated or demonstrated. A conditioned solution also predicts the Doebner-Goldin equation which emerges to represent a form of long-sought quantum wave equation including gravity. A critical review of the key experiments is given which suggests that the IED process underlies the basic particle equations and properties not just sufficiently but also necessarily.
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited to ecological and water quality modelling where the definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples, on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae, are also presented.
NASA Astrophysics Data System (ADS)
Eslami, Ghiyam; Esmaeilzadeh, Esmaeil; Pérez, Alberto T.
2016-10-01
Up and down motion of a spherical conductive particle in a dielectric viscous fluid driven by a DC electric field between two parallel electrodes was investigated. A nonlinear differential equation governing the particle dynamics was derived from Newton's second law and solved numerically. All the pertinent dimensionless groups were extracted. In contrast to similar previous works, hydrodynamic interaction between the particle and the electrodes, as well as image electric forces, has been taken into account. Furthermore, the influence on the particle dynamics of the microdischarge produced between the electrodes and the approaching particle has been included in the model. The model results were compared with experimental data available in the literature, as well as with additional experimental data obtained in the present study, showing very good agreement. The results indicate that the wall hydrodynamic effect and the ionic conductivity of the dielectric liquid are dominant factors determining the particle trajectory. A lower bound is derived for the charge transferred to the particle while rebounding from an electrode. It is found that the time and length scales of the post-microdischarge motion of the particle can be as small as a microsecond and a micrometer, respectively. The model is able to predict the so-called settling/dwelling time phenomenon for the first time.
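A minimal sketch of the kind of equation of motion described above, deliberately reduced: Coulomb force from an assumed acquired charge, Stokes drag, gravity and buoyancy only; the image forces, wall hydrodynamic corrections and microdischarge physics of the paper are omitted, and all parameter values are assumed.

```python
# Minimal sketch (simplified, with assumed parameter values): motion of a
# charged conductive sphere in a viscous dielectric liquid under a DC field,
# using Newton's second law with Coulomb force, Stokes drag, gravity and
# buoyancy. Image forces, wall corrections and microdischarges are omitted.
import numpy as np
from scipy.integrate import solve_ivp

r = 0.5e-3                           # particle radius [m]
rho_p, rho_f = 2700.0, 960.0         # particle / liquid density [kg/m^3]
mu = 0.05                            # liquid viscosity [Pa s]
E = 1e6                              # applied field [V/m]
q = 1e-11                            # charge acquired at the electrode [C] (assumed)
m = rho_p * 4 / 3 * np.pi * r**3
g = 9.81

def rhs(t, state):
    z, v = state
    f_el = q * E                                          # Coulomb force
    f_drag = -6 * np.pi * mu * r * v                      # Stokes drag
    f_grav = -(rho_p - rho_f) * 4 / 3 * np.pi * r**3 * g  # gravity minus buoyancy
    return [v, (f_el + f_drag + f_grav) / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
print("height after 0.2 s: %.4f mm" % (1e3 * sol.y[0, -1]))
```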
SEPEM: A tool for statistical modeling the solar energetic particle environment
NASA Astrophysics Data System (ADS)
Crosby, Norma; Heynderickx, Daniel; Jiggens, Piers; Aran, Angels; Sanahuja, Blai; Truscott, Pete; Lei, Fan; Jacobs, Carla; Poedts, Stefaan; Gabriel, Stephen; Sandberg, Ingmar; Glover, Alexi; Hilgers, Alain
2015-07-01
Solar energetic particle (SEP) events are a serious radiation hazard for spacecraft as well as a severe health risk to humans traveling in space. Indeed, accurate modeling of the SEP environment constitutes a priority requirement for astrophysics and solar system missions and for human exploration in space. The European Space Agency's Solar Energetic Particle Environment Modelling (SEPEM) application server is a World Wide Web interface to a complete set of cross-calibrated data ranging from 1973 to 2013 as well as new SEP engineering models and tools. Both statistical and physical modeling techniques have been included, in order to cover the environment not only at 1 AU but also in the inner heliosphere ranging from 0.2 AU to 1.6 AU using a newly developed physics-based shock-and-particle model to simulate particle flux profiles of gradual SEP events. With SEPEM, SEP peak flux and integrated fluence statistics can be studied, as well as durations of high SEP flux periods. Furthermore, effects tools are also included to allow calculation of single event upset rate and radiation doses for a variety of engineering scenarios.
A description of rotations for DEM models of particle systems
NASA Astrophysics Data System (ADS)
Campello, Eduardo M. B.
2015-06-01
In this work, we show how a vector parameterization of rotations can be adopted to describe the rotational motion of particles within the framework of the discrete element method (DEM). It is based on the use of a special rotation vector, called Rodrigues rotation vector, and accounts for finite rotations in a fully exact manner. The use of fictitious entities such as quaternions or complicated structures such as Euler angles is thereby circumvented. As an additional advantage, stick-slip friction models with inter-particle rolling motion are made possible in a consistent and elegant way. A few examples are provided to illustrate the applicability of the scheme. We believe that simple vector descriptions of rotations are very useful for DEM models of particle systems.
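A minimal sketch of a Rodrigues-vector rotation update in the spirit of the paper; the parameterization shown (r = 2 tan(theta/2) times the rotation axis) is one common form and may differ in detail from the authors' scheme.

```python
# Minimal sketch of a Rodrigues-vector rotation. With r = 2*tan(theta/2)*axis,
# the rotation matrix is R = I + 4/(4 + |r|^2) * (S + S^2/2), S = skew(r).
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_rodrigues(r):
    S = skew(r)
    return np.eye(3) + 4.0 / (4.0 + r @ r) * (S + 0.5 * (S @ S))

# Example: 90-degree rotation about the z-axis applied to the x unit vector.
theta = np.pi / 2
r = 2.0 * np.tan(theta / 2.0) * np.array([0.0, 0.0, 1.0])
R = rotation_from_rodrigues(r)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # ~ [0, 1, 0]
```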
Laser Scattering from the Dense Plasma Focus.
plasma focus (DPF) illuminated by a pulse of laser light. Scattering was observable from 10 nanoseconds prior to arrival of the collapse on axis and for an additional 50 nanoseconds. The frequency spectrum is markedly asymmetric about the laser frequency, a feature which is inconsistent with spectral expectations based on thermal particle distributions even if particle drifts or waves excitations are included. A model is postulated which attributes the asymmetry to lateral displacement of scattering region from the axis of the focus. Analysis based on this model yields
Particle acceleration in a complex solar active region modelled by a Cellular automata model
NASA Astrophysics Data System (ADS)
Dauphin, C.; Vilmer, N.; Anastasiadis, A.
2004-12-01
Cellular automaton models have successfully reproduced several statistical properties of solar flares. We use a cellular automaton model based on the concept of a self-organised critical system to model the evolution of the magnetic energy released in an eruptive active region. Each burst of released magnetic energy is treated as a magnetic reconnection event, so that several reconnecting current sheets (RCS) are generated in which particles are accelerated by a direct electric field. We calculate the energy gain of the particles (ions and electrons) for various types of magnetic configuration, and we compute the kinetic energy distribution function of the particles after their interactions with a given number of RCS for each configuration. We show that the relative efficiency of the acceleration of electrons and ions depends on the selected configuration.
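A minimal sketch of a self-organised critical lattice of the generic sandpile type, standing in for the paper's cellular automaton; the driving and toppling rules are assumed, and the subsequent particle-acceleration step in the reconnecting current sheets is not modelled.

```python
# Minimal sketch (generic sandpile-style rules, not the authors' specific CA):
# a self-organised critical lattice whose avalanches stand in for bursts of
# released magnetic energy; each avalanche size could then feed a separate
# particle-acceleration step (not modelled here).
import numpy as np

rng = np.random.default_rng(1)
N, threshold = 32, 4
grid = np.zeros((N, N), dtype=int)

def relax(grid):
    """Topple super-critical sites; return the number of topplings (burst size)."""
    burst = 0
    while True:
        over = np.argwhere(grid >= threshold)
        if len(over) == 0:
            return burst
        for i, j in over:
            grid[i, j] -= 4
            burst += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni, nj] += 1   # open boundaries: energy leaves the grid

burst_sizes = []
for step in range(20000):
    i, j = rng.integers(0, N, 2)
    grid[i, j] += 1                     # slow driving (energy loading)
    burst_sizes.append(relax(grid))

bursts = np.array(burst_sizes)
print("fraction of driving steps producing a burst:", np.mean(bursts > 0))
```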
A discrete element method-based approach to predict the breakage of coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Sun, Xin; Xu, Wei
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.
A discrete element method-based approach to predict the breakage of coal
Gupta, Varun; Sun, Xin; Xu, Wei; ...
2017-08-05
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.
Lagrangian analysis of multiscale particulate flows with the particle finite element method
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Latorre, Salvador; Casas, Guillermo; Rossi, Riccardo; Rojek, Jerzy
2014-05-01
We present a Lagrangian numerical technique for the analysis of flows incorporating physical particles of different sizes. The numerical approach is based on the particle finite element method (PFEM) which blends concepts from particle-based techniques and the FEM. The basis of the Lagrangian formulation for particulate flows and the procedure for modelling the motion of small and large particles that are submerged in the fluid are described in detail. The numerical technique for analysis of this type of multiscale particulate flows using a stabilized mixed velocity-pressure formulation and the PFEM is also presented. Examples of application of the PFEM to several particulate flows problems are given.
Model calibration and validation for OFMSW and sewage sludge co-digestion reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esposito, G., E-mail: giovanni.esposito@unicas.it; Frunzo, L., E-mail: luigi.frunzo@unina.it; Panico, A., E-mail: anpanico@unina.it
2011-12-15
Highlights: > Disintegration is the limiting step of the anaerobic co-digestion process. > Disintegration kinetic constant does not depend on the waste particle size. > Disintegration kinetic constant depends only on the waste nature and composition. > The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed to assess the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure, which can thus be used to assess the treatment efficiency and predict the methane production of full-scale digesters.
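A minimal sketch of surface-based disintegration kinetics of the kind referred to above, assuming spherical, shrinking particles and an illustrative kinetic constant; it shows why particles of different initial size can share the same constant while disintegrating at different overall rates.

```python
# Minimal sketch (illustrative constants) of surface-based disintegration
# kinetics: the mass loss rate of an organic waste particle is proportional
# to its exposed surface area, so particles of different initial size share
# the same kinetic constant K_sbk.
import numpy as np
from scipy.integrate import solve_ivp

rho = 1000.0      # particle density [kg/m^3] (assumed)
K_sbk = 0.05      # surface-based kinetic constant [kg m^-2 d^-1] (assumed)

def shrinking_sphere(t, y):
    """dM/dt = -K_sbk * A(M), with A from the current equivalent diameter."""
    m = max(y[0], 0.0)
    d = (6.0 * m / (np.pi * rho)) ** (1.0 / 3.0)
    return [-K_sbk * np.pi * d ** 2]

for d0 in (0.005, 0.02):   # two initial particle sizes [m]
    m0 = rho * np.pi / 6.0 * d0 ** 3
    sol = solve_ivp(shrinking_sphere, (0.0, 30.0), [m0], max_step=0.1)
    print(f"d0 = {d0 * 1e3:.0f} mm: {100 * sol.y[0, -1] / m0:.1f}% of mass left after 30 d")
```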
NASA Astrophysics Data System (ADS)
Martelloni, Gianluca; Bagnoli, Franco; Guarino, Alessio
2017-09-01
We present a three-dimensional model of rain-induced landslides based on cohesive spherical particles. The rainwater infiltration into the soil follows either the fractional or the fractal diffusion equation. We analytically solve the fractal partial differential equation (PDE) for diffusion with particular boundary conditions to simulate a rainfall event, and we develop a numerical integration scheme for the PDE that is compared with the analytical solution. We adapt the fractal diffusion equation to obtain the gravimetric water content, which we use as input to a triggering scheme based on the Mohr-Coulomb limit-equilibrium criterion. This triggering is then complemented by a standard molecular dynamics algorithm, with an interaction force inspired by the Lennard-Jones potential, to update the positions and velocities of particles. We present results for homogeneous and heterogeneous systems, i.e., systems composed of particles with the same or different radii, respectively. Interestingly, in the heterogeneous case we observe segregation effects due to the different particle volumes. Finally, we analyze the parameter sensitivity of both the triggering and the propagation phases. Our simulations confirm the results of a previous two-dimensional model and therefore the applicability of the approach to real cases.
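A minimal sketch of a Mohr-Coulomb triggering check of the kind used above, reduced to an infinite-slope factor of safety with assumed soil parameters and a simple rising water table standing in for the fractal-diffusion water content.

```python
# Minimal sketch (assumed soil parameters): an infinite-slope Mohr-Coulomb
# factor of safety used as a triggering criterion, with pore pressure growing
# as rainwater infiltrates; the diffusion solution supplying the water content
# in the paper is replaced here by a simple rising water table.
import numpy as np

c, phi = 5000.0, np.radians(30.0)                  # cohesion [Pa], friction angle
gamma, z, beta = 18000.0, 2.0, np.radians(35.0)    # unit weight [N/m^3], depth [m], slope
gamma_w = 9810.0                                   # unit weight of water [N/m^3]

def factor_of_safety(h_w):
    """Mohr-Coulomb FS for an infinite slope with water height h_w above the slip plane."""
    sigma_n = gamma * z * np.cos(beta) ** 2        # normal stress
    tau = gamma * z * np.sin(beta) * np.cos(beta)  # driving shear stress
    u = gamma_w * h_w * np.cos(beta) ** 2          # pore water pressure
    return (c + (sigma_n - u) * np.tan(phi)) / tau

for h_w in np.linspace(0.0, 2.0, 5):   # rising water table during the rain event
    fs = factor_of_safety(h_w)
    print(f"h_w = {h_w:.1f} m -> FS = {fs:.2f}" + ("  (triggered)" if fs < 1 else ""))
```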
Theory and modeling of particles with DNA-mediated interactions
NASA Astrophysics Data System (ADS)
Licata, Nicholas A.
2008-05-01
In recent years significant attention has been attracted to proposals which utilize DNA for nanotechnological applications. Potential applications of these ideas range from the programmable self-assembly of colloidal crystals, to biosensors and nanoparticle based drug delivery platforms. In Chapter I we introduce the system, which generically consists of colloidal particles functionalized with specially designed DNA markers. The sequence of bases on the DNA markers determines the particle type. Due to the hybridization between complementary single-stranded DNA, specific, type-dependent interactions can be introduced between particles by choosing the appropriate DNA marker sequences. In Chapter II we develop a statistical mechanical description of the aggregation and melting behavior of particles with DNA-mediated interactions. In Chapter III a model is proposed to describe the dynamical departure and diffusion of particles which form reversible key-lock connections. In Chapter IV we propose a method to self-assemble nanoparticle clusters using DNA scaffolds. A natural extension is discussed in Chapter V, the programmable self-assembly of nanoparticle clusters where the desired cluster geometry is encoded using DNA-mediated interactions. In Chapter VI we consider a nanoparticle based drug delivery platform for targeted, cell specific chemotherapy. In Chapter VII we present prospects for future research: the connection between DNA-mediated colloidal crystallization and jamming, and the inverse problem in self-assembly.
Adsorption of acids and bases from aqueous solutions onto silicon dioxide particles.
Zengin, Huseyin; Erkan, Belgin
2009-12-30
The adsorption of acids and bases onto the surface of silicon dioxide (SiO(2)) particles was systematically studied as a function of several variables, including activation conditions, contact time, specific surface area, particle size, concentration and temperature. The physical properties of SiO(2) particles were investigated, where characterizations were carried out by FT-IR spectroscopy, and morphology was examined by scanning electron microscopy (SEM). The SEM of samples showed good dispersion and uniform SiO(2) particles with an average diameter of about 1-1.5 microm. The adsorption results revealed that SiO(2) surfaces possessed effective interactions with acids and bases, and greatest adsorption capacity was achieved with NaOH, where the best fit isotherm model was the Freundlich adsorption model. The adsorption properties of raw SiO(2) particles were further improved by ultrasonication. Langmuir monolayer adsorption capacity of NaOH adsorbate at 25 degrees C on sonicated SiO(2) (182.6 mg/g) was found to be greater than that of the unsonicated SiO(2) (154.3 mg/g). The spontaneity of the adsorption process was established by decreases in ΔG°(ads), which varied from -10.5 to -13.6 kJ mol(-1), in the temperature range 283-338 K.
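A minimal sketch of fitting the two isotherm models compared in the abstract (Langmuir and Freundlich) to equilibrium adsorption data; the data points and initial parameter guesses below are synthetic.

```python
# Minimal sketch (synthetic data): fitting Langmuir and Freundlich isotherms
# to equilibrium adsorption data with scipy; q_max, K_L, K_F and n reported
# here are illustrative values only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

def freundlich(Ce, K_F, n):
    return K_F * Ce ** (1.0 / n)

# Synthetic equilibrium data: concentration [mg/L] vs adsorbed amount [mg/g].
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([40.0, 68.0, 110.0, 140.0, 160.0, 172.0])

popt_L, _ = curve_fit(langmuir, Ce, qe, p0=[200.0, 0.05])
popt_F, _ = curve_fit(freundlich, Ce, qe, p0=[20.0, 2.0])
print("Langmuir   q_max=%.1f mg/g, K_L=%.3f L/mg" % tuple(popt_L))
print("Freundlich K_F=%.1f, n=%.2f" % tuple(popt_F))
```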
Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C
2016-07-18
Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). The spatial distribution of the mixing-state index shows that the particles are not mixed in urban areas, while they are well mixed in rural areas. This indicates that the assumption of internal mixing traditionally used in transport chemistry models is well suited to rural areas, but this assumption is less realistic for urban areas close to emission sources.
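A minimal sketch of the particle diversity and mixing-state index of Riemer et al. used above, computed for a small synthetic set of per-particle species masses.

```python
# Minimal sketch (synthetic particle masses): per-particle diversity D_i and
# the mixing-state index chi of Riemer et al. (2013), the two metrics compared
# between simulation and measurement in the abstract.
import numpy as np

# Rows = particles, columns = species masses (arbitrary units, synthetic).
masses = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.4, 0.4, 0.2],
                   [0.1, 0.1, 0.8]])

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

particle_mass = masses.sum(axis=1)
p_particle = particle_mass / particle_mass.sum()       # mass fraction of each particle
p_species_i = masses / particle_mass[:, None]          # per-particle species fractions
p_species_bulk = masses.sum(axis=0) / masses.sum()     # bulk species fractions

H_i = np.array([shannon(row) for row in p_species_i])
D_i = np.exp(H_i)                                      # per-particle diversity
D_alpha = np.exp(np.sum(p_particle * H_i))             # average (alpha) diversity
D_gamma = np.exp(shannon(p_species_bulk))              # bulk (gamma) diversity
chi = (D_alpha - 1.0) / (D_gamma - 1.0)                # mixing-state index

print("average particle diversity:", np.round(D_i.mean(), 2))
print("mixing-state index chi: %.0f%%" % (100 * chi))
```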
NASA Astrophysics Data System (ADS)
Prabhu, T. Ram
2016-08-01
A wear model is developed based on the discrete lattice spring-mass approach to study the effects of particle volume fraction, size, and stiffness on the wear resistance of particle reinforced composites. To study these effects, we consider three volume fractions (10%, 20% and 30%), two particle sizes (10 × 10 and 4 × 4 sites), and two particle stiffnesses, with the particles embedded in the matrix in a regular pattern. In this model, the composite system (400 × 100 sites) is discretized into lumped masses connected by interaction spring elements in two dimensions. The interaction elements are assumed to be linear elastic and ideally plastic under applied forces. Each mass is connected to its first and second nearest neighbors by springs, and matrix and particle sites are differentiated by assigning different stiffness values. The counter surface is simulated as a rigid body that moves over the composite material at a constant sliding speed along the horizontal direction. The governing equations are formed by equating the spring force between each pair of sites, given by Hooke's law plus external contact forces, with the force due to the motion of the site given by the equation of motion. The equations are solved for the plastic strain accumulated in the springs using an explicit time-stepping procedure based on a finite difference form of the above equations. If the total strain accumulated in the spring elements connected to a lumped-mass site exceeds the failure strain, the springs are considered broken, and the mass site is removed (worn away) from the lattice and counted as wear loss. The model predicts that (i) increasing the volume fraction, reducing the particle size and increasing the particle stiffness enhance the wear resistance of particle reinforced composites, (ii) particle stiffness is the most significant factor affecting the wear resistance of the composites, and (iii) the wear resistance decreases above a critical volume fraction (Vc), and Vc increases with increasing particle size. Finally, we qualitatively compare the model results with our previously published experimental results to demonstrate the effectiveness of the model for analyzing complex wear systems.
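A minimal sketch of the elastic/ideal-plastic spring update and the failure-strain wear criterion described above, reduced to a one-dimensional chain with assumed stiffness, yield and failure values rather than the 400 × 100 lattice, and with an imposed stretch history standing in for the contact dynamics.

```python
# Minimal sketch (1D chain, illustrative values): elastic/ideal-plastic springs
# with a failure-strain wear criterion; the sliding counter body is replaced by
# a slowly growing imposed stretch.
import numpy as np

n = 20
k_matrix, k_particle = 1.0, 5.0            # spring stiffnesses (particle sites stiffer)
stiffness = np.full(n - 1, k_matrix)
stiffness[8:12] = k_particle               # an embedded "particle" of 4 sites
yield_strain, failure_strain = 0.02, 0.10
plastic = np.zeros(n - 1)
alive = np.ones(n - 1, dtype=bool)

def spring_forces(stretch):
    """Elastic force capped at the yield strain; the excess accumulates as plastic strain."""
    elastic = np.clip(stretch - plastic, -yield_strain, yield_strain)
    plastic[:] += (stretch - plastic) - elastic    # plastic strain accumulation
    return stiffness * elastic * alive

for step in range(60):
    stretch = 0.005 * step * np.linspace(1.0, 0.2, n - 1)  # imposed surface stretch
    forces = spring_forces(stretch)
    alive &= np.abs(plastic) < failure_strain      # failed springs are "worn away"

print("worn (failed) springs:", np.count_nonzero(~alive), "of", n - 1)
```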
Sawakuchi, Gabriel O; Yukihara, Eduardo G
2012-01-21
The objective of this work is to test analytical models to calculate the luminescence efficiency of Al(2)O(3):C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental data from Al(2)O(3):C OSLDs relative luminescence efficiency for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV µm(-1). In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remain elusive and may represent a limitation of the track structure model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.
Here, integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
Particle-Size-Grouping Model of Precipitation Kinetics in Microalloyed Steels
NASA Astrophysics Data System (ADS)
Xu, Kun; Thomas, Brian G.
2012-03-01
The formation, growth, and size distribution of precipitates greatly affects the microstructure and properties of microalloyed steels. Computational particle-size-grouping (PSG) kinetic models based on population balances are developed to simulate precipitate particle growth resulting from collision and diffusion mechanisms. First, the generalized PSG method for collision is explained clearly and verified. Then, a new PSG method is proposed to model diffusion-controlled precipitate nucleation, growth, and coarsening with complete mass conservation and no fitting parameters. Compared with the original population-balance models, this PSG method saves significant computation and preserves enough accuracy to model a realistic range of particle sizes. Finally, the new PSG method is combined with an equilibrium phase fraction model for plain carbon steels and is applied to simulate the precipitated fraction of aluminum nitride and the size distribution of niobium carbide during isothermal aging processes. Good matches are found with experimental measurements, suggesting that the new PSG method offers a promising framework for the future development of realistic models of precipitation.
NASA Astrophysics Data System (ADS)
Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico Maria; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd
2010-05-01
Pyroclastic flows are ground hugging, hot, gas-particle flows. They represent the most hazardous events of explosive volcanism, one striking example being the famous historical eruption of Pompeii (AD 79) at Vesuvius. Much of our knowledge on the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, i.e. the particles contained in pyroclastic deposits, but they are rarely used for quantifying the destructive potential of pyroclastic flows. In this paper, by means of experiments, we validate a model that is based on data from pyroclastic deposits. It allows the reconstruction of the current's fluid-dynamic behaviour. We show that our model results in likely values of dynamic pressure and particle volumetric concentration, and allows quantifying the hazard potential of pyroclastic flows.
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for the inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has been important for sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity current models employ particles of a uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the unsteady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, with multi-point starts, that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current), and we employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the optimization is started far from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution and often converges to a local optimum that is significantly different from the true solution. In conclusion, we propose an optimization method based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
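A minimal sketch of Simplex-based inverse analysis with multi-point starts, using a toy exponential deposit profile in place of the shallow-water forward model; the reference initial condition is the one quoted above, while the toy forward model and misfit function are assumptions.

```python
# Minimal sketch: inverse analysis with the simplex (Nelder-Mead) method and a
# toy forward model standing in for the 1D shallow-water turbidity current
# model. The "observed" deposit is generated from the reference condition
# [h = 2.0 m, U = 5.0 m/s, C = 0.01%].
import numpy as np
from scipy.optimize import minimize

x_grid = np.linspace(0.0, 50.0, 200)   # downstream distance [km]

def forward(params):
    """Toy deposit-thickness profile; the real model integrates the shallow-water equations."""
    h, U, C = params
    return C * h * np.exp(-x_grid / (5.0 * U)) + 0.5 * C * np.exp(-x_grid / h)

reference = np.array([2.0, 5.0, 0.01])
observed = forward(reference)

def misfit(params):
    if np.any(np.asarray(params) <= 0):
        return 1e12                     # penalize non-physical (negative) parameters
    return np.sum((forward(params) - observed) ** 2)

# Multi-point start: restart the simplex from several deviated initial guesses.
for start in ([1.0, 2.0, 0.05], [4.0, 8.0, 0.002], [0.5, 10.0, 0.02]):
    res = minimize(misfit, start, method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-12, "maxiter": 5000})
    print("start", start, "-> estimate", np.round(res.x, 4), " misfit %.2e" % res.fun)
```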
NASA Astrophysics Data System (ADS)
Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain
2018-03-01
The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
Elucidating determinants of aerosol composition through particle-type-based receptor modeling
NASA Astrophysics Data System (ADS)
McGuire, M. L.; Jeong, C.-H.; Slowik, J. G.; Chang, R. Y.-W.; Corbin, J. C.; Lu, G.; Mihele, C.; Rehbein, P. J. G.; Sills, D. M. L.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.
2011-03-01
An aerosol time-of-flight mass spectrometer (ATOFMS) was deployed at a semi-rural site in Southern Ontario to characterize the size and chemical composition of individual particles. Particle-type-based receptor modelling of these data was used to investigate the determinants of aerosol chemical composition in this region. Individual particles were classified into particle-types and positive matrix factorization (PMF) was applied to their temporal trends to separate and cross-apportion particle-types to factors. The extent of chemical processing for each factor was assessed by evaluating the internal and external mixing state of the characteristic particle-types. The nine factors identified helped to elucidate the coupled interactions of these determinants. Nitrate-laden dust was found to be the dominant type of locally emitted particles measured by ATOFMS. Several factors associated with aerosol transported to the site from intermediate local-to-regional distances were identified: the Organic factor was associated with a combustion source to the north-west; the ECOC Day factor was characterized by nearby local-to-regional carbonaceous emissions transported from the south-west during the daytime; and the Fireworks factor consisted of pyrotechnic particles from the Detroit region following holiday fireworks displays. Regional aerosol from farther emissions sources were reflected through three factors: two biomass burning factors and a highly chemically processed long range transport factor. The biomass burning factors were separated by PMF due to differences in chemical processing which were caused in part by the passage of two thunderstorm gust fronts with different air mass histories. The remaining two factors, ECOC Night and Nitrate Background, represented the night-time partitioning of nitrate to pre-existing particles of different origins. The distinct meteorological conditions observed during this month-long study in the summer of 2007 provided a unique range of temporal variability, enabling the elucidation of the determinants of aerosol chemical composition, including source emissions, chemical processing, and transport, at the Canada-US border. This paper presents the first study to characterize the coupled influences of these determinants on temporal variability in aerosol chemical composition using single particle-type-based receptor modelling.
Elucidating determinants of aerosol composition through particle-type-based receptor modeling
NASA Astrophysics Data System (ADS)
McGuire, M. L.; Jeong, C.-H.; Slowik, J. G.; Chang, R. Y.-W.; Corbin, J. C.; Lu, G.; Mihele, C.; Rehbein, P. J. G.; Sills, D. M. L.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.
2011-08-01
An aerosol time-of-flight mass spectrometer (ATOFMS) was deployed at a semi-rural site in southern Ontario to characterize the size and chemical composition of individual particles. Particle-type-based receptor modelling of these data was used to investigate the determinants of aerosol chemical composition in this region. Individual particles were classified into particle-types and positive matrix factorization (PMF) was applied to their temporal trends to separate and cross-apportion particle-types to factors. The extent of chemical processing for each factor was assessed by evaluating the internal and external mixing state of the characteristic particle-types. The nine factors identified helped to elucidate the coupled interactions of these determinants. Nitrate-laden dust was found to be the dominant type of locally emitted particles measured by ATOFMS. Several factors associated with aerosol transported to the site from intermediate local-to-regional distances were identified: the Organic factor was associated with a combustion source to the north-west; the ECOC Day factor was characterized by nearby local-to-regional carbonaceous emissions transported from the south-west during the daytime; and the Fireworks factor consisted of pyrotechnic particles from the Detroit region following holiday fireworks displays. Regional aerosol from farther emissions sources was reflected through three factors: two Biomass Burning factors and a highly chemically processed Long Range Transport factor. The Biomass Burning factors were separated by PMF due to differences in chemical processing which were in part elucidated by the passage of two thunderstorm gust fronts with different air mass histories. The remaining two factors, ECOC Night and Nitrate Background, represented the night-time partitioning of nitrate to pre-existing particles of different origins. The distinct meteorological conditions observed during this month-long study in the summer of 2007 provided a unique range of temporal variability, enabling the elucidation of the determinants of aerosol chemical composition, including source emissions, chemical processing, and transport, at the Canada-US border. This paper presents the first study to elucidate the coupled influences of these determinants on temporal variability in aerosol chemical composition using single particle-type-based receptor modelling.
McMullin, Brian T; Leung, Ming-Ying; Shanbhag, Arun S; McNulty, Donald; Mabrey, Jay D; Agrawal, C Mauli
2006-02-01
A total of 750 images of individual ultra-high molecular weight polyethylene (UHMWPE) particles isolated from periprosthetic failed hip, knee, and shoulder arthroplasties were extracted from archival scanning electron micrographs. Particle size and morphology were subsequently analyzed using computerized image analysis software utilizing five descriptors found in ASTM F1877-98, a standard for quantitative description of wear debris. An online survey application was developed to display particle images, and allowed ten respondents to classify particle morphologies according to commonly used terminology as fibers, flakes, or granules. Particles were categorized based on a simple majority of responses. All descriptors were evaluated using a one-way ANOVA and Tukey-Kramer test for all-pairs comparison among each class of particles. A logistic regression model using half of the particles included in the survey was then used to develop a mathematical scheme to predict whether a given particle should be classified as a fiber, flake, or granule based on its quantitative measurements. The validity of the model was then assessed using the other half of the survey particles and compared with human responses. Comparison of the quantitative measurements of isolated particles showed that the morphologies of each particle type classified by respondents were statistically different from one another (p<0.05). The average agreement between mathematical prediction and human respondents was 83.5% (standard error 0.16%). These data suggest that computerized descriptors can be feasibly correlated with subjective terminology, thus providing a basis for a common vocabulary for particle description which can be translated into quantitative dimensions.
McMullin, Brian T.; Leung, Ming-Ying; Shanbhag, Arun S.; McNulty, Donald; Mabrey, Jay D.; Agrawal, C. Mauli
2014-01-01
A total of 750 images of individual ultra-high molecular weight polyethylene (UHMWPE) particles isolated from periprosthetic failed hip, knee, and shoulder arthroplasties were extracted from archival scanning electron micrographs. Particle size and morphology were subsequently analyzed using computerized image analysis software utilizing five descriptors found in ASTM F1877-98, a standard for quantitative description of wear debris. An online survey application was developed to display particle images, and allowed ten respondents to classify particle morphologies according to commonly used terminology as fibers, flakes, or granules. Particles were categorized based on a simple majority of responses. All descriptors were evaluated using a one-way ANOVA and Tukey–Kramer test for all-pairs comparison among each class of particles. A logistic regression model using half of the particles included in the survey was then used to develop a mathematical scheme to predict whether a given particle should be classified as a fiber, flake, or granule based on its quantitative measurements. The validity of the model was then assessed using the other half of the survey particles and compared with human responses. Comparison of the quantitative measurements of isolated particles showed that the morphologies of each particle type classified by respondents were statistically different from one another (p<0.05). The average agreement between mathematical prediction and human respondents was 83.5% (standard error 0.16%). These data suggest that computerized descriptors can be feasibly correlated with subjective terminology, thus providing a basis for a common vocabulary for particle description which can be translated into quantitative dimensions. PMID:16112725
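A minimal sketch of the classification step: a multinomial logistic regression trained on half of the data and validated on the other half, here with synthetic shape descriptors and labels standing in for the ASTM F1877 measurements and survey responses.

```python
# Minimal sketch (synthetic descriptors and labels): a multinomial logistic
# regression mapping quantitative shape descriptors to fiber/flake/granule
# classes, trained on half the data and validated on the other half.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic descriptors (stand-ins for ASTM F1877 metrics): aspect ratio, roundness, form factor.
aspect = np.concatenate([rng.normal(6.0, 1.5, 250), rng.normal(2.5, 0.6, 250), rng.normal(1.3, 0.2, 250)])
roundness = np.concatenate([rng.normal(0.2, 0.05, 250), rng.normal(0.5, 0.1, 250), rng.normal(0.8, 0.1, 250)])
form = np.concatenate([rng.normal(0.3, 0.1, 250), rng.normal(0.6, 0.1, 250), rng.normal(0.85, 0.05, 250)])
X = np.column_stack([aspect, roundness, form])
y = np.repeat(["fiber", "flake", "granule"], 250)    # survey-majority labels (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("agreement with held-out labels: %.1f%%" % (100 * clf.score(X_test, y_test)))
```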
Exact simulation of polarized light reflectance by particle deposits
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D. W.
2015-12-01
The use of polarimetric light reflection measurements as a means of identifying the physical and chemical characteristics of particulate materials obviously relies on an accurate model of predicting the effects of particle size, shape, concentration, and refractive index on polarized reflection. The research examines two methods for prediction of reflection from plane parallel layers of wavelength-sized particles. The first method is based on an exact superposition solution to Maxwell's time harmonic wave equations for a deposit of spherical particles that are exposed to a plane incident wave. We use a FORTRAN-90 implementation of this solution (the Multiple Sphere T Matrix (MSTM) code), coupled with parallel computational platforms, to directly simulate the reflection from particle layers. The second method examined is based upon the vector radiative transport equation (RTE). Mie theory is used in our RTE model to predict the extinction coefficient, albedo, and scattering phase function of the particles, and the solution of the RTE is obtained from the adding-doubling method applied to a plane-parallel configuration. Our results show that the MSTM and RTE predictions of the Mueller matrix elements converge when particle volume fraction in the particle layer decreases below around five percent. At higher volume fractions the RTE can yield results that, depending on the particle size and refractive index, significantly depart from the exact predictions. The particle regimes which lead to dependent scattering effects, and the application of methods to correct the vector RTE for particle interaction, will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, M. R. I., E-mail: islamrabiul@yahoo.com; Saha, Manabendra, E-mail: manabendra.saha@adelaide.edu.au, E-mail: manab04me@gmail.com; Beg, R. A.
A recirculating flow solar particle cavity absorber (receiver) is modeled to investigate the flow behavior and heat transfer characteristics of a novel developing concept. It features a continuous recirculating flow of non-reacting metallic particles (black silicon carbide) with air, which are used as a thermal enhancement medium. The aim of the present study is to numerically investigate the thermal behavior and flow characteristics of the proposed concept. The proposed solar particle receiver is modeled using a two-phase discrete particle model (DPM), an RNG k-ε flow model and a discrete ordinate (DO) radiation model. Numerical analysis is carried out considering a solar receiver with only air and with a mixture of non-reacting particles and air as the heat transfer and heat carrying medium. The parametric investigation is conducted considering the incident solar flux on the receiver aperture and changing the air flow rate and recirculation rate inside the receiver. A stand-alone feature of the recirculating flow solar particle receiver concept is that the particles are continuously exposed to concentrated solar radiation through the recirculating flow inside the receiver, resulting in efficient irradiation absorption and convective heat transfer to the air, which helps to achieve high air temperatures and consequently an increase in thermal efficiency. This paper presents results from the developed concept and highlights its flow behavior and potential to enhance the heat transfer from metallic particles to air by maximizing the heat carrying capacity of the heat transfer medium. The imposed milestones for the present system will be helpful in understanding the radiation absorption mechanism of the particles in a recirculating flow based receiver, the thermal transport between the particles, the air and the cavity, and the fluid dynamics of the air and particles in the cavity.
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
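A minimal sketch of the elementary operation behind such an analysis: maximum-likelihood localisation of a single step in Gaussian noise. The full method adds an information-criterion test to decide how many change points to accept, which is not reproduced here.

```python
# Minimal sketch (Gaussian noise, single change point): maximum-likelihood
# localisation of one step in a noisy signal. Minimising the summed
# within-segment variance is equivalent to maximising the Gaussian likelihood
# with segment-wise means.
import numpy as np

rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])

def best_change_point(x):
    """Return the split index minimising the summed within-segment variance."""
    n = len(x)
    costs = np.full(n, np.inf)
    for k in range(2, n - 2):
        left, right = x[:k], x[k:]
        costs[k] = len(left) * left.var() + len(right) * right.var()
    return int(np.argmin(costs))

print("estimated change point:", best_change_point(signal), "(true: 120)")
```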
NASA Astrophysics Data System (ADS)
Couvidat, F.; Sartelet, K.
2014-01-01
The Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model is designed to be modular with different user options depending on the computing time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption on the aqueous phase of particles, activity coefficients, phase separation). Each surrogate can be hydrophilic (condenses only on the aqueous phase of particles), hydrophobic (condenses only on the organic phase of particles) or both (condenses on both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC thermodynamic model for short-range interactions and with the AIOMFAC parameterization for medium and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium and a dynamic representation of the organic aerosol. In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol (OA) is not at equilibrium with the gas phase because the organic phase could be semi-solid (very viscous liquid phase). The condensation or evaporation of organic compounds could then be limited by the diffusion in the organic phase due to the high viscosity. A dynamic representation of secondary organic aerosols (SOA) is used with OA divided into layers, the first layer at the center of the particle (slowly reaches equilibrium) and the final layer near the interface with the gas phase (quickly reaches equilibrium).
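A minimal sketch of the equilibrium representation described above, iterating the standard absorptive-partitioning relation to a self-consistent organic aerosol mass; the surrogate saturation concentrations and total concentrations below are illustrative values, not SOAP's actual species.

```python
# Minimal sketch of equilibrium gas/particle partitioning using the standard
# absorptive-partitioning relation F_p = C_OA / (C_OA + C*), where C* is the
# effective saturation concentration of each surrogate (illustrative values).
import numpy as np

C_star = np.array([0.1, 1.0, 10.0, 100.0])    # effective saturation conc. [ug/m3]
C_total = np.array([0.5, 1.2, 2.0, 3.0])      # total (gas + particle) conc. [ug/m3]

def equilibrium_oa(C_total, C_star, tol=1e-10):
    """Iterate the partitioning equations to a self-consistent organic aerosol mass."""
    C_OA = C_total.sum()                       # initial guess: everything condensed
    for _ in range(200):
        F_p = C_OA / (C_OA + C_star)           # particle-phase fraction per surrogate
        C_OA_new = np.sum(F_p * C_total)
        if abs(C_OA_new - C_OA) < tol:
            break
        C_OA = C_OA_new
    return C_OA, F_p

C_OA, F_p = equilibrium_oa(C_total, C_star)
print("organic aerosol mass: %.2f ug/m3" % C_OA)
print("particle-phase fractions:", np.round(F_p, 2))
```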
A new dynamical atmospheric ionizing radiation (AIR) model for epidemiological studies
NASA Technical Reports Server (NTRS)
De Angelis, G.; Clem, J. M.; Goldhagen, P. E.; Wilson, J. W.
2003-01-01
A new Atmospheric Ionizing Radiation (AIR) model is currently being developed for use in radiation dose evaluation in epidemiological studies targeted to atmospheric flight personnel such as civilian airlines crewmembers. The model will allow computing values for biologically relevant parameters, e.g. dose equivalent and effective dose, for individual flights from 1945. Each flight is described by its actual three dimensional flight profile, i.e. geographic coordinates and altitudes varying with time. Solar modulated primary particles are filtered with a new analytical fully angular dependent geomagnetic cut off rigidity model, as a function of latitude, longitude, arrival direction, altitude and time. The particle transport results have been obtained with a technique based on the three-dimensional Monte Carlo transport code FLUKA, with a special procedure to deal with HZE particles. Particle fluxes are transformed into dose-related quantities and then integrated all along the flight path to obtain the overall flight dose. Preliminary validations of the particle transport technique using data from the AIR Project ER-2 flight campaign of measurements are encouraging. Future efforts will deal with modeling of the effects of the aircraft structure as well as inclusion of solar particle events. Published by Elsevier Ltd on behalf of COSPAR.
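The final step described above, integrating dose-related quantities along the three-dimensional flight profile, can be sketched as a simple time integration. The dose-rate function below is a placeholder; a real AIR-type calculation would evaluate transported particle spectra and fluence-to-dose conversion coefficients at each point of the profile.

```python
# Hedged sketch: integrate an effective dose rate along a flight profile.
# dose_rate() is an invented stand-in, NOT a physical model.
import numpy as np

def dose_rate(lat_deg, lon_deg, alt_km, t_hours):
    """Placeholder effective dose rate in uSv/h (crude latitude/altitude proxy)."""
    geomag = 1.0 + 0.5 * np.cos(np.radians(lat_deg))   # lower cutoff at high latitude
    return 0.5 * (alt_km / 10.0) ** 2 / geomag

def flight_dose(profile):
    """profile: array of (t_hours, lat, lon, alt_km) samples along the flight."""
    t = profile[:, 0]
    rates = np.array([dose_rate(la, lo, al, ti) for ti, la, lo, al in profile])
    return float(np.sum(0.5 * (rates[1:] + rates[:-1]) * np.diff(t)))  # trapezoid, uSv

profile = np.array([[0.0, 50.0,   0.0,  0.2],
                    [1.0, 55.0, -20.0, 10.5],
                    [5.0, 60.0, -50.0, 11.0],
                    [7.0, 45.0, -73.0,  0.2]])
print(flight_dose(profile))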
Modeling of Particle Emission During Dry Orthogonal Cutting
NASA Astrophysics Data System (ADS)
Khettabi, Riad; Songmene, Victor; Zaghbani, Imed; Masounave, Jacques
2010-08-01
Because of the risks associated with exposure to metallic particles, efforts are being put into controlling and reducing them during the metal working process. Recent studies by the authors involved in this project have presented the effects of cutting speeds, workpiece material, and tool geometry on particle emission during dry machining; the authors have also proposed a new parameter, named the dust unit ( D u), for use in evaluating the quantity of particle emissions relative to the quantity of chips produced during a machining operation. In this study, a model for predicting the particle emission (dust unit) during orthogonal turning is proposed. This model, which is based on the energy approach combined with the microfriction and the plastic deformation of the material, takes into account the tool geometry, the properties of the worked material, the cutting conditions, and the chip segmentation. The model is validated using experimental results obtained during the orthogonal turning of 6061-T6 aluminum alloy, AISI 1018, AISI 4140 steels, and grey cast iron. A good agreement was found with experimental results. This model can help in designing strategies for reducing particle emission during machining processes, at the source.
NASA Astrophysics Data System (ADS)
Nar, Sevda Yeliz; Cakir, Altan
2018-02-01
Particles produced by nuclear decay, cosmic radiation and reactions can be identified through various methods. One of these methods, which has been effective over the last century, is the cloud chamber. The chamber makes visible the cosmic particles whose radiation we are exposed to every second. The diffusion cloud chamber is a kind of cloud chamber that is cooled by dry ice. This traditional design has some practical difficulties. In this work, a Peltier-based cloud chamber cooled by thermoelectric modules is studied. The new design provides a uniformly cooled chamber base and, moreover, has a longer lifetime than the traditional chamber in terms of observation time. This gain reduces the cost incurred each time cosmic particles are observed. The chamber is an easy-to-use system compared with the traditional diffusion cloud chamber. The new design is portable, easier to build, and can be used in nuclear physics experiments. In addition, it would be very useful for observing muons, which provide direct evidence for the Lorentz contraction and time dilation predicted by Einstein's special relativity.
Symmetry breaking in occupation number based slave-particle methods
NASA Astrophysics Data System (ADS)
Georgescu, Alexandru B.; Ismail-Beigi, Sohrab
2017-10-01
We describe a theoretical approach to finding spontaneously symmetry-broken electronic phases due to strong electronic interactions when using recently developed slave-particle (slave-boson) approaches based on occupation numbers. We describe why, to date, spontaneous symmetry breaking has proven difficult to achieve in such approaches. We then provide a total energy based approach for introducing auxiliary symmetry-breaking fields into the solution of the slave-particle problem that leads to lowered total energies for symmetry-broken phases. We point out that not all slave-particle approaches yield energy lowering: the slave-particle model being used must explicitly describe the degrees of freedom that break symmetry. Finally, our total energy approach permits us to greatly simplify the formalism used to achieve a self-consistent solution between spinon and slave modes while increasing the numerical stability and greatly speeding up the calculations.
Optical modeling of volcanic ash particles using ellipsoids
NASA Astrophysics Data System (ADS)
Merikallio, Sini; Muñoz, Olga; Sundström, Anu-Maija; Virtanen, Timo H.; Horttanainen, Matti; de Leeuw, Gerrit; Nousiainen, Timo
2015-05-01
The single-scattering properties of volcanic ash particles are modeled here by using ellipsoidal shapes. Ellipsoids are expected to improve the accuracy of the retrieval of aerosol properties using remote sensing techniques, which are currently often based on oversimplified assumptions of spherical ash particles. Measurements of the single-scattering optical properties of ash particles from several volcanoes across the globe, including previously unpublished measurements from the Eyjafjallajökull and Puyehue volcanoes, are used to assess the performance of the ellipsoidal particle models. These comparisons between the measurements and the ellipsoidal particle model include consideration of the whole scattering matrix, as well as sensitivity studies from the point of view of the Advanced Along Track Scanning Radiometer (AATSR) instrument. AATSR, which flew on the ENVISAT satellite, offers two viewing directions but no information on polarization, so usually only the phase function is relevant for interpreting its measurements. As expected, ensembles of ellipsoids are able to reproduce the observed scattering matrix more faithfully than spheres. The performance of ellipsoid ensembles depends on the distribution of particle shapes, which we sought to optimize. No single shape distribution could be found that performs best in all situations, but all of the best-fit ellipsoidal distributions, as well as the additionally tested equiprobable distribution, improved greatly over the performance of spheres. We conclude that an equiprobable shape distribution of ellipsoidal model particles is a relatively good, yet enticingly simple, approach for modeling volcanic ash single-scattering optical properties.
Particle Interactions Mediated by Dynamical Networks: Assessment of Macroscopic Descriptions
NASA Astrophysics Data System (ADS)
Barré, J.; Carrillo, J. A.; Degond, P.; Peurichard, D.; Zatorska, E.
2018-02-01
We provide a numerical study of the macroscopic model of Barré et al. (Multiscale Model Simul, 2017, to appear) derived from an agent-based model for a system of particles interacting through a dynamical network of links. Assuming that the network remodeling process is very fast, the macroscopic model takes the form of a single aggregation-diffusion equation for the density of particles. The theoretical study of the macroscopic model gives precise criteria for the phase transitions of the steady states, and in the one-dimensional case, we show numerically that the stationary solutions of the microscopic model undergo the same phase transitions and bifurcation types as the macroscopic model. In the two-dimensional case, we show that the numerical simulations of the macroscopic model are in excellent agreement with the predicted theoretical values. This study provides a partial validation of the formal derivation of the macroscopic model from a microscopic formulation and shows that the former is a consistent approximation of an underlying particle dynamics, making it a powerful tool for the modeling of dynamical networks at a large scale.
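A minimal numerical sketch of the type of equation referred to above, a single aggregation-diffusion equation on a periodic one-dimensional domain with an assumed attractive kernel, is given below. The kernel, parameters, and discretization are illustrative only and are not taken from Barré et al.

```python
import numpy as np

# Hedged 1D sketch of an aggregation-diffusion equation of the kind mentioned
# above: rho_t = d * rho_xx + (rho * (W * rho)_x)_x on a periodic domain,
# with an invented Gaussian attraction kernel W.
L, N, dt, steps, d = 10.0, 200, 1e-4, 20000, 0.5
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]
rho = 1.0 + 0.1 * np.cos(2 * np.pi * x / L)        # perturbed uniform state

r = np.abs(x[:, None] - x[None, :])                # periodic pair distances
r = np.minimum(r, L - r)
W = -np.exp(-r**2 / 2)                             # attraction potential W(r)

def grad(f):                                       # centered periodic derivative
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

for _ in range(steps):
    V = W @ rho * dx                               # convolution (W * rho)(x)
    flux = -d * grad(rho) - rho * grad(V)          # diffusive + aggregation flux
    rho = rho - dt * grad(flux)                    # conservative explicit update

# For these parameters attraction beats diffusion, so the perturbation grows.
print(rho.min(), rho.max(), rho.mean())
```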
A discrete model of Ostwald ripening based on multiple pairwise interactions
NASA Astrophysics Data System (ADS)
Di Nunzio, Paolo Emilio
2018-06-01
A discrete multi-particle model of Ostwald ripening based on direct pairwise interactions is developed for particles with incoherent interfaces as an alternative to the classical LSW mean field theory. The rate of matter exchange depends on the average surface-to-surface interparticle distance, a characteristic feature of the system which naturally incorporates the effect of volume fraction of second phase. The multi-particle diffusion is described through the definition of an interaction volume containing all the particles involved in the exchange of solute. At small volume fractions this is proportional to the size of the central particle, at higher volume fractions it gradually reduces as a consequence of diffusion screening described on a geometrical basis. The topological noise present in real systems is also included. For volume fractions below about 0.1 the model predicts broad and right-skewed stationary size distributions resembling a lognormal function. Above this value, a transition to sharper, more symmetrical but still right-skewed shapes occurs. An excellent agreement with experiments is obtained for 3D particle size distributions of solid-solid and solid-liquid systems with volume fraction 0.07, 0.30, 0.52 and 0.74. The kinetic constant of the model depends on the cube root of volume fraction up to about 0.1, then increases rapidly with an upward concavity. It is in good agreement with the available literature data on solid-liquid mixtures in the volume fraction range from 0.20 to about 0.75.
Mean field dynamics of some open quantum systems
NASA Astrophysics Data System (ADS)
Merkli, Marco; Rafiyi, Alireza
2018-04-01
We consider a large number N of quantum particles coupled via a mean field interaction to another quantum system (reservoir). Our main result is an expansion for the averages of observables, both of the particles and of the reservoir, in inverse powers of √N. The analysis is based directly on the Dyson series expansion of the propagator. We analyse the dynamics, in the limit N → ∞, of observables of a fixed number n of particles, of extensive particle observables and their fluctuations, as well as of reservoir observables. We illustrate our results on the infinite mode Dicke model and on various energy-conserving models.
Design of Particulate-Reinforced Composite Materials
Muc, Aleksander; Barski, Marek
2018-01-01
A microstructure-based model is developed to study the effective anisotropic properties (magnetic, dielectric or thermal) of two-phase particle-filled composites. The Green’s function technique and the effective field method are used to theoretically derive the homogenized (averaged) properties for a representative volume element containing isolated inclusion and infinite, chain-structured particles. Those results are compared with the finite element approximations conducted for the assumed representative volume element. In addition, the Maxwell–Garnett model is retrieved as a special case when particle interactions are not considered. We also give some information on the optimal design of the effective anisotropic properties taking into account the shape of magnetic particles. PMID:29401678
NASA Astrophysics Data System (ADS)
Matoušek, Václav; Kesely, Mikoláš; Vlasák, Pavel
2018-06-01
The deposition velocity is an important operation parameter in hydraulic transport of solid particles in pipelines. It represents flow velocity at which transported particles start to settle out at the bottom of the pipe and are no longer transported. A number of predictive models has been developed to determine this threshold velocity for slurry flows of different solids fractions (fractions of different grain size and density). Most of the models consider flow in a horizontal pipe only, modelling approaches for inclined flows are extremely scarce due partially to a lack of experimental information about the effect of pipe inclination on the slurry flow pattern and behaviour. We survey different approaches to modelling of particle deposition in flowing slurry and discuss mechanisms on which deposition-limit models are based. Furthermore, we analyse possibilities to incorporate the effect of flow inclination into the predictive models and select the most appropriate ones based on their ability to modify the modelled deposition mechanisms to conditions associated with the flow inclination. A usefulness of the selected modelling approaches and their modifications are demonstrated by comparing model predictions with experimental results for inclined slurry flows from our own laboratory and from the literature.
Ion-acoustic shocks with reflected ions: modelling and particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Liseykina, T. V.; Dudnikova, G. I.; Vshivkov, V. A.; Malkov, M. A.
2015-10-01
Non-relativistic collisionless shock waves are widespread in space and astrophysical plasmas and are known as efficient particle accelerators. However, our understanding of collisionless shocks, including their structure and the mechanisms whereby they accelerate particles, remains incomplete. We present here the results of numerical modelling of an ion-acoustic collisionless shock based on the one-dimensional kinetic approximation for both electrons and ions with a real mass ratio. Special emphasis is paid to the shock-reflected ions as the main driver of shock dissipation. The reflection efficiency, the velocity distribution of reflected particles and the shock electrostatic structure are studied in terms of the shock parameters. Applications to particle acceleration in geophysical and astrophysical shocks are discussed.
Higgs Boson: god particle or divine comedy?
NASA Astrophysics Data System (ADS)
Rangacharyulu, Chary
2013-10-01
While particle physicists around the world rejoice at the announcement of the discovery of the Higgs particle as a momentous event, it is also an opportune moment to assess the physicists' conception of nature. Particle theorists, in their ingenious efforts to unravel mysteries of the physical universe at a very fundamental level, resort to the macroscopic many-body theoretical methods of solid state physicists. Their efforts render the universe a superconductor of correlated quasi-particle pairs. Experimentalists, devoted to ascertaining the elementary constituents and symmetries, depend heavily on numerical simulations based on those models and conform to theoretical slang in the planning and interpretation of measurements, to the extent that the boundaries between theory/modeling and experiment are blurred. Is it possible that they are meandering in Dante's Inferno?
Kinetics and Mechanisms of γ′ Reprecipitation in a Ni-based Superalloy
Masoumi, F.; Shahriari, D.; Jahazi, M.; Cormier, J.; Devaux, A.
2016-01-01
The reprecipitation mechanisms and kinetics of γ′ particles during cooling from supersolvus and subsolvus temperatures were studied in AD730TM Ni-based superalloy using Differential Thermal Analysis (DTA). The evolution in the morphology and distribution of reprecipitated γ′ particles was investigated using Field Emission Gun Scanning Electron Microscopy (FEG-SEM). Depending on the cooling rate, γ′ particles showed multi or monomodal distribution. The irregularity growth characteristics observed at lower cooling rates were analyzed in the context of Mullins and Sekerka theory, and allowed the determination of a critical size of γ′ particles above which morphological instability appears. Precipitation kinetics parameters were determined using a non-isothermal JMA model and DTA data. The Avrami exponent was determined to be in the 1.5–2.3 range, suggesting spherical or irregular growth. A methodology was developed to take into account the temperature dependence of the rate coefficient k(T) in the non-isothermal JMA equation. In that regard, a function for k(T) was developed. Based on the results obtained, reprecipitation kinetics models for low and high cooling rates are proposed to quantify and predict the volume fraction of reprecipitated γ′ particles during the cooling process. PMID:27338868
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During the last decades, a few space experiments have revealed anomalous bursts of charged particles, mainly electrons with energies larger than a few MeV. A possible source of these bursts is the low-frequency seismo-electromagnetic emissions, which can cause the precipitation of electrons from the lower boundary of the inner radiation belt. Studies of these bursts have also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high energy physics, we developed a dedicated application, SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 toolkit and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between an earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of testing different models of interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction as well as shaping the area in which it takes place, assessing the effects of perturbations in the magnetic field on the particle paths, performing back-tracking analysis, and modelling the interaction with electric fields. SEPS is at an advanced development stage, so that it could already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing from the position of the earthquake and checking whether there were paths compatible with the detected burst) and with a back-tracking analysis (tracing from the burst detection point and checking the compatibility with the position of the associated earthquake).
Flocking from a quantum analogy: spin-orbit coupling in an active fluid
NASA Astrophysics Data System (ADS)
Loewe, Benjamin; Souslov, Anton; Goldbart, Paul M.
2018-01-01
Systems composed of strongly interacting self-propelled particles can form a spontaneously flowing polar active fluid. The study of the connection between the microscopic dynamics of a single such particle and the macroscopic dynamics of the fluid can yield insights into experimentally realizable active flows, but this connection is well understood in only a few select cases. We introduce a model of self-propelled particles based on an analogy with the motion of electrons that have strong spin-orbit coupling. We find that, within our model, self-propelled particles are subject to an analog of the Heisenberg uncertainty principle that relates translational and rotational noise. Furthermore, by coarse-graining this microscopic model, we establish expressions for the coefficients of the Toner-Tu equations—the hydrodynamic equations that describe an active fluid composed of these ‘active spins.’ The connection between stochastic self-propelled particles and quantum particles with spin may help realize exotic phases of matter using active fluids via analogies with systems composed of strongly correlated electrons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Po-Yen; Chen, Liu; Institute for Fusion Theory and Simulation, Zhejiang University, 310027 Hangzhou
2015-09-15
The thermal relaxation time of a one-dimensional plasma has been demonstrated to scale with N_D^2 due to discrete particle effects by collisionless particle-in-cell (PIC) simulations, where N_D is the particle number in a Debye length. The N_D^2 scaling is consistent with the theoretical analysis based on the Balescu-Lenard-Landau kinetic equation. However, it was found that the thermal relaxation time is anomalously shortened to scale with N_D while externally introducing the Krook type collision model in the one-dimensional electrostatic PIC simulation. In order to understand the discrete particle effects enhanced by the Krook type collision model, the superposition principle of dressed test particles was applied to derive the modified Balescu-Lenard-Landau kinetic equation. The theoretical results are shown to be in good agreement with the simulation results when the collisional effects dominate the plasma system.
Use of mucolytics to enhance magnetic particle retention at a model airway surface
NASA Astrophysics Data System (ADS)
Ally, Javed; Roa, Wilson; Amirfazli, A.
A previous study has shown that retention of magnetic particles at a model airway surface requires prohibitively strong magnetic fields. As mucus viscoelasticity is the most significant factor contributing to clearance of magnetic particles from the airway surface, mucolytics are considered in this study to reduce mucus viscoelasticity and enable particle retention with moderate strength magnetic fields. The excised frog palate model was used to simulate the airway surface. Two mucolytics, N-acetylcysteine (NAC) and dextran sulfate (DS) were tested. NAC was found to enable retention at moderate field values (148 mT with a gradient of 10.2 T/m), whereas DS was found to be effective only for sufficiently large particle concentrations at the airway surface. The possible mechanisms for the observed behavior with different mucolytics are also discussed based on aggregate formation and the loading of cilia.
Semianalytical computation of path lines for finite-difference models
Pollock, D.W.
1988-01-01
A semianalytical particle tracking method was developed for use with velocities generated from block-centered finite-difference ground-water flow models. Based on the assumption that each directional velocity component varies linearly within a grid cell in its own coordinate direction, the method allows an analytical expression to be obtained describing the flow path within an individual grid cell. Given the initial position of a particle anywhere in a cell, the coordinates of any other point along its path line within the cell, and the time of travel between them, can be computed directly. For steady-state systems, the exit point for a particle entering a cell at any arbitrary location can be computed in a single step. By following the particle as it moves from cell to cell, this method can be used to trace the path of a particle through any multidimensional flow field generated from a block-centered finite-difference flow model. -Author
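The per-cell analytical step described above can be written down compactly. The sketch below implements the linear-velocity exit-time logic for a single rectangular 2D cell; the variable names and the cell data structure are invented, and the handling of degenerate cases is simplified relative to a production particle tracker.

```python
import math

def axis_exit_time(xp, x1, x2, v1, v2):
    """Travel time for a particle at xp to exit the cell along one axis,
    given face velocities v1 (at x1) and v2 (at x2) that vary linearly in x."""
    A = (v2 - v1) / (x2 - x1)
    vp = v1 + A * (xp - x1)
    if abs(A) < 1e-12:                        # effectively uniform velocity
        if vp > 0:  return (x2 - xp) / vp
        if vp < 0:  return (x1 - xp) / vp
        return math.inf
    if vp > 0 and v2 > 0:  return math.log(v2 / vp) / A   # exits through x2 face
    if vp < 0 and v1 < 0:  return math.log(v1 / vp) / A   # exits through x1 face
    return math.inf                           # stagnation: cannot leave on this axis

def move_through_cell(p, cell):
    """Advance particle p = (x, y) to its exit point of a rectangular cell.
    cell holds bounds (x1, x2, y1, y2) and face-normal velocities (vx1, vx2, vy1, vy2)."""
    tx = axis_exit_time(p[0], cell['x1'], cell['x2'], cell['vx1'], cell['vx2'])
    ty = axis_exit_time(p[1], cell['y1'], cell['y2'], cell['vy1'], cell['vy2'])
    t = min(tx, ty)

    def advance(xp, x1, x2, v1, v2, t):
        A = (v2 - v1) / (x2 - x1)
        vp = v1 + A * (xp - x1)
        if abs(A) < 1e-12:
            return xp + vp * t
        return x1 + (vp * math.exp(A * t) - v1) / A

    exit_pt = (advance(p[0], cell['x1'], cell['x2'], cell['vx1'], cell['vx2'], t),
               advance(p[1], cell['y1'], cell['y2'], cell['vy1'], cell['vy2'], t))
    return exit_pt, t

cell = dict(x1=0.0, x2=1.0, y1=0.0, y2=1.0, vx1=1.0, vx2=2.0, vy1=0.5, vy2=0.25)
print(move_through_cell((0.2, 0.3), cell))    # particle exits the x2 face
```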
The role of particle collisions in pneumatic transport
NASA Technical Reports Server (NTRS)
Mastorakos, E.; Louge, M.; Jenkins, J. T.
1989-01-01
A model of dilute gas-solid flow in vertical risers is developed in which the particle phase is treated as a granular material, the balance equations for rapid granular flow are modified to incorporate the drag force from the gas, and boundary conditions, based on collisional exchanges of momentum and energy at the wall, are employed. In this model, it is assumed that the particle fluctuations are determined by inter-particle collisions only and that the turbulence of the gas is unaffected by the presence of the particles. The model is developed in the context of, but not limited to, steady, fully developed flow. A numerical solution of the resulting governing equations provides concentration profiles generally observed in dilute pneumatic flow, velocity profiles in good agreement with the measurements of Tsuji, et al. (1984), and an explanation for the enhancement of turbulence that they observed.
Numerical Study on the Particle Trajectory Tracking in a Micro-UV Bio-Fluorescence Sensor.
Byeon, Sun-Seok; Cho, Moon-Young; Lee, Jong-Chul; Kim, Youn-Jea
2015-03-01
A micro-UV bio-fluorescence sensor was developed to detect primary biological aerosols including bacteria, bacterial spores, fungal spores, pollens, viruses, algae, etc. In order to effectively detect the bio-particles in a micro-UV bio-fluorescence sensor, numerical calculations were performed to adjust for appropriate flow conditions of the sensor by regulating the sample aerosols and sheath flow. In particular, a CFD-based model of hydrodynamic processes was developed by computing the trajectory of particles using commercially available ANSYS CFX-14 software and the Lagrangian tracking model. The established model was evaluated with regard to the variation of sheath flow rate and particle size. Results showed that the sheath flow was changed rapidly at the end of nozzle tip, but the sample particles moved near the center of aerosol jet for aerodynamic focusing with little deviation from the axis.
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high-explosive dispersal of 239Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of 239Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
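The estimation scheme described above, adjusting dispersion-model inputs until predicted concentrations match the measurements, can be sketched with a generic nonlinear least-squares fit. The forward model below is a deliberately crude stand-in for the ARAC/ADPIC calculation, and the parameter names (mean aerodynamic diameter, geometric standard deviation, cloud-coupling coefficient) merely mirror those discussed in the abstract; all numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def forward_model(params, receptors):
    """Crude plume-like stand-in for the dispersion model (invented): larger
    particles deplete faster with downwind distance x, sigma_g widens the
    crosswind spread, and 'coupling' scales the effective source strength."""
    d_mean, sigma_g, coupling = params
    x, y = receptors[:, 0], receptors[:, 1]
    spread = sigma_g * (1.0 + 0.05 * x)
    depletion = np.exp(-0.02 * d_mean * x)          # size-dependent settling proxy
    return coupling * depletion * np.exp(-(y / spread) ** 2) / (1.0 + 0.1 * x)

def residuals(params, receptors, observed):
    return np.log(forward_model(params, receptors)) - np.log(observed)

receptors = np.array([[x, y] for x in (1.0, 2.0, 4.0, 8.0) for y in (-1.0, 0.0, 1.0)])
true = np.array([20.0, 2.0, 0.8])                                   # hypothetical "truth"
observed = forward_model(true, receptors) * np.exp(0.1 * rng.normal(size=len(receptors)))

fit = least_squares(residuals, x0=[50.0, 3.0, 0.5], args=(receptors, observed),
                    bounds=([1.0, 1.1, 0.1], [200.0, 5.0, 1.0]))
print(fit.x)   # rough estimates of diameter, geometric std. dev., coupling
```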
NASA Astrophysics Data System (ADS)
Dai, Shengyun; Pan, Xiaoning; Ma, Lijuan; Huang, Xingguo; Du, Chenzhao; Qiao, Yanjiang; Wu, Zhisheng
2018-05-01
Particle size is of great importance for quantitative modeling of NIR diffuse reflectance. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near infrared diffuse reflectance (NIR) spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as a reference method to construct the quantitative particle size model. Several spectral preprocessing methods were compared, and particle size models were obtained with different preprocessing methods when establishing the partial least-squares (PLS) models of harpagoside. The data showed that the 125-150 μm particle size distribution of Radix Scrophulariae exhibited the best prediction ability, with R²pre = 0.9513, RMSEP = 0.1029 mg·g⁻¹, and RPD = 4.78. For the hybrid granularity calibration model, the 90-180 μm particle size distribution exhibited the best prediction ability, with R²pre = 0.8919, RMSEP = 0.1632 mg·g⁻¹, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scattering coefficient s (particle size-dependent). The scattering coefficient s was calculated based on the Kubelka-Munk theory to study the changes in s after mathematical preprocessing. A linear relationship was observed between k/s and absorption A within a certain range, and the value of k/s was greater than 4. According to this relationship, the model was more accurately constructed with the 90-180 μm particle size distribution when s was kept constant or within a small linear region. This region provides a good reference for linear modeling of diffuse reflectance spectroscopy. To establish a diffuse reflectance NIR model, an accurate assessment should therefore be obtained in advance for a precise linear model.
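The Kubelka-Munk relation invoked above connects diffuse reflectance to the ratio of absorption to scattering: for an optically thick sample the remission function is F(R∞) = (1 − R∞)²/(2R∞) = k/s. A small sketch of that bookkeeping, with made-up reflectance values, follows.

```python
import numpy as np

def remission(R_inf):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2 R) = k / s
    for an optically thick, diffusely reflecting sample."""
    R_inf = np.asarray(R_inf, dtype=float)
    return (1.0 - R_inf) ** 2 / (2.0 * R_inf)

# Hypothetical diffuse reflectance values for two particle-size fractions.
R_coarse = np.array([0.45, 0.50, 0.55])
R_fine   = np.array([0.60, 0.65, 0.70])   # finer powders usually scatter more

print(remission(R_coarse))   # larger k/s: absorption relatively stronger
print(remission(R_fine))
```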
The charged particle accelerators subsystems modeling
NASA Astrophysics Data System (ADS)
Averyanov, G. P.; Kobylyatskiy, A. V.
2017-01-01
A web-based resource for information support of engineering, science and education in electrophysics is presented, containing web-based tools for simulating charged particle accelerator subsystems. The motivation for developing a web environment for virtual electrophysical laboratories is formulated. Trends in the design of dynamic web environments supporting scientific research and e-learning within the framework of the Open Education concept are analyzed.
The fine particulate matter (PM) emissions from nine commercial aircraft engine models were determined by plume sampling during the three field campaigns of the Aircraft Particle Emissions Experiment (APEX). Ground-based measurements were made primarily at 30 m behind the engine ...
NASA Technical Reports Server (NTRS)
Bulzan, Daniel L.
1988-01-01
A theoretical and experimental investigation of particle-laden, weakly swirling, turbulent free jets was conducted. Glass particles, having a Sauter mean diameter of 39 microns, with a standard deviation of 15 microns, were used. A single loading ratio (the mass flow rate of particles per unit mass flow rate of air) of 0.2 was used in the experiments. Measurements are reported for three swirl numbers, ranging from 0 to 0.33. The measurements included mean and fluctuating velocities of both phases, and particle mass flux distributions. Measurements were also completed for single-phase non-swirling and swirling jets, as baselines. Measurements were compared with predictions from three types of multiphase flow analysis, as follows: (1) locally homogeneous flow (LHF) where slip between the phases was neglected; (2) deterministic separated flow (DSF), where slip was considered but effects of turbulence/particle interactions were neglected; and (3) stochastic separated flow (SSF), where effects of both interphase slip and turbulence/particle interactions were considered using random sampling for turbulence properties in conjunction with random-walk computations for particle motion. Single-phase weakly swirling jets were considered first. Predictions using a standard k-epsilon turbulence model, as well as two versions modified to account for effects of streamline curvature, were compared with measurements. Predictions using a streamline curvature modification based on the flux Richardson number gave better agreement with measurements for the single-phase swirling jets than the standard k-epsilon model. For the particle-laden jets, the LHF and DSF models did not provide very satisfactory predictions. The LHF model generally overestimated the rate of decay of particle mean axial and angular velocities with streamwise distance, and predicted particle mass fluxes also showed poor agreement with measurements, due to the assumption of no-slip between phases. The DSF model also performed quite poorly for predictions of particle mass flux because turbulent dispersion of the particles was neglected. The SSF model, which accounts for both particle inertia and turbulent dispersion of the particles, yielded reasonably good predictions throughout the flow field for the particle-laden jets.
Sato, Tatsuhiko; Furusawa, Yoshiya
2012-10-01
Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
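For orientation, the non-stochastic microdosimetric kinetic model that both new models extend predicts the survival fraction from the dose-mean specific energy per event in a domain, z̄_1D, via −ln S = (α0 + β z̄_1D) D + β D². The sketch below evaluates that baseline expression with invented parameter values; the double-stochastic and stochastic variants in the paper instead use full probability densities of the specific energies computed with PHITS.

```python
import numpy as np

def mkm_survival(dose_gy, alpha0=0.13, beta=0.05, z1d_gy=1.5):
    """Baseline (non-stochastic) microdosimetric kinetic model:
    -ln S = (alpha0 + beta * z1D) * D + beta * D^2.
    Parameter values here are placeholders, not fitted data."""
    alpha = alpha0 + beta * z1d_gy
    return np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

doses = np.linspace(0, 8, 5)
for d, s in zip(doses, mkm_survival(doses)):
    print(f"D = {d:4.1f} Gy  ->  S = {s:.3e}")
```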
Trouble with diffusion: Reassessing hillslope erosion laws with a particle-based model
NASA Astrophysics Data System (ADS)
Tucker, Gregory E.; Bradley, D. Nathan
2010-03-01
Many geomorphic systems involve a broad distribution of grain motion length scales, ranging from a few particle diameters to the length of an entire hillslope or stream. Studies of analogous physical systems have revealed that such broad motion distributions can have a significant impact on macroscale dynamics and can violate the assumptions behind standard, local gradient flux laws. Here, a simple particle-based model of sediment transport on a hillslope is used to study the relationship between grain motion statistics and macroscopic landform evolution. Surface grains are dislodged by random disturbance events with probabilities and distances that depend on local microtopography. Despite its simplicity, the particle model reproduces a surprisingly broad range of slope forms, including asymmetric degrading scarps and cinder cone profiles. At low slope angles the dynamics are diffusion like, with a short-range, thin-tailed hop length distribution, a parabolic, convex upward equilibrium slope form, and a linear relationship between transport rate and gradient. As slope angle steepens, the characteristic grain motion length scale begins to approach the length of the slope, leading to planar equilibrium forms that show a strongly nonlinear correlation between transport rate and gradient. These high-probability, long-distance motions violate the locality assumption embedded in many common gradient-based geomorphic transport laws. The example of a degrading scarp illustrates the potential for grain motion dynamics to vary in space and time as topography evolves. This characteristic renders models based on independent, stationary statistics inapplicable. An accompanying analytical framework based on treating grain motion as a survival process is briefly outlined.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
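Given the fitted distributions reported above, the particle displacement process can be simulated directly by alternating gamma-distributed rest periods and gamma-distributed step lengths. The shape and scale values below are placeholders, not the flume-derived parameters, and the conditioning on bed elevation is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_particle(t_end, step_shape=2.0, step_scale=0.05,
                      rest_shape=1.5, rest_scale=30.0):
    """Alternating rest periods [s] and step lengths [m], both gamma distributed
    (shape/scale values are illustrative, not the flume-fitted parameters).
    Returns the total downstream distance travelled by time t_end (seconds)."""
    t, x = 0.0, 0.0
    while True:
        t += rng.gamma(rest_shape, rest_scale)      # rest period
        if t >= t_end:
            return x
        x += rng.gamma(step_shape, step_scale)      # step length

distances = [simulate_particle(3600.0) for _ in range(2000)]
print(np.mean(distances), np.std(distances))        # virtual transport statistics
```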
Low-order modeling of internal heat transfer in biomass particle pyrolysis
Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.
2016-05-11
We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
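A minimal version of the kind of one-dimensional calculation described above is an explicit finite-difference solution of transient conduction in an equivalent sphere with a prescribed surface temperature, with effective bulk thermal properties standing in for the anisotropic, porous microstructure. All property values below are rough placeholders for dry wood, not the paper's calibrated parameters.

```python
import numpy as np

# Hedged sketch: explicit 1D transient conduction in an equivalent sphere,
#   dT/dt = alpha * (1/r^2) d/dr (r^2 dT/dr),
# with a fixed reactor temperature imposed at the particle surface.
k, rho, cp = 0.2, 500.0, 1800.0        # W/m/K, kg/m3, J/kg/K (assumed bulk values)
alpha = k / (rho * cp)
R, N = 0.5e-3, 50                      # 1 mm equivalent-diameter particle
dr = R / N
r = np.linspace(0.0, R, N + 1)
dt = 0.1 * dr**2 / alpha               # well under the explicit stability limit
T = np.full(N + 1, 300.0)              # initial particle temperature [K]
T_surface = 773.0                      # imposed fast-pyrolysis temperature [K]

t, t_end = 0.0, 2.0
while t < t_end:
    T[-1] = T_surface
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dr**2 \
                + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2*dr)
    lap[0] = 6.0 * (T[1] - T[0]) / dr**2          # symmetry condition at r = 0
    T = T + alpha * dt * lap
    t += dt

print(f"center temperature after {t_end:.1f} s: {T[0]:.1f} K")
```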
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.
Phase space effects on fast ion distribution function modeling in tokamaks
Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.; ...
2016-04-14
Here, integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
NASA Astrophysics Data System (ADS)
Tucker, G. E.; Bradley, D. N.
2008-12-01
Many geomorphic transport laws assume that the transport process is local, meaning that the space and time scales of particle displacement are short relative to those of the system as a whole. This assumption allows one to express sediment flux in terms of at-a-point properties such as the local surface gradient. However, while this assumption is quite reasonable for some processes (for example, grain displacement by raindrop impact), it is questionable for others (such as landsliding). Moreover, particle displacement distance may also depend on slope angle, becoming longer as gradient increases. For example, the average motion distance during sediment ravel events on very steep slopes may approach the length of the entire hillslope. In such cases, the mass flux through a given point may depend not only on the local topography but also on topography some distance upslope, thus violating the locality assumption. Here we use a stochastic, particle-based model of hillslope evolution to gain insight into the potential for, and consequences of, nonlocality in sediment transport. The model is designed as a simple analogy for a host of different processes that displace sediment grains on hillslopes. The hillslope is represented as a two-dimensional pile of particles. These particles undergo quasi-random motion according to the following rules: (1) during each iteration, a particle and a direction are selected at random; (2) the particle hops in the direction of motion with a probability that depends on its height relative to that of its immediate neighbor; (3) the particle continues making hops in the same direction and with the same probability dependence, until coming to rest or exiting the base of the slope. The topography and motion statistics that emerge from these rules show a range of behavior that depends on a dimensionless relief parameter. At low relief, hillslope shape is parabolic, mean displacement length is on the order of two particle widths, and the probability distribution of displacement length is thin-tailed (approximately exponential). At high relief, hillslopes become planar, average displacement length increases by an order of magnitude, and the displacement-length distribution becomes heavy-tailed (albeit truncated at the slope length). Across the spectrum of relief values, the relationship between mean flux and gradient resembles the family of nonlinear flux-gradient curves that has been used to model hillslope evolution. We compare the emergent morphology and transport statistics with linear, nonlinear, and fractional diffusion models of hillslope transport.
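Rules (1)-(3) above translate almost directly into code. The sketch below implements a one-dimensional lattice version: columns of particles between two fixed base-level edges, a random column/direction pick, a hop probability that grows with the height drop to the neighbor, and repeated hops in the same direction until the particle settles or leaves the slope. The specific probability function, the uniform column pick, and all parameters are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def hop_probability(drop, scale=2.0):
    """Assumed form: hops become likely when the downhill drop is large."""
    return 1.0 - np.exp(-max(drop, 0) / scale)

def relax(heights, n_events=200000):
    """1D column-count analog of the particle model's hop rules."""
    h = heights.copy()
    n = len(h)
    for _ in range(n_events):
        i = rng.integers(1, n - 1)                 # pick an interior column
        d = rng.choice((-1, 1))                    # pick a direction
        while 0 < i < n - 1 and h[i] > 0:
            drop = h[i] - h[i + d]
            if rng.random() >= hop_probability(drop):
                break                              # particle comes to rest
            h[i] -= 1                              # particle hops one cell ...
            i += d
            if 0 < i < n - 1:
                h[i] += 1                          # ... unless it left the slope
    return h

h0 = np.zeros(41, dtype=int)
h0[1:-1] = 60                                      # flat-topped block of particles
print(relax(h0))                                   # edges erode, crest rounds off
```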
Physico-Chemical Dynamics of Nanoparticle Formation during Laser Decontamination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M.D.
2005-06-01
Laser-ablation based decontamination is a new and effective approach for simultaneous removal and characterization of contaminants from surfaces (e.g., building interior and exterior walls, ground floors, etc.). The scientific objectives of this research are to: (1) characterize particulate matter generated during the laser-ablation based decontamination, (2) develop a technique for simultaneous cleaning and spectroscopic verification, and (3) develop an empirical model for predicting particle generation for the size range from 10 nm to tens of micrometers. This research project provides fundamental data obtained through a systematic study on the particle generation mechanism, and also provides a working model for prediction of particle generation such that an effective operational strategy can be devised to facilitate worker protection.
Optical levitation particle delivery system for a dual beam fiber optic trap.
Gauthier, R C; Frangioudakis, A
2000-01-01
We combine a radiation-pressure-based levitation system with a dual fiber, laser trapping system to demonstrate the potential of delivering single particles into the fiber trap. The forces versus position and the trajectory of the particle subjected to the laser beams are examined with an enhanced ray optics model. A sequence of video images taken from the experimental apparatus demonstrates the principle of particle delivery, trapping, and further manipulation.
Controlling mixing and segregation in time periodic granular flows
NASA Astrophysics Data System (ADS)
Bhattacharya, Tathagata
Segregation is a major problem for many solids processing industries. Differences in particle size or density can lead to flow-induced segregation. In the present work, we employ the discrete element method (DEM)---one type of particle dynamics (PD) technique---to investigate the mixing and segregation of granular material in some prototypical solid handling devices, such as a rotating drum and chute. In DEM, one calculates the trajectories of individual particles based on Newton's laws of motion by employing suitable contact force models and a collision detection algorithm. Recently, it has been suggested that segregation in particle mixers can be thwarted if the particle flow is inverted at a rate above a critical forcing frequency. Further, it has been hypothesized that, for a rotating drum, the effectiveness of this technique can be linked to the probability distribution of the number of times a particle passes through the flowing layer per rotation of the drum. In the first portion of this work, various configurations of solid mixers are numerically and experimentally studied to investigate the conditions for improved mixing in light of these hypotheses. Besides rotating drums, many studies of granular flow have focused on gravity driven chute flows owing to their practical importance in granular transportation and to the fact that the relative simplicity of this type of flow allows for development and testing of new theories. In this part of the work, we observe the deposition behavior of both mono-sized and polydisperse dry granular materials in an inclined chute flow. The effects of different parameters such as chute angle, particle size, falling height and charge amount on the mass fraction distribution of granular materials after deposition are investigated. The simulation results obtained using DEM are compared with the experimental findings and a high degree of agreement is observed. Tuning of the underlying contact force parameters allows the achievement of realistic results and is used as a means of validating the model against available experimental data. The tuned model is then used to find the critical chute length for segregation based on the hypothesis that segregation can be thwarted if the particle flow is inverted at a rate above a critical forcing frequency. The critical frequency, f_crit, is inversely proportional to the characteristic time of segregation, t_s. Mixing is observed instead of segregation when the chute length L < U_avg·t_s, where U_avg denotes the average stream-wise flow velocity of the particles. While segregation is often an undesired effect, sometimes separating the components of a particle mixture is the ultimate goal. Rate-based separation processes hold promise as both more environmentally benign and less energy intensive when compared to conventional particle separations technologies such as vibrating screens or flotation methods. This approach is based on differences in the kinetic properties of the components of a mixture, such as the velocity of migration or diffusivity. In this portion of the work, two examples of novel rate-based separation devices are demonstrated. The first example involves the study of the dynamics of gravity-driven particles through an array of obstacles. Both discrete element (DEM) simulations and experiments are used to augment the understanding of this device.
Dissipative collisions (both between the particles themselves and with the obstacles) give rise to a diffusive motion of particles perpendicular to the flow direction and the differences in diffusion lengths are exploited to separate the particles. The second example employs DEM to analyze a ratchet mechanism where a current of particles can be produced in a direction perpendicular to the energy input. In this setup, a vibrating saw-toothed base is employed to induce different mobility for different types of particles. The effect of operating conditions and design parameters on the separation efficiency are discussed. Keywords: granular flow, particle, mixing, segregation, discrete element method, particle dynamics, tumbler, chute, periodic flow inversion, collisional flow, rate-based separation, ratchet, static separator, dissipative particle dynamics, non-spherical droplet.
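The chute design criterion stated above (mixing is preserved when L < U_avg·t_s) is easy to apply once the segregation time is known. A tiny worked example with invented numbers follows.

```python
# Worked example of the chute design criterion L < U_avg * t_s
# (all numbers are invented for illustration).
U_avg = 0.8        # average stream-wise particle velocity [m/s]
t_s   = 0.5        # characteristic segregation time [s]
L_crit = U_avg * t_s
print(f"critical chute length ~ {L_crit:.2f} m; keep L below this to avoid segregation")
```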
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
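To make the "moving particles and gathering their moments on a grid" step concrete, the sketch below shows a serial cloud-in-cell density deposit and a simple explicit particle push on a one-dimensional periodic grid. It is plain Python rather than CUDA, and none of AMITIS's actual kernels, field solver, or electron fluid closure is reproduced; the field, parameters, and particle counts are invented.

```python
import numpy as np

# Hedged sketch of two hybrid-PIC building blocks on a 1D periodic grid:
# cloud-in-cell (CIC) deposition of ion density and a simple explicit push.
L, Ng = 1.0, 64
dx = L / Ng
rng = np.random.default_rng(0)
x = rng.uniform(0, L, 10000)                 # particle positions
v = rng.normal(0.0, 1.0, x.size)             # particle velocities

def deposit_cic(x, Ng, dx):
    """Linear-weighting deposit of unit-weight particles onto grid nodes."""
    xi = x / dx
    i0 = np.floor(xi).astype(int) % Ng
    w1 = xi - np.floor(xi)                   # weight assigned to the right node
    rho = np.bincount(i0, weights=1.0 - w1, minlength=Ng) \
        + np.bincount((i0 + 1) % Ng, weights=w1, minlength=Ng)
    return rho / dx

def gather_cic(field, x, Ng, dx):
    """Interpolate a grid field back to particle positions (same weights)."""
    xi = x / dx
    i0 = np.floor(xi).astype(int) % Ng
    w1 = xi - np.floor(xi)
    return field[i0] * (1.0 - w1) + field[(i0 + 1) % Ng] * w1

E = np.sin(2 * np.pi * np.arange(Ng) / Ng)   # placeholder electric field on the grid
dt, qm = 1e-3, 1.0                           # timestep, charge-to-mass ratio
for _ in range(100):
    v += qm * gather_cic(E, x, Ng, dx) * dt  # accelerate
    x = (x + v * dt) % L                     # move with periodic wrap
rho = deposit_cic(x, Ng, dx)
print(rho.mean(), rho.std())
```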
Mathematical modeling of HIV-like particle assembly in vitro.
Liu, Yuewu; Zou, Xiufen
2017-06-01
In vitro, the recombinant HIV-1 Gag protein can generate spherical particles with a diameter of 25-30 nm in a fully defined system. It has approximately 80 building blocks, and its intermediates for assembly are abundant in geometry. Accordingly, there are a large number of nonlinear equations in the classical model. Therefore, it is difficult to compute values of geometry parameters for intermediates and make the mathematical analysis using the model. In this work, we develop a new model of HIV-like particle assembly in vitro by using six-fold symmetry of HIV-like particle assembly to decrease the number of geometry parameters. This method will greatly reduce computational costs and facilitate the application of the model. Then, we prove the existence and uniqueness of the positive equilibrium solution for this model with 79 nonlinear equations. Based on this model, we derive the interesting result that concentrations of all intermediates at equilibrium are independent of three important parameters, including two microscopic on-rate constants and the size of nucleating structure. Before equilibrium, these three parameters influence the concentration variation rates of all intermediates. We also analyze the relationship between the initial concentration of building blocks and concentrations of all intermediates. Furthermore, the bounds of concentrations of free building blocks and HIV-like particles are estimated. These results will be helpful to guide HIV-like particle assembly experiments and improve our understanding of the assembly dynamics of HIV-like particles in vitro. Copyright © 2017 Elsevier Inc. All rights reserved.
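The kind of intermediate-tracking system described above can be illustrated, at a much smaller scale, with a Becker-Döring-style set of assembly ODEs in which clusters grow and shrink by one building block at a time. The 79-equation HIV-specific model with six-fold symmetry is not reproduced here; the rate constants, initial monomer concentration, and maximum cluster size below are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: Becker-Doring-style assembly of clusters of size 1..M by
# monomer addition/loss (a toy stand-in for the 79-intermediate model).
M = 20                 # largest cluster tracked (invented; the real model has ~80 blocks)
kon, koff = 1.0, 0.2   # invented on/off rate constants

def rhs(t, c):
    """c[i] is the concentration of clusters containing i+1 building blocks."""
    dc = np.zeros_like(c)
    grow = kon * c[0] * c[:-1]         # i-mer + monomer -> (i+1)-mer
    shrink = koff * c[1:]              # (i+1)-mer -> i-mer + monomer
    dc[:-1] += -grow + shrink          # loss/gain of the smaller cluster
    dc[1:]  += grow - shrink           # gain/loss of the larger cluster
    dc[0]   += -np.sum(grow) + np.sum(shrink)   # monomer consumed/released
    return dc

c0 = np.zeros(M)
c0[0] = 10.0                           # start from free building blocks only
sol = solve_ivp(rhs, (0, 50), c0, method="LSODA", rtol=1e-8)
print(sol.y[:, -1])                    # near-equilibrium cluster size distribution
```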
Simulation of solid-liquid flows in a stirred bead mill based on computational fluid dynamics (CFD)
NASA Astrophysics Data System (ADS)
Winardi, S.; Widiyastuti, W.; Septiani, E. L.; Nurtono, T.
2018-05-01
The selection of a simulation model is an important step in computational fluid dynamics (CFD) for obtaining agreement with experimental work. In addition, computational time and processor speed also influence the performance of the simulation. Here, we report the simulation of solid-liquid flow in a bead mill using the Eulerian model. A Multiple Reference Frame (MRF) approach was used to model the interaction between the moving zone (shaft and disk) and the stationary zone (chamber excluding shaft and disk). The bead mill dimensions were based on the experimental work of Yamada and Sakai (2013). The effect of shaft rotation speeds of 1200 and 1800 rpm on the particle distribution and the flow field is discussed. For a rotation speed of 1200 rpm, the particles spread evenly throughout the bead mill chamber. For a rotation speed of 1800 rpm, on the other hand, the particles tend to be thrown toward the near-wall region, resulting in a dead zone with no particles in the center region. The selected model agreed well with the experimental data, with average discrepancies of less than 10%, and the simulation ran without excessive computational cost.
Analysis of Gas-Particle Flows through Multi-Scale Simulations
NASA Astrophysics Data System (ADS)
Gu, Yile
Multi-scale structures are inherent in gas-solid flows, which render the modeling efforts challenging. On one hand, detailed simulations in which the fine structures are resolved and particle properties can be directly specified can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand much less computation, but they necessitate constitutive models that are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework, in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations in which the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles to be used for CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that it can qualitatively capture experimental results in which the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects of fine structures and inter-particle cohesion. We show that this novel approach using neural networks, which can readily be applied to closures other than drag, can take advantage of the large amount of data generated from simulations, and therefore offers superior modeling performance over traditional approaches.
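As a sketch of the closure-fitting step described above, the snippet below regresses a drag correction factor from coarse-grained quantities with a small neural network. The feature names and the synthetic training data are placeholders standing in for filtered CFD-DEM results; only the workflow is illustrated.

```python
# Hedged sketch: fit a neural-network drag correction from (synthetic)
# coarse-grained features. The "true" correction below is a made-up function
# standing in for fine-grid CFD-DEM data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
phi_s = rng.uniform(0.0, 0.5, n)      # filtered solid volume fraction
slip = rng.uniform(0.0, 2.0, n)       # filtered gas-solid slip velocity (m/s)
bond = rng.uniform(0.0, 5.0, n)       # granular Bond number (cohesion strength)

# Placeholder "true" drag correction; in practice this comes from fine-grid data.
h_drag = 1.0 / (1.0 + 5.0 * phi_s) * np.exp(-0.1 * bond) + 0.05 * slip
X = np.column_stack([phi_s, slip, bond])
y = h_drag + rng.normal(0.0, 0.01, n)  # noise mimics scatter in the filtered data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```

The trained regressor would then be evaluated inside the coarse-grained solver in place of a fixed drag correlation; that coupling step is outside the scope of this sketch.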
Hahn, Melinda W; O'Melia, Charles R
2004-01-01
The deposition and reentrainment of particles in porous media have been examined theoretically and experimentally. A Brownian Dynamics/Monte Carlo (MC/BD) model has been developed that simulates the movement of Brownian particles near a collector under "unfavorable" chemical conditions and allows deposition in primary and secondary minima. A simple Maxwell approach has been used to estimate particle attachment efficiency by assuming deposition in the secondary minimum and calculating the probability of reentrainment. The MC/BD simulations and the Maxwell calculations support an alternative view of the deposition and reentrainment of Brownian particles under unfavorable chemical conditions. These calculations indicate that deposition into and subsequent release from secondary minima can explain reported discrepancies between classic model predictions that assume irreversible deposition in a primary well and experimentally determined deposition efficiencies that are orders of magnitude larger than Interaction Force Boundary Layer (IFBL) predictions. The commonly used IFBL model, for example, is based on the notion of transport over an energy barrier into the primary well and does not address contributions of secondary minimum deposition. A simple Maxwell model based on deposition into and reentrainment from secondary minima is much more accurate in predicting deposition rates for column experiments at low ionic strengths. It also greatly reduces the substantial particle size effects inherent in IFBL models, wherein particle attachment rates are predicted to decrease significantly with increasing particle size. This view is consistent with recent work by others addressing the composition and structure of the first few nanometers at solid-water interfaces including research on modeling water at solid-liquid interfaces, surface speciation, interfacial force measurements, and the rheological properties of concentrated suspensions. It follows that deposition under these conditions will depend on the depth of the secondary minimum and that some transition between secondary and primary depositions should occur when the height of the energy barrier is on the order of several kT. When deposition in secondary minima predominates, observed deposition should increase with increasing ionic strength, particle size, and Hamaker constant. Since an equilibrium can develop between bound and bulk particles, the collision efficiency α can no longer be considered a constant for a given physical and chemical system. Rather, in many cases it can decrease over time until it eventually reaches zero as equilibrium is established.
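The Maxwell-type estimate mentioned above can be sketched as follows: assuming the kinetic energy of the Brownian particles follows a Maxwell-Boltzmann distribution, the fraction energetic enough to escape a secondary minimum of a given depth is computed, and the attachment efficiency is taken as the retained fraction. The specific well depths below are illustrative, not the values used by the authors.

```python
# Hedged sketch of a Maxwell-type escape estimate for particles held in a
# secondary energy minimum of depth E_min (in units of kT).
import math

def escape_fraction(depth_kt):
    """Fraction of particles with kinetic energy exceeding the well depth,
    for a Maxwell-Boltzmann kinetic-energy distribution."""
    x = depth_kt
    cdf = math.erf(math.sqrt(x)) - 2.0 * math.sqrt(x / math.pi) * math.exp(-x)
    return 1.0 - cdf

for depth in (1.0, 3.0, 5.0, 10.0):
    alpha = 1.0 - escape_fraction(depth)   # attachment efficiency estimate
    print(f"well depth {depth:4.1f} kT -> alpha ~ {alpha:.3f}")
```

Deeper secondary minima (higher ionic strength, larger particles, larger Hamaker constant) retain a larger fraction of particles, which is the qualitative trend the abstract describes.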
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Shtivelman, A.; Krichak, S. O.; Joseph, J. H.; Kallos, G.; Katsafados, P.; Spyrou, C.; Gobbi, G. P.; Barnaba, F.; Nickovic, S.; PéRez, C.; Baldasano, J. M.
2007-08-01
In this study, forecast errors in dust vertical distributions were analyzed. This was carried out by using quantitative comparisons between dust vertical profiles retrieved from lidar measurements over Rome, Italy, performed from 2001 to 2003, and those predicted by models. Three models were used: the four-particle-size Dust Regional Atmospheric Model (DREAM), the older one-particle-size version of the SKIRON model from the University of Athens (UOA), and the pre-2006 one-particle-size Tel Aviv University (TAU) model. SKIRON and DREAM are initialized on a daily basis using the dust concentration from the previous forecast cycle, while the TAU model initialization is based on the Total Ozone Mapping Spectrometer aerosol index (TOMS AI). The quantitative comparison shows that (1) the use of four-particle-size bins in the dust modeling instead of only one-particle-size bins improves dust forecasts; (2) cloud presence could contribute to noticeable dust forecast errors in SKIRON and DREAM; and (3) as far as the TAU model is concerned, its forecast errors were mainly caused by technical problems with TOMS measurements from the Earth Probe satellite. As a result, dust forecast errors in the TAU model could be significant even under cloudless conditions. The DREAM versus lidar quantitative comparisons at different altitudes show that the model predictions are more accurate in the middle part of dust layers than in the top and bottom parts of dust layers.
Vector-based model of elastic bonds for simulation of granular solids.
Kuzkin, Vitaly A; Asonov, Igor E
2012-11-01
A model (further referred to as the V model) for the simulation of granular solids, such as rocks, ceramics, concrete, nanocomposites, and agglomerates, composed of bonded particles (rigid bodies), is proposed. It is assumed that the bonds, usually representing some additional gluelike material connecting the particles, cause both forces and torques acting on the particles. Vectors rigidly connected with the particles are used to describe the deformation of a single bond. The expression for the potential energy of the bond and corresponding expressions for forces and torques are derived. Formulas connecting the parameters of the model with the longitudinal, shear, bending, and torsional stiffnesses of the bond are obtained. It is shown that the model makes it possible to describe any values of the bond stiffnesses exactly; that is, the model is applicable to bonds with an arbitrary length/thickness ratio. Two different calibration procedures, depending on the bond length/thickness ratio, are proposed. It is shown that the parameters of the model can be chosen so that under small deformations the bond is equivalent to a Bernoulli-Euler beam, a Timoshenko beam, or a short cylinder connecting the particles. Simple analytical expressions relating the parameters of the V model to the geometrical and mechanical characteristics of the bond are derived. Two simple examples of computer simulation of thin granular structures using the V model are given.
Koivisto, Antti J; Jensen, Alexander C Ø; Kling, Kirsten I; Kling, Jens; Budtz, Hans Christian; Koponen, Ismo K; Tuinman, Ilse; Hussein, Tareq; Jensen, Keld A; Nørgaard, Asger; Levin, Marcus
2018-01-05
Here, we studied the particle release rate during electrostatic spray deposition of an anatase (TiO2)-based photoactive coating onto tiles and wallpaper using a commercially available electrostatic spray device. Spraying was performed in a 20.3 m³ test chamber while measuring concentrations of 5.6 nm to 31 μm particles and volatile organic compounds (VOC), as well as particle deposition onto room surfaces and onto the spray gun user's hand. The particle emission and deposition rates were quantified using aerosol mass balance modelling. The geometric mean particle number emission rate was 1.9×10¹⁰ s⁻¹ and the mean mass emission rate was 381 μg s⁻¹. The respirable mass emission rate was 65% lower than that observed for the entire measured size range. The mass emission rates were linearly scalable (±ca. 20%) to the process duration. The particle deposition rates were up to 15 h⁻¹ for <1 μm particles, and the deposited particles consisted mainly of TiO2, TiO2 mixed with Cl and/or Ag, TiO2 particles coated with carbon, and Ag particles with sizes ranging from 60 nm to ca. 5 μm. As expected, no significant VOC emissions were observed as a result of spraying. Finally, we provide recommendations for exposure model parameterization. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
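The aerosol mass balance used to relate emission and deposition rates to measured concentrations can be sketched as a single well-mixed chamber with a source term and first-order ventilation and deposition losses. The chamber volume and mass emission rate below come from the abstract; the ventilation rate, deposition rate, and spray duration are assumptions for illustration.

```python
# Hedged sketch of a single-zone aerosol mass balance:
#   dC/dt = S/V - (k_vent + k_dep) * C
import numpy as np

V = 20.3                 # chamber volume, m^3 (from the abstract)
S = 381.0                # mass emission rate during spraying, ug/s (from the abstract)
k_vent = 0.5 / 3600.0    # ventilation rate, 1/s (assumed)
k_dep = 5.0 / 3600.0     # deposition rate, 1/s (assumed, within the reported range)

dt, t_end = 1.0, 1800.0  # assumed 30 min of spraying, 1 s time step
t = np.arange(0.0, t_end + dt, dt)
c = np.zeros_like(t)     # mass concentration, ug/m^3
for i in range(1, t.size):
    dcdt = S / V - (k_vent + k_dep) * c[i - 1]
    c[i] = c[i - 1] + dcdt * dt

print(f"concentration after 30 min: {c[-1]:.0f} ug/m^3 "
      f"(steady state {S / V / (k_vent + k_dep):.0f} ug/m^3)")
```

In practice the emission rate is obtained the other way around, by fitting this balance to the measured concentration time series, and size-resolved loss rates replace the single deposition constant used here.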
NASA Technical Reports Server (NTRS)
Matichuk, R. I.; Smith, J. A.; Toon, O. B.; Colarco, P. R.
2006-01-01
Annually, farmers in southern Africa manage their land resources and prepare their fields for cultivation by burning crop residual debris, with a peak in the burning season occurring during August and September. The emissions from these fires in southern Africa are among the greatest from fires worldwide, and the gases and aerosol particles produced adversely affect air quality large distances from their source regions, and can even be tracked in satellite imagery as they cross the Atlantic and Pacific Ocean basins. During August and September 2000 an international group of researchers participating in the Southern African Regional Science Initiative field experiment (SAFARI 2000) made extensive ground-based, airborne, and satellite measurements of these gases and aerosols in order to quantify their amounts and effects on Earth's atmosphere. In this study we interpreted the measurements of smoke aerosol particles made during SAFARI 2000 in order to better represent these particles in a numerical model simulating their transport and fate. Typically, smoke aerosols emitted from fires are concentrated by mass in particles about 0.3 micrometers in diameter (1,000,000 micrometers = 1 meter, about 3 feet); for comparison, the thickness of a human hair is about 50 micrometers, almost 200 times as great. Because of the size of these particles, at the surface they can be easily inhaled into the lungs, and in high concentrations they have deleterious health effects on humans. Additionally, these particles reflect and absorb sunlight, impacting both visibility and the balance of sunlight reaching Earth's surface, and ultimately play a role in modulating Earth's climate. Because of these important effects, it is important that numerical models used to estimate Earth's climate response to changes in atmospheric composition accurately represent the quantity and evolution of smoke particles. In our model, called the Community Aerosol and Radiation Model for Atmospheres (CARMA), we used estimates of smoke emissions based on field studies and observations made with the NASA Terra and TRMM satellites. The meteorology used to calculate the transport was based on an assimilation of observed meteorological conditions provided by the National Center for Atmospheric Research.
Sherman, H; Nguyen, A V; Bruckard, W
2016-11-22
Atomic force microscopy makes it possible to measure the interaction forces between individual colloidal particles and air bubbles, which can provide a measure of the particle hydrophobicity. To indicate the level of hydrophobicity of the particle, the contact angle can be calculated, assuming that no interfacial deformation occurs and that the bubble retains a spherical profile. Our experimental results, obtained using a modified sphere tensiometry apparatus to detach submillimeter spherical particles, show that deformation of the bubble interface does occur during particle detachment. We also develop a theoretical model to describe the equilibrium shape of the bubble meniscus at any given particle position, based on the minimization of the free energy of the system. The developed model allows us to analyze high-speed video captured during detachment. In the model, deformation of the bubble profile is accounted for by incorporating a Lagrange multiplier into both the Young-Laplace equation and the force balance. The solution of the bubble profile matched to the high-speed video allows us to accurately calculate the contact angle and determine the total force balance as a function of the contact point of the bubble on the particle surface.
NASA Astrophysics Data System (ADS)
Ancey, Christophe; Bohorquez, Patricio; Heyman, Joris
2016-04-01
The advection-diffusion equation arises quite often in the context of sediment transport, e.g., for describing time and space variations in the particle activity (the solid volume of particles in motion per unit streambed area). Stochastic models can also be used to derive this equation, with the significant advantage that they provide information on the statistical properties of particle activity. Stochastic models are quite useful when sediment transport exhibits large fluctuations (typically at low transport rates), making the measurement of mean values difficult. We develop an approach based on birth-death Markov processes, which involves monitoring the evolution of the number of particles moving within an array of cells of finite length. While the topic has been explored in detail for diffusion-reaction systems, the treatment of advection has received little attention. We show that particle advection produces nonlocal effects, which are more or less significant depending on the cell size and particle velocity. Albeit nonlocal, these effects look like (local) diffusion and add to the intrinsic particle diffusion (dispersal due to velocity fluctuations), with the important consequence that local measurements depend on both the intrinsic properties of particle displacement and the dimensions of the measurement system.
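A minimal discrete-time sketch of the cell-based birth-death framework described above is given below: entrainment acts as a birth process, deposition as a death process, and advection carries particles to the next cell downstream. The rates and cell count are arbitrary assumptions, and a full treatment would use an exact stochastic simulation rather than this small-time-step approximation.

```python
# Hedged sketch: particles in an array of cells with entrainment (birth),
# deposition (death), and downstream advection. Rates are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_cells, dt, steps = 20, 1e-3, 50_000
lam, sigma, nu = 5.0, 1.0, 2.0         # entrainment, deposition, advection rates (assumed)

n = np.zeros(n_cells, dtype=np.int64)
activity = []
for _ in range(steps):
    births = rng.poisson(lam * dt, n_cells)        # entrainment into each cell
    deaths = rng.binomial(n, sigma * dt)           # deposition onto the bed
    moves = rng.binomial(n - deaths, nu * dt)      # advection to the next cell downstream
    n = n - deaths - moves + births
    n[1:] += moves[:-1]                            # open boundary: particles leave the last cell
    activity.append(n.mean())

activity = np.array(activity[steps // 2:])          # discard the initial transient
print(f"mean activity {activity.mean():.2f} particles/cell, "
      f"variance {activity.var():.2f}")
```

Comparing the mean and variance of the cell counts for different cell sizes is one way to see the apparent, advection-induced diffusion discussed in the abstract; that comparison is left out of this sketch.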
Microphysical modeling of Titan's detached haze layer in a 3D GCM
NASA Astrophysics Data System (ADS)
Larson, Erik J. L.; Toon, Owen B.; West, Robert A.; Friedson, A. James
2015-07-01
We use a 3D GCM with coupled aerosol microphysics to investigate the formation and seasonal cycle of the detached haze layer in Titan's upper atmosphere. The base of the detached haze layer is defined by a local minimum in the vertical extinction profile. The detached haze is seen at all latitudes including the south pole as seen in Cassini images from 2005-2012. The layer merges into the winter polar haze at high latitudes where the Hadley circulation carries the particles downward. The hemisphere in which the haze merges with the polar haze varies with season. We find that the base of the detached haze layer occurs where there is a near balance between vertical winds and particle fall velocities. Generally the vertical variation of particle concentration in the detached haze region is simply controlled by sedimentation, so the concentration and the extinction vary roughly in proportion to air density. This variation explains why the upper part of the main haze layer, and the bulk of the detached haze layer, follow exponential profiles. However, the shape of the profile is modified in regions where the vertical wind velocity is comparable to the particle fall velocity. Our simulations closely match the period when the base of the detached layer in the tropics is observed to begin its seasonal drop in altitude, and the total range of the altitude drop. However, the simulations have the base of the detached layer about 100 km lower than observed, and the base descends more slowly in the simulations than observed. These differences may point to the model having somewhat lower vertical winds than occur on Titan, or somewhat too large particle sizes, or some combination of both. Our model is consistent with a dynamical origin for the detached haze rather than a chemical or microphysical one. This balance between the vertical wind and particle fall velocities occurs throughout the summer hemisphere and tropics. The particle concentration gradients that are established in the summer hemisphere are transported to the winter hemisphere by meridional winds from the overturning Hadley cell. Our model is consistent with the disappearance of the detached haze layer in early 2014. Our simulations predict the detached haze and gap will reemerge at its original high altitude between mid 2014 and early 2015.
Microphysical Modeling of Titan's Detached Haze Layer in a 3D GCM
NASA Astrophysics Data System (ADS)
Larson, Erik J.; Toon, Owen B.; West, Robert A.; Friedson, A. James
2015-11-01
We investigate the formation and seasonal cycle of the detached haze layer in Titan’s upper atmosphere using a 3D GCM with coupled aerosol microphysics. The base of the detached haze layer is defined by a local minimum in the vertical extinction profile. The detached haze is seen at all latitudes including the south pole as seen in Cassini images from 2005-2012. The layer merges into the winter polar haze at high latitudes where the Hadley circulation carries the particles downward. The hemisphere in which the haze merges with the polar haze varies with season. We find that the base of the detached haze layer occurs where there is a near balance between vertical winds and particle fall velocities. Generally the vertical variation of particle concentration in the detached haze region is simply controlled by sedimentation, so the concentration and the extinction vary roughly in proportion to air density. This variation explains why the upper part of the main haze layer, and the bulk of the detached haze layer, follow exponential profiles. However, the shape of the profile is modified in regions where the vertical wind velocity is comparable to the particle fall velocity. Our simulations closely match the period when the base of the detached layer in the tropics is observed to begin its seasonal drop in altitude, and the total range of the altitude drop. However, the simulations have the base of the detached layer about 100 km lower than observed, and the base descends more slowly in the simulations than observed. These differences may point to the model having somewhat lower vertical winds than occur on Titan, or somewhat too large particle sizes, or some combination of both. Our model is consistent with a dynamical origin for the detached haze rather than a chemical or microphysical one. This balance between the vertical wind and particle fall velocities occurs throughout the summer hemisphere and tropics. The particle concentration gradients that are established in the summer hemisphere are transported to the winter hemisphere by meridional winds from the overturning Hadley cell. Our model is consistent with the disappearance of the detached haze layer in early 2014.
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several approaches have been developed to estimate the size distribution of the sediment leaving the field, including physically-based models and empirical equations. Physically-based models usually require a large amount of data, sometimes exceeding the amount available in the modeled area. Conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils were used. These data were compiled from previous studies. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD. These variables were the particle size distribution of the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r² = 0.98, 0.97, and 0.86 for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r² = 0.74, 0.85, and 0.75 for sand, silt, and clay in the sediment, respectively). The artificial neural network was compared with three empirical models. The network presented better performance in predicting sediment PSD and differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, this model requires a small number of easily obtained variables, providing a convenient routine for predicting the PSD of eroded sediment in other pollutant transport models. Copyright © 2017 Elsevier B.V. All rights reserved.
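The regression setup described above can be sketched with a small multi-output neural network mapping soil and event descriptors to the sand, silt, and clay fractions of the eroded sediment. The feature list follows the abstract, but the training data here are synthetic placeholders, since the 41 compiled events are not reproduced in this document.

```python
# Hedged sketch: multi-output ANN regression of sediment PSD from soil and
# event descriptors. All data below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),   # sand fraction of the soil matrix
    rng.uniform(0.0, 1.0, n),   # antecedent soil moisture (normalized)
    rng.uniform(0.0, 0.1, n),   # soil erodibility factor
    rng.uniform(0.0, 0.2, n),   # hillslope gradient
])
# Placeholder targets: sediment PSD loosely tied to the soil matrix PSD.
sand = 0.8 * X[:, 0] + 0.05 * rng.normal(size=n)
clay = 0.5 * (1.0 - X[:, 0]) + 0.05 * rng.normal(size=n)
silt = 1.0 - sand - clay
y = np.clip(np.column_stack([sand, silt, clay]), 0.0, 1.0)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])          # calibration subset
print("validation R^2:", model.score(X[150:], y[150:]))
```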
Micromechanics-based magneto-elastic constitutive modeling of particulate composites
NASA Astrophysics Data System (ADS)
Yin, Huiming
Modified Green's functions are derived for three situations: a magnetic field caused by a local magnetization, a displacement field caused by a local body force and a displacement field caused by a local prescribed eigenstrain. Based on these functions, an explicit solution is derived for two magnetic particles embedded in the infinite medium under external magnetic and mechanical loading. A general solution for numerable magnetic particles embedded in an infinite domain is then provided in integral form. Two-phase composites containing spherical magnetic particles of the same size are considered for three kinds of microstructures. With chain-structured composites, particle interactions in the same chain are considered and a transversely isotropic effective elasticity is obtained. For periodic composites, an eight-particle interaction model is developed and provides a cubic symmetric effective elasticity. In the random composite, pair-wise particle interactions are integrated from all possible positions and an isotropic effective property is reached. This method is further extended to functionally graded composites. Magneto-mechanical behavior is studied for the chain-structured composite and the random composite. Effective magnetic permeability, effective magnetostriction and field-dependent effective elasticity are investigated. It is seen that the chain-structured composite is more sensitive to the magnetic field than the random composite; a composite consisting of only 5% of chain-structured particles can provide a larger magnetostriction and a larger change of effective elasticity than an equivalent composite consisting of 30% of random dispersed particles. Moreover, the effective shear modulus of the chain-structured composite rapidly increases with the magnetic field, while that for the random composite decreases. An effective hyperelastic constitutive model is further developed for a magnetostrictive particle-filled elastomer, which is sampled by using a network of body-centered cubic lattices of particles connected by macromolecular chains. The proposed hyperelastic model is able to characterize overall nonlinear elastic stress-stretch relations of the composites under general three-dimensional loading. It is seen that the effective strain energy density is proportional to the length of stretched chains in unit volume and volume fraction of particles.
A simplified method for assessing particle deposition rate in aircraft cabins
NASA Astrophysics Data System (ADS)
You, Ruoyu; Zhao, Bin
2013-03-01
Particle deposition in aircraft cabins is important for passengers' exposure to particulate matter, as well as to airborne infectious diseases. In this study, a simplified method is proposed for an initial, quick assessment of the particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity based on the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully-occupied, half-occupied, 1/4-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that the occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, the simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
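The surface-by-surface procedure outlined above can be sketched as follows. The authors' empirical deposition-velocity correlation is not reproduced here, so a placeholder function of friction velocity and particle size stands in for it, and the surface data and cabin volume are likewise illustrative.

```python
# Hedged sketch of the assessment procedure: per-surface friction velocity,
# per-surface deposition velocity, then a volume-normalized deposition rate.
import math

surfaces = [
    # (name, area m^2, characteristic length m, freestream velocity m/s) - assumed
    ("floor",   12.0, 3.0, 0.15),
    ("ceiling", 12.0, 3.0, 0.20),
    ("walls",   18.0, 2.0, 0.10),
    ("seats",   10.0, 0.5, 0.05),
]
cabin_volume = 28.0  # m^3 (assumed)

def friction_velocity(u_free, length, nu=1.5e-5):
    """Flat-plate estimate of friction velocity from the freestream velocity."""
    re = u_free * length / nu
    cf = 0.074 * re ** -0.2 if re > 5e5 else 1.328 * re ** -0.5
    return u_free * math.sqrt(cf / 2.0)

def deposition_velocity(u_star, dp_um=1.0):
    """Placeholder for the empirical correlation referenced in the abstract."""
    return 1e-3 * u_star * dp_um ** 0.5   # m/s, illustrative only

rate = sum(deposition_velocity(friction_velocity(u, l)) * a
           for _, a, l, u in surfaces) / cabin_volume
print(f"particle deposition rate ~ {rate * 3600:.3f} 1/h")
```

The deposition rate is simply the area-weighted sum of deposition velocities divided by the cabin volume, which is why the result is only weakly sensitive to occupancy once the added surface area of passengers is small compared with the cabin surfaces.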
NASA Astrophysics Data System (ADS)
Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Strohbach, Jens; Förstner, Jochen; Potthast, Roland
2017-12-01
A new backscatter lidar forward operator was developed which is based on the distinct calculation of the aerosols' backscatter and extinction properties. The forward operator was adapted to the COSMO-ART ash dispersion simulation of the Eyjafjallajökull eruption in 2010. While the particle number concentration was provided as a model output variable, the scattering properties of each individual particle type were determined by dedicated scattering calculations. Sensitivity studies were performed to estimate the uncertainties related to the assumed particle properties. Scattering calculations for several types of non-spherical particles required the use of T-matrix routines. Owing to the distinct calculation of the backscatter and extinction properties of the model's volcanic ash size classes, the sensitivity studies could be made for each size class individually, which is not the case for forward models based on a fixed lidar ratio. Finally, the forward-modeled lidar profiles were compared to automated ceilometer lidar (ACL) measurements both qualitatively and quantitatively, with the attenuated backscatter coefficient chosen as a suitable physical quantity. As the ACL measurements were not calibrated automatically, their calibration had to be performed using satellite lidar and ground-based Raman lidar measurements. A slight overestimation of the model-predicted volcanic ash number density was observed. Major requirements for the future assimilation of ACL data have been identified, namely, the availability of calibrated lidar measurement data, a scattering database for atmospheric aerosols, a better representation and coverage of aerosols by the ash dispersion model, and further investigation of backscatter lidar forward operators that calculate the backscatter coefficient directly for each individual aerosol type. The introduced forward operator offers the flexibility to be adapted to a multitude of model systems and measurement setups.
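The core of such a forward operator can be sketched as below: given model number densities for each size class and per-class backscatter and extinction cross sections from dedicated scattering calculations, the attenuated backscatter profile seen by a ground-based lidar is assembled. The cross sections and the lofted-layer profile are placeholders, not COSMO-ART output, and molecular scattering is omitted.

```python
# Hedged sketch of a backscatter lidar forward operator for size-resolved ash.
import numpy as np

z = np.linspace(0.0, 10e3, 200)            # altitude grid, m
dz = z[1] - z[0]

# number density of each size class (m^-3): a lofted ash layer between 3 and 6 km
n_class = np.zeros((3, z.size))
layer = (z > 3e3) & (z < 6e3)
n_class[:, layer] = np.array([[5e6], [2e6], [5e5]])

sigma_back = np.array([1e-12, 5e-12, 2e-11])   # backscatter cross sections, m^2 sr^-1
sigma_ext = np.array([2e-11, 1e-10, 4e-10])    # extinction cross sections, m^2

beta = sigma_back @ n_class                    # total backscatter coefficient, m^-1 sr^-1
alpha = sigma_ext @ n_class                    # total extinction coefficient, m^-1
tau = np.cumsum(alpha) * dz                    # one-way optical depth from the ground
beta_att = beta * np.exp(-2.0 * tau)           # attenuated backscatter seen by the lidar

print(f"peak attenuated backscatter: {beta_att.max():.2e} m^-1 sr^-1")
```

Because each size class carries its own cross sections, the sensitivity to an individual class is obtained simply by perturbing its row, which is the per-class flexibility the abstract contrasts with fixed-lidar-ratio operators.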
Absorption and Clearance of Pharmaceutical Aerosols in the Human Nose: Development of a CFD Model.
Rygg, Alex; Longest, P Worth
2016-10-01
The objective of this study was to develop a computational fluid dynamics (CFD) model to predict the deposition, dissolution, clearance, and absorption of pharmaceutical particles in the human nasal cavity. A three-dimensional nasal cavity geometry was converted to a surface-based model, providing an anatomically-accurate domain for the simulations. Particle deposition data from a commercial nasal spray product was mapped onto the surface model, and a mucus velocity field was calculated and validated with in vivo nasal clearance rates. A submodel for the dissolution of deposited particles was developed and validated based on comparisons to existing in vitro data for multiple pharmaceutical products. A parametric study was then performed to assess sensitivity of epithelial drug uptake to model conditions and assumptions. The particle displacement distance (depth) in the mucus layer had a modest effect on overall drug absorption, while the mucociliary clearance rate was found to be primarily responsible for drug uptake over the timescale of nasal clearance for the corticosteroid mometasone furoate (MF). The model revealed that drug deposition in the nasal vestibule (NV) could slowly be transported into the main passage (MP) and then absorbed through connection of the liquid layer in the NV and MP regions. As a result, high intersubject variability in cumulative uptake was predicted, depending on the length of time the NV dose was left undisturbed without blowing or wiping the nose. This study has developed, for the first time, a complete CFD model of nasal aerosol delivery from the point of spray formation through absorption at the respiratory epithelial surface. For the development and assessment of nasal aerosol products, this CFD-based in silico model provides a new option to complement existing in vitro nasal cast studies of deposition and in vivo imaging experiments of clearance.
Bachler, Gerald; von Goetz, Natalie; Hungerbuhler, Konrad
2015-05-01
Nano-sized titanium dioxide particles (nano-TiO2) can be found in a large number of foods and consumer products, such as cosmetics and toothpaste; thus, consumer exposure occurs via multiple sources, possibly involving different exposure routes. In order to determine the disposition of nano-TiO2 particles that are taken up, a physiologically based pharmacokinetic (PBPK) model was developed. High priority was placed on limiting the number of parameters to match the number of underlying data points (hence avoiding overparameterization) while still reflecting available mechanistic information on the toxicokinetics of nano-TiO2. To this end, the biodistribution of nano-TiO2 was modeled based on the particles' ability to cross the capillary wall of the organs and to be phagocytosed in the mononuclear phagocyte system (MPS). The model's predictive power was evaluated by comparing simulated organ levels to experimentally assessed organ levels from independent in vivo studies. The results of our PBPK model indicate that: (1) within the application domain of the PBPK model, from 15 to 150 nm, the size and crystalline structure of the particles had a minor influence on the biodistribution; and (2) at high internal exposure the particles agglomerate in vivo and are subsequently taken up by macrophages in the MPS. Furthermore, we also give an example of how the PBPK model may be used for risk assessment. For this purpose, the daily dietary intake of nano-TiO2 was calculated for the German population. The PBPK model was then used to convert this chronic external exposure into internal titanium levels for each organ.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
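A toy illustration (not the authors' analytic model) of why the particle bank must be much larger than the vector width is sketched below: alive particles are processed in vector-width chunks each event-iteration, so partially filled vectors waste lanes as the bank drains. The per-iteration survival probability is an assumption.

```python
# Hedged toy model: average vector-lane utilization while a particle bank drains.
import math

def vector_efficiency(bank_size, vector_width, survive_p=0.9):
    """Average fraction of vector lanes doing useful work over the bank's lifetime."""
    alive = bank_size
    useful = wasted = 0.0
    while alive >= 1:
        n_vectors = math.ceil(alive / vector_width)
        useful += alive
        wasted += n_vectors * vector_width - alive
        alive = int(alive * survive_p)        # expected survivors to the next iteration
    return useful / (useful + wasted)

for bank in (8, 64, 160, 1280):
    print(f"bank {bank:5d}, width 8 -> efficiency {vector_efficiency(bank, 8):.2f}")
```

Even in this crude picture, efficiency climbs toward unity only when the bank is tens of times the vector width, consistent with the trend the abstract reports for the constant-execution-time case.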
Factors controlling the evaporation of secondary organic aerosol from α‐pinene ozonolysis
Pajunoja, Aki; Tikkanen, Olli‐Pekka; Buchholz, Angela; Faiola, Celia; Väisänen, Olli; Hao, Liqing; Kari, Eetu; Peräkylä, Otso; Garmash, Olga; Shiraiwa, Manabu; Ehn, Mikael; Lehtinen, Kari; Virtanen, Annele
2017-01-01
Secondary organic aerosols (SOA) form a major fraction of organic aerosols in the atmosphere. Knowledge of the SOA properties that affect their dynamics in the atmosphere is needed to improve climate models. By combining experimental and modeling techniques, we investigated the factors controlling SOA evaporation under different humidity conditions. Our experiments support the conclusion that particle-phase diffusivity limits evaporation under dry conditions. The viscosity of particles under dry conditions was estimated to increase by several orders of magnitude during evaporation, up to 10⁹ Pa s. However, at atmospherically relevant relative humidities and time scales, our results show that diffusion limitations may have only a minor effect on the evaporation of the studied α‐pinene SOA particles. Based on previous studies and our model simulations, we suggest that, in warm environments dominated by biogenic emissions, the major uncertainty in models describing SOA particle evaporation is related to the volatility of the SOA constituents. PMID:28503004
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), which is a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles within a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations (DNS) of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS) framework. The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
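A simplified Lagrangian mixing sketch in the spirit of the model above is given below: particle scalars within each grid cell relax toward the local particle mean over a mixing timescale. This interaction-by-exchange-with-the-mean style update is an assumption standing in for the actual mixing-volume formulation, and the timescale is arbitrary.

```python
# Hedged sketch: relax particle scalars toward the mean of the particles
# sharing a cell (a stand-in "mixing volume"). Timescale and sizes are assumed.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_cells = 20_000, 50
dt, tau_mix = 1e-3, 0.05                    # time step and mixing timescale (assumed)

cell = rng.integers(0, n_cells, n_particles)       # cell index of each particle
phi = rng.normal(0.0, 1.0, n_particles)            # scalar carried by each particle

for _ in range(100):
    # mean scalar of the particles sharing a cell
    cell_sum = np.bincount(cell, weights=phi, minlength=n_cells)
    cell_cnt = np.bincount(cell, minlength=n_cells)
    cell_mean = cell_sum / np.maximum(cell_cnt, 1)
    # relax each particle toward its cell mean; molecular diffusion acts here
    phi += -(phi - cell_mean[cell]) / tau_mix * dt

print(f"scalar variance after mixing: {phi.var():.3f}")
```

In the actual MVM the relaxation rate follows from matching the subgrid-scale scalar variance equation rather than from a prescribed constant as used here.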
A minimally-resolved immersed boundary model for reaction-diffusion problems
NASA Astrophysics Data System (ADS)
Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar
2013-12-01
We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.
Mesoscale Particle-Based Model of Electrophoretic Deposition
Giera, Brian; Zepeda-Ruiz, Luis A.; Pascall, Andrew J.; ...
2016-12-20
In this paper, we present and evaluate a semiempirical particle-based model of electrophoretic deposition using extensive mesoscale simulations. We analyze particle configurations in order to observe how colloids accumulate at the electrode and arrange into deposits. In agreement with existing continuum models, the thickness of the deposit increases linearly in time during deposition. Resulting colloidal deposits exhibit a transition between highly ordered and bulk disordered regions that can give rise to an appreciable density gradient under certain simulated conditions. The overall volume fraction increases and falls within a narrow range as the driving force due to the electric field increases and repulsive intercolloidal interactions decrease. We postulate ordering and stacking within the initial layer(s) dramatically impacts the microstructure of the deposits. Finally, we find a combination of parameters, i.e., electric field and suspension properties, whose interplay enhances colloidal ordering beyond the commonly known approach of only reducing the driving force.
Interplay between collective and single particle excitations around neutron-rich doubly-magic nuclei
NASA Astrophysics Data System (ADS)
Leoni, S.
2016-05-01
The excitation spectra of nuclei with one or two particles outside a doubly-magic core are expected to be dominated, at low energy, by the couplings between phonon excitations of the core and valence particles. A survey of the experimental situation is given for some nuclei lying in close proximity of neutron-rich doubly-magic systems, such as 47,49Ca, 133Sb and 210Bi. Data are obtained with various types of reactions (multinucleon transfer with heavy ions, cold neutron capture and neutron induced fission of 235U and 241Pu targets), with the employment of complex detection systems based on HPGe arrays. A comparison with theoretical calculations is also presented, in terms of large shell model calculations and of a phenomenological particle-phonon model. In the case of 133Sb, a new microscopic "hybrid" model is introduced: it is based on the coupling between core excitations (both collective and non-collective) of the doubly-magic core and the valence nucleon, using the Skyrme effective interaction in a consistent way.
NASA Technical Reports Server (NTRS)
Mann, G. W.; Carslaw, K. S.; Reddington, C. L.; Pringle, K. J.; Schulz, M.; Asmi, A.; Spracklen, D. V.; Ridley, D. A.; Woodhouse, M. T.; Lee, L. A.;
2014-01-01
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multimodel-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
Skeletal dosimetry models for alpha-particles for use in molecular radiotherapy
NASA Astrophysics Data System (ADS)
Watchman, Christopher J.
Molecular radiotherapy is a cancer treatment methodology whereby a radionuclide is combined with a biologically active molecule to preferentially target cancer cells. Alpha-particle emitting radionuclides show significant potential for use in molecular radiotherapy due to the short range of the alpha-particles in tissue and their high rates of energy deposition. Current radiation dosimetry models used to assess alpha emitter dose in the skeleton were developed originally for occupational applications. In medical dosimetry, individual variability in uptake, translocation and other biological factors can result in poor correlation of clinical outcome with marrow dose estimates determined using existing skeletal models. Methods presented in this work were developed in response to the need for dosimetry models which account for these biological and patient-specific factors. Dosimetry models are presented for trabecular bone alpha particle dosimetry as well as a model for cortical bone dosimetry. These radiation transport models are the 3D chord-based infinite spongiosa transport model (3D-CBIST) and the chord-based infinite cortical transport model (CBICT), respectively. Absorbed fraction data for several skeletal tissues for several subjects are presented. Each modeling strategy accounts for biological parameters, such as bone marrow cellularity, not previously incorporated into alpha-particle skeletal dosimetry models used in radiation protection. Using these data a study investigating the variability in alpha-particle absorbed fractions in the human skeleton is also presented. Data is also offered relating skeletal tissue masses in individual bone sites for a range of ages. These data are necessary for dose calculations and have previously only been available as whole body tissue masses. A revised 3D-CBIST model is also presented which allows for changes in endosteum thickness to account for revised target cell location of tissues involved in the radiological induction of bone cancer. In addition, new data are presented on the location of bone-marrow stem cells within the marrow cavities of trabecular bone of the pelvis. All results presented in this work may be applied to occupational exposures, but their greatest utility lies in dose assessments for alpha-emitters in molecular radiotherapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carruthers, L.M.; Lee, C.E.
1976-10-01
The theoretical and numerical data base development of the LARC-1 code is described. Four analytical models of fission product release from an HTGR core during the loss of forced circulation accident are developed. Effects of diffusion, adsorption and evaporation of the metallics and precursors are neglected in this first LARC model. Comparison of the analytic models indicates that the constant release-renormalized model is adequate to describe the processes involved. The numerical data base for release constants, temperature modeling, fission product release rates, coated fuel particle failure fraction and aged coated fuel particle failure fractions is discussed. Analytic fits and graphic displays for these data are given for the Ft. St. Vrain and GASSAR models.
Guo, Jianping; Lou, Mengyun; Miao, Yucong; Wang, Yuan; Zeng, Zhaoliang; Liu, Huan; He, Jing; Xu, Hui; Wang, Fu; Min, Min; Zhai, Panmao
2017-11-01
East Asia is one of the world's largest sources of dust and anthropogenic pollution. Dust particles originating from East Asia have been recognized to travel across the Pacific to North America and beyond, thereby affecting the radiation incident on the surface as well as clouds aloft in the atmosphere. In this study, integrated analyses are performed focusing on one trans-Pacific dust episode during 12-22 March 2015, based on space-borne and ground-based observations and reanalysis data, combined with the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) and the Weather Research and Forecasting Model coupled with Chemistry (WRF-Chem). From the perspective of synoptic patterns, the location and strength of the Aleutian low pressure system largely determined the eastward transport of dust plumes towards western North America. Multi-sensor satellite observations reveal that dust aerosols in this episode originated from the Taklimakan and Gobi Deserts. Moreover, the satellite observations suggest that the dust particles can be transformed into polluted particles over the East Asian regions after encountering high concentrations of anthropogenic pollutants. In terms of the vertical distribution of polluted dust particles, at the very beginning they were mainly located at altitudes ranging from 1 km to 7 km over the source region, and then ascended to 2-9 km over the Pacific Ocean. The simulations confirm that these elevated dust particles in the lower free troposphere were largely transported along the prevailing westerly jet stream. Overall, observations and modeling demonstrate how a typical springtime dust episode develops and how the dust particles travel over the North Pacific Ocean all the way to North America. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Die, Qingqi; Nie, Zhiqiang; Liu, Feng; Tian, Yajun; Fang, Yanyan; Gao, Hefeng; Tian, Shulei; He, Jie; Huang, Qifei
2015-10-01
Gas and particle phase air samples were collected in summer and winter around industrial sites in Shanghai, China, to allow the concentrations, profiles, and gas-particle partitioning of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and dioxin-like polychlorinated biphenyls (dl-PCBs) to be determined. The total 2,3,7,8-substituted PCDD/F and dl-PCB toxic equivalent (TEQ) concentrations were 14.2-182 fg TEQ/m³ (mean 56.8 fg TEQ/m³) in summer and 21.9-479 fg TEQ/m³ (mean 145 fg TEQ/m³) in winter. The PCDD/Fs tended to be predominantly in the particulate phase, while the dl-PCBs were predominantly found in the gas phase, and the proportions of all of the PCDD/F and dl-PCB congeners in the particle phase increased as the temperature decreased. The logarithms of the gas-particle partition coefficients correlated well with the subcooled liquid vapor pressures of the PCDD/Fs and dl-PCBs for most of the samples. Gas-particle partitioning of the PCDD/Fs deviated from equilibrium in both summer and winter close to local sources, and the Junge-Pankow model and predictions made using a model based on the octanol-air partition coefficient fitted the measured particulate PCDD/F fractions well, indicating that absorption and adsorption mechanisms both contributed to the partitioning process. However, gas-particle equilibrium of the dl-PCBs was reached more easily in winter than in summer. The Junge-Pankow model predictions fitted the dl-PCB data better than did the predictions made using the model based on the octanol-air partition coefficient, indicating that the adsorption mechanism made the dominant contribution to the partitioning process.
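The two partitioning models compared above can be sketched as follows. The Junge-Pankow model predicts the particle-bound fraction from the subcooled liquid vapor pressure and the aerosol surface area, while the octanol-air-based (Harner-Bidleman) model estimates the partition coefficient from Koa and the organic-matter fraction. The parameter values below (c, theta, f_om, TSP) are typical literature assumptions, not the values measured in this study.

```python
# Hedged sketch of the Junge-Pankow and Koa-based gas-particle partitioning models.
import math

def phi_junge_pankow(p_l_pa, theta_cm2_per_cm3=1.1e-5, c_pa_cm=17.2):
    """Particle-bound fraction from the Junge-Pankow adsorption model."""
    return c_pa_cm * theta_cm2_per_cm3 / (p_l_pa + c_pa_cm * theta_cm2_per_cm3)

def phi_koa(log_koa, f_om=0.2, tsp_ug_m3=100.0):
    """Particle-bound fraction from the Koa absorption model (Harner-Bidleman)."""
    log_kp = log_koa + math.log10(f_om) - 11.91     # Kp in m^3/ug
    kp = 10.0 ** log_kp
    return kp * tsp_ug_m3 / (1.0 + kp * tsp_ug_m3)

# Example: a semivolatile congener with p_L ~ 1e-5 Pa and log Koa ~ 10.5 (assumed)
print(f"Junge-Pankow phi = {phi_junge_pankow(1e-5):.2f}")
print(f"Koa-based    phi = {phi_koa(10.5):.2f}")
```

Comparing the two predicted fractions against the measured particulate fraction for each congener is essentially the test the abstract describes for distinguishing adsorption-dominated from absorption-dominated partitioning.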
NASA Astrophysics Data System (ADS)
Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo
2013-08-01
Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control the reaction kinetics, but the relationship between the particle-based model parameter (the interaction radius R) and the continuum model parameter (the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t⁻¹/² (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factors in R and Kf are of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
Brodie, Ian M
2012-01-01
Suspended solids from urban impervious surfaces (SSUIS) is a spreadsheet-based model that predicts the mass loading of suspended solids (SS) in stormwater runoff generated from impervious urban surfaces. The model is intended to be a research tool and incorporates several particle accumulation and washoff processes. Development of SSUIS is based on interpretation of storm event data obtained from a galvanised iron roof, a concrete car park and a bitumen road located in Toowoomba, Australia. SSUIS is a source area model that tracks the particle mass balance on the impervious surface and within its lateral drain to a point of discharge. Particles are separated into two groups: free and detained, depending on the rainfall energy required for surface washoff. Calibration and verification of SSUIS against the Toowoomba SS data yielded R² values ranging from 0.60 to 0.98. Parameter sensitivity analysis and an example of how SSUIS can be applied to predict the treatment efficiency of a grass swale are also provided.
A stylistic classification of Russian-language texts based on the random walk model
NASA Astrophysics Data System (ADS)
Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.
2017-09-01
A formal approach to text analysis is suggested that is based on the random walk model. The frequencies and relative positions of the vowel letters are represented as a quasi-particle migration process. A statistically significant difference in the migration parameters is found between texts of different functional styles, demonstrating that texts can be classified using the suggested method. Five groups of texts are identified that can be distinguished from one another by the parameters of the quasi-particle migration process.
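A hedged sketch of the vowel-as-quasi-particle idea is given below: vowel positions in a text are treated as the trajectory of a migrating particle, and simple statistics of the step lengths serve as migration parameters that could feed a style classifier. The feature choice is an illustrative assumption rather than the authors' exact parameter set, and English vowels are used in place of Russian ones.

```python
# Hedged sketch: extract "migration parameters" from the gaps between vowels.
import numpy as np

VOWELS = set("aeiouyAEIOUY")

def migration_parameters(text):
    """Mean and variance of the gaps between consecutive vowel positions."""
    positions = np.array([i for i, ch in enumerate(text) if ch in VOWELS])
    steps = np.diff(positions)
    return steps.mean(), steps.var()

sample = ("The frequencies and relative positions of the vowel letters "
          "are represented as a quasi-particle migration process.")
mean_step, var_step = migration_parameters(sample)
print(f"mean step {mean_step:.2f}, step variance {var_step:.2f}")
```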
Intra-particle migration of mercury in granular polysulfide-rubber-coated activated carbon (PSR-AC)
Kim, Eun-Ah; Masue-Slowey, Yoko; Fendorf, Scott; Luthy, Richard G.
2011-01-01
The depth profile of mercuric ion after reaction with polysulfide-rubber-coated activated carbon (PSR-AC) was investigated using micro-X-ray fluorescence (μ-XRF) imaging techniques and mathematical modeling. The μ-XRF results revealed that mercury was concentrated within 0-100 μm of the exterior of the particle after three months of treatment with PSR-AC in a 10 ppm HgCl2 aqueous solution. The μ-X-ray absorption near edge spectroscopy (μ-XANES) analyses indicated HgS as the major mercury species and suggested that the intra-particle mercury transport involved a chemical reaction with the PSR polymer. An intra-particle mass transfer model was developed based on either a Langmuir sorption isotherm with liquid phase diffusion (Langmuir model) or a kinetic sorption with surface diffusion (kinetic sorption model). The Langmuir model predicted the general trend of mercury diffusion, although at a slower rate than observed from the μ-XRF map. The kinetic sorption model suggested faster mercury transport, which overestimated the movement of mercuric ions through an exchange reaction between the fast and slow reaction sites. Both the μ-XRF and mathematical modeling results suggest that mercury removal occurs not only at the outer surface of the PSR-AC particle but also in some interior regions, owing to the large PSR surface area within an AC particle. PMID:22133913
ERIC Educational Resources Information Center
Wiener, Gerfried J.; Schmeling, Sascha M.; Hopf, Martin
2015-01-01
This study introduces a teaching concept based on the Standard Model of particle physics. It comprises two consecutive chapters--elementary particles and fundamental interactions. The rationale of this concept is that the fundamental principles of particle physics can run as the golden thread through the whole physics curriculum. The design…
Investigating the settling dynamics of cohesive silt particles with particle-resolving simulations
NASA Astrophysics Data System (ADS)
Sun, Rui; Xiao, Heng; Sun, Honglei
2018-01-01
The settling of cohesive sediment is ubiquitous in aquatic environments, and the study of the settling process is important for both engineering and environmental reasons. In the settling process, silt particles show behaviors that differ from those of non-cohesive particles due to the influence of inter-particle cohesive forces. For instance, the flocs formed in the settling process of cohesive silt can loosen the packing, and thus the structural densities of cohesive silt beds are much smaller than those of non-cohesive sand beds. While there is a consensus that cohesive behaviors depend on the characteristics of the sediment particles (e.g., Bond number, particle size distribution), little is known about the exact influence of these characteristics on the cohesive behaviors. In addition, since the cohesive behaviors of the silt are caused by the inter-particle cohesive forces, the motions of and the contacts among silt particles should be resolved to study these cohesive behaviors in the settling process. However, studies of the cohesive behaviors of silt particles in the settling process based on a particle-resolving approach are still lacking. In the present work, the three-dimensional settling process is investigated numerically by using CFD-DEM (Computational Fluid Dynamics-Discrete Element Method). The inter-particle collision force, the van der Waals force, and the fluid-particle interaction forces are considered. The numerical model is used to simulate the hindered settling process of silt based on the experimental setup in the literature. The results obtained in the simulations, including the structural densities of the beds, the characteristic lines, and the particle terminal velocity, are in good agreement with the experimental observations in the literature. To the authors' knowledge, this is the first time that the influences of the non-dimensional Bond number and particle polydispersity on the structural densities of silt beds have been investigated separately. The results demonstrate that the cohesive behavior of silt in the settling process is attributed to both the cohesion among the silt particles themselves and the particle polydispersity. To guide the macro-scale modeling of cohesive silt sedimentation, the collision frequency functions obtained in the numerical simulations are also presented based on the micromechanics of the particles. The results obtained by using CFD-DEM indicate that the binary collision theory overestimates the particle collision frequency in the flocculation process at high solid volume fraction.
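The non-dimensional Bond number mentioned above is commonly taken as the ratio of the inter-particle cohesive force to the particle weight; a minimal sketch under that assumption, with an illustrative Hamaker constant and minimum separation (not values from the paper), is:

import math

# Granular Bond number: van der Waals force between two equal spheres divided by particle weight.
# The Hamaker constant A and minimum separation z0 are illustrative values only.
def bond_number(radius_m, density_kg_m3, hamaker_J=1.0e-20, z0_m=1.65e-10, g=9.81):
    f_vdw = hamaker_J * (radius_m / 2.0) / (12.0 * z0_m**2)   # sphere-sphere van der Waals force
    weight = (4.0 / 3.0) * math.pi * radius_m**3 * density_kg_m3 * g
    return f_vdw / weight

print(bond_number(radius_m=10e-6, density_kg_m3=2650.0))   # ~O(10^3) for a 20 um silt-like grain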
The effect of model fidelity on prediction of char burnout for single-particle coal combustion
McConnell, Josh; Sutherland, James C.
2016-07-09
Practical simulation of industrial-scale coal combustion relies on the ability to accurately capture the dynamics of coal subprocesses while also ensuring the computational cost remains reasonable. The majority of the residence time occurs post-devolatilization, so it is of great importance that a balance between the computational efficiency and accuracy of char combustion models is carefully considered. In this work, we consider the importance of model fidelity during char combustion by comparing combinations of simple and complex gas- and particle-phase chemistry models. Detailed kinetics based on the GRI 3.0 mechanism and infinitely fast chemistry are considered in the gas phase. The Char Conversion Kinetics model and the nth-order Langmuir–Hinshelwood model are considered for char consumption. For devolatilization, the Chemical Percolation Devolatilization and Kobayashi-Sarofim models are employed. The relative importance of gasification versus oxidation reactions in air and oxyfuel environments is also examined for various coal types. Results are compared to previously published experimental data collected under laminar, single-particle conditions. Calculated particle temperature histories are strongly dependent on the choice of gas-phase and char chemistry models, but only weakly dependent on the chosen devolatilization model. Particle mass calculations were found to be very sensitive to the choice of devolatilization model, but only somewhat sensitive to the choice of gas chemistry and char chemistry models. High-fidelity models for devolatilization generally resulted in particle temperature and mass calculations that were closer to experimentally observed values.
Simulation and analysis of light scattering by multilamellar bodies present in the human eye
Méndez-Aguilar, Emilia M.; Kelly-Pérez, Ismael; Berriel-Valdos, L. R.; Delgado-Atencio, José A.
2017-01-01
A modified computational model of the human eye was used to obtain and compare different probability density functions, radial profiles of light pattern distributions, and images of the point spread function formed in the human retina under the presence of different kinds of particles inside crystalline lenses suffering from cataracts. Specifically, this work uses simple particles without shells and multilamellar bodies (MLBs) with shells. The emergence of such particles alters the formation of images on the retina. Moreover, the MLBs change over time, which affects properties such as the refractive index of their shell. Hence, this work not only simulates the presence of such particles but also evaluates the influence of particle parameters such as particle diameter, particle thickness, and shell refractive index, which are set based on reported experimental values. In addition, two wavelengths (400 nm and 700 nm) are used for light passing through the different layers of the computational model. The effects of these parameters on light scattering are analyzed using the simulation results. Further, in these results, the effects of light scattering on image formation can be seen when single particles, early-stage MLBs, or mature MLBs are incorporated in the model. Finally, it is found that particle diameter has the greatest impact on image formation. PMID:28663924
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-finding behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
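A generic particle swarm optimization step of the kind described above can be sketched as follows; this is not the paper's implementation, and the three-parameter lane model and the quadratic stand-in cost are hypothetical:

import numpy as np

# Generic PSO refinement of lane-model parameters theta = (offset, heading, curvature),
# minimising a user-supplied fitting cost (e.g. distance of the modelled lane to edge pixels).
def pso_refine(cost, dim=3, n_particles=30, iters=100, bounds=(-1.0, 1.0),
               w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Hypothetical quadratic cost standing in for a real lane-fit error.
print(pso_refine(lambda p: float(np.sum((p - 0.3) ** 2))))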
Analytical theory of polymer-network-mediated interaction between colloidal particles
Di Michele, Lorenzo; Zaccone, Alessio; Eiser, Erika
2012-01-01
Nanostructured materials based on colloidal particles embedded in a polymer network are used in a variety of applications ranging from nanocomposite rubbers to organic-inorganic hybrid solar cells. Further, polymer-network-mediated colloidal interactions are highly relevant to biological studies whereby polymer hydrogels are commonly employed to probe the mechanical response of living cells, which can determine their biological function in physiological environments. The performance of nanomaterials crucially relies upon the spatial organization of the colloidal particles within the polymer network that depends, in turn, on the effective interactions between the particles in the medium. Existing models based on nonlocal equilibrium thermodynamics fail to clarify the nature of these interactions, precluding the way toward the rational design of polymer-composite materials. In this article, we present a predictive analytical theory of these interactions based on a coarse-grained model for polymer networks. We apply the theory to the case of colloids partially embedded in cross-linked polymer substrates and clarify the origin of attractive interactions recently observed experimentally. Monte Carlo simulation results that quantitatively confirm the theoretical predictions are also presented. PMID:22679289
Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer
Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.
2012-01-01
Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030
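A minimal sketch of the rate expression summarized above: the nucleation rate is written as the hard-sphere self-collision rate of sulfuric acid vapor multiplied by a prefactor below unity. The effective molecular diameter and the prefactor value are illustrative assumptions, not fitted quantities from the study.

import math

KB = 1.380649e-23          # Boltzmann constant, J/K
AMU = 1.66053907e-27       # kg

# Nucleation rate as the hard-sphere self-collision rate of H2SO4 vapor times a prefactor < 1.
def nucleation_rate(n_cm3, prefactor=0.1, T=298.0, d_m=5.5e-10, mass_amu=98.0):
    n = n_cm3 * 1e6                                    # molecules per m^3
    mu = 0.5 * mass_amu * AMU                          # reduced mass of identical molecules
    v_rel = math.sqrt(8.0 * KB * T / (math.pi * mu))   # mean relative speed, m/s
    z = 0.5 * n**2 * math.pi * d_m**2 * v_rel          # collisions per m^3 per s
    return prefactor * z * 1e-6                        # report in cm^-3 s^-1

print(nucleation_rate(n_cm3=1e7))                      # typical boundary-layer [H2SO4], illustrative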
Numerical modeling of sorption kinetics of organic compounds to soil and sediment particles
NASA Astrophysics Data System (ADS)
Wu, Shian-chee; Gschwend, Phillip M.
1988-08-01
A numerical model is developed to simulate hydrophobic organic compound sorption kinetics, based on a retarded intraaggregate diffusion conceptualization of this solid-water exchange process. This model was used to ascertain the sensitivity of the sorption process for various sorbates to nonsteady solution concentrations and to polydisperse soil or sediment aggregate particle size distributions. Common approaches to modeling sorption kinetics amount to simplifications of our model and appear justified only when (1) the concentration fluctuations occur on a time scale which matches the sorption timescale of interest and (2) the particle size distribution is relatively narrow. Finally, a means is provided to estimate the extent of approach of a sorbing system to equilibrium as a function of aggregate size, chemical diffusivity and hydrophobicity, and system solids concentration.
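A minimal sketch of retarded intra-aggregate diffusion of the kind conceptualized above, solving radial diffusion into a spherical aggregate with an explicit finite-difference scheme; the diffusivity, retardation factor, aggregate radius and grid are illustrative values, not the paper's calibrated parameters.

import numpy as np

# Retarded diffusion of a sorbing solute into a spherical aggregate whose surface is held
# at the bulk concentration; D_eff is the aqueous diffusivity divided by the retardation factor.
def uptake_fraction(D=1e-9, retardation=100.0, radius=1e-4, n_r=50, t_end=3600.0):
    D_eff = D / retardation
    r = np.linspace(0.0, radius, n_r)
    dr = r[1] - r[0]
    dt = 0.1 * dr**2 / D_eff                      # stable explicit time step
    c = np.zeros(n_r); c[-1] = 1.0                # dimensionless concentration, surface at bulk value
    for _ in range(int(t_end / dt)):
        lap = np.zeros(n_r)
        lap[1:-1] = (c[2:] - 2*c[1:-1] + c[:-2]) / dr**2 \
                    + (2.0 / r[1:-1]) * (c[2:] - c[:-2]) / (2*dr)
        lap[0] = 6.0 * (c[1] - c[0]) / dr**2      # symmetry condition at the centre
        c[:-1] += D_eff * dt * lap[:-1]           # surface node stays fixed
    return float(np.sum(c * r**2) / np.sum(r**2))  # volume-weighted approach to equilibrium

print(uptake_fraction())                           # fraction of equilibrium reached after 1 h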
NASA Astrophysics Data System (ADS)
Carlsson, Philip T. M.; Zeuch, Thomas
2018-03-01
We have developed a new model utilizing our existing kinetic gas-phase models to simulate experimental particle size distributions emerging in dry supersaturated H2SO4 vapor homogeneously produced by rapid oxidation of SO2 through stabilized Criegee intermediates from 2-butene ozonolysis. We use a sectional method for simulating the particle dynamics. The particle treatment in the model is based on first principles and takes into account the transition from the kinetic to the diffusion-limited regime. It captures the temporal evolution of size distributions at the end of the ozonolysis experiment well, with a slight underrepresentation of coagulation effects for larger particle sizes. The model correctly predicts the shape and the modes of the experimentally observed particle size distributions. The predicted modes show an extremely high sensitivity to the H2SO4 evaporation rates of the initially formed H2SO4 clusters (dimer to pentamer), which were arbitrarily restricted to decrease exponentially with increasing cluster size. In future, the analysis presented in this work can be extended to allow a direct validation of quantum chemically predicted stabilities of small H2SO4 clusters, which are believed to initiate a significant fraction of atmospheric new particle formation events. We discuss the prospects and possible limitations of the approach presented here.
New methods to detect particle velocity and mass flux in arc-heated ablation/erosion facilities
NASA Technical Reports Server (NTRS)
Brayton, D. B.; Bomar, B. W.; Seibel, B. L.; Elrod, P. D.
1980-01-01
Arc-heated flow facilities with injected particles are used to simulate the erosive and ablative/erosive environments encountered by spacecraft re-entry through fog, clouds, thermo-nuclear explosions, etc. Two newly developed particle diagnostic techniques used to calibrate these facilities are discussed. One technique measures particle velocity and is based on the detection of thermal radiation and/or chemiluminescence from the hot seed particles in a model ablation/erosion facility. The second technique measures a local particle rate, which is proportional to local particle mass flux, in a dust erosion facility by photodetecting and counting the interruptions of a focused laser beam by individual particles.
Paig-Tran, E W Misty; Bizzarro, Joseph J; Strother, James A; Summers, Adam P
2011-05-15
We created physical models based on the morphology of ram suspension-feeding fishes to better understand the roles morphology and swimming speed play in particle retention, size selectivity and filtration efficiency during feeding events. We varied the buccal length, flow speed and architecture of the gill slits, including the number, size, orientation and pore size/permeability, in our models. Models were placed in a recirculating flow tank with slightly negatively buoyant plankton-like particles (~20-2000 μm), which were collected at the simulated esophagus and gill rakers to locate the highest density of particle accumulation. Particles were captured through sieve filtration, direct interception and inertial impaction. Changing the number of gill slits resulted in a change in the filtration mechanism of particles from a bimodal filter, with very small (≤50 μm) and very large (>1000 μm) particles collected, to a filter that captured medium-sized particles (101-1000 μm). The number of particles collected on the gill rakers increased with flow speed and skewed the size distribution towards smaller particles (51-500 μm). Small pore sizes (105 and 200 μm mesh size) had the highest filtration efficiencies, presumably because sieve filtration played a significant role. We used our model to make predictions about the filtering capacity and efficiency of neonatal whale sharks. These results suggest that the filtration mechanics of suspension feeding are closely linked to an animal's swimming speed and the structural design of the buccal cavity and gill slits.
Freezing Transition Studies Through Constrained Cell Model Simulation
NASA Astrophysics Data System (ADS)
Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.
2014-10-01
In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle in a single cell. This model is a special case of a more general cell model which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
Brownian motion of arbitrarily shaped particles in two dimensions.
Chakrabarty, Ayan; Konya, Andrew; Wang, Feng; Selinger, Jonathan V; Sun, Kai; Wei, Qi-Huo
2014-11-25
We implement microfabricated boomerang particles with unequal arm lengths as a model for nonsymmetric particles and study their Brownian motion in a quasi-two-dimensional geometry by using high-precision single-particle motion tracking. We show that because of the coupling between translation and rotation, the mean squared displacements of a single asymmetric boomerang particle exhibit a nonlinear crossover from short-time faster to long-time slower diffusion, and the mean displacements for fixed initial orientation are nonzero and saturate out at long times. The measured anisotropic diffusion coefficients versus the tracking point position indicate that there exists one unique point, i.e., the center of hydrodynamic stress (CoH), at which all coupled diffusion coefficients vanish. This implies that in contrast to motion in three dimensions where the CoH exists only for high-symmetry particles, the CoH always exists for Brownian motion in two dimensions. We develop an analytical model based on Langevin theory to explain the experimental results and show that among the six anisotropic diffusion coefficients only five are independent because the translation-translation coupling originates from the translation-rotation coupling. Finally, we classify the behavior of two-dimensional Brownian motion of arbitrarily shaped particles into four groups based on the particle shape symmetry group and discussed potential applications of the CoH in simplifying understanding of the circular motions of microswimmers.
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques, in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.
2015-01-01
Purpose: Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increasing particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods: Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). A D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion: Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281
A Comparison of Filter-based Approaches for Model-based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Saha, Bhaskar; Goebel, Kai
2012-01-01
Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution of the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
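For reference, the particle filter mentioned above can be reduced to a few lines in its bootstrap (sequential importance resampling) form; the scalar drift and measurement model below is a hypothetical stand-in, not the pump model used in the paper.

import numpy as np

rng = np.random.default_rng(1)

# Bootstrap particle filter for a hypothetical scalar state with slow drift and noisy measurements.
def particle_filter(ys, n_particles=500, q=0.05, r=0.2):
    particles = rng.normal(0.0, 1.0, n_particles)                        # initial state samples
    estimates = []
    for y in ys:
        particles = particles + 0.1 + rng.normal(0.0, q, n_particles)    # propagate: drift + process noise
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)                     # measurement likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)                   # multinomial resampling
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_x = 0.1 * np.arange(1, 51)
ys = true_x + rng.normal(0.0, 0.2, 50)
print(particle_filter(ys)[-5:])    # tracked state estimates near the true trajectory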
NASA Astrophysics Data System (ADS)
Yunxiao, CAO; Zhiqiang, WANG; Jinjun, WANG; Guofeng, LI
2018-05-01
Electrostatic separation has been extensively used in mineral processing and has the potential to separate gangue minerals from raw talcum ore. In electrostatic separation, the particle charging status is one of the important influencing factors. To accurately describe the charging status of talcum particles in a parallel-plate electrostatic separator, this paper proposes a modern image-processing method. Based on the actual trajectories obtained from sequence images of particle movement and an analysis of the physical forces applied to a charged particle, a numerical model is built which can calculate the charge-to-mass ratio, representing the charging status of a particle, and simulate the particle trajectories. The simulated trajectories agree well with the experimental results obtained by image processing. In addition, chemical composition analysis is employed to reveal the relationship between iron-bearing gangue mineral content and charge-to-mass ratio. Research results show that the proposed method is effective for describing the particle charging status in electrostatic separation.
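A minimal sketch of the force balance used in such trajectory models: given an assumed charge-to-mass ratio, a particle between parallel plates is advanced under the electric force, gravity and Stokes drag. The field strength, particle diameter and density, and air viscosity below are illustrative values, not those of the paper.

import numpy as np

# Trajectory of a charged talcum-like particle between vertical parallel plates:
# horizontal electric force q*E, vertical gravity, and Stokes drag on both components.
def trajectory(q_over_m, E=2.0e5, d_p=50e-6, rho_p=2700.0, mu_air=1.8e-5,
               g=9.81, dt=1e-4, t_end=0.2):
    m = rho_p * np.pi * d_p**3 / 6.0
    tau = m / (3.0 * np.pi * mu_air * d_p)          # Stokes relaxation time
    x = np.zeros(2); v = np.zeros(2)                # (horizontal, vertical)
    path = [x.copy()]
    for _ in range(int(t_end / dt)):
        a = np.array([q_over_m * E, -g]) - v / tau  # acceleration: electric + gravity + drag
        v = v + a * dt
        x = x + v * dt
        path.append(x.copy())
    return np.array(path)

path = trajectory(q_over_m=1.0e-4)                  # C/kg, an assumed charge-to-mass ratio
print(path[-1])                                     # final horizontal deflection and vertical drop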
Electrohydrodynamic interactions in Quincke rotation: from pair dynamics to collective motion
NASA Astrophysics Data System (ADS)
Das, Debasish; Saintillan, David
2013-11-01
Weakly conducting dielectric particles suspended in a dielectric liquid can undergo spontaneous sustained rotation when placed in a sufficiently strong dc electric field. This phenomenon of Quincke rotation has interesting implications for the rheology of these suspensions, whose effective viscosity can be reduced by application of an external field. While previous models based on the rotation of isolated particles have provided accurate estimates for this viscosity reduction in dilute suspensions, discrepancies have been reported in more concentrated systems where particle-particle interactions are likely significant. Motivated by this observation, we extend the classic description of Quincke rotation based on the Taylor-Melcher leaky dielectric model to account for pair electrohydrodynamic interactions between identical spheres using the method of reflections. We also consider the case of spherical particles undergoing Quincke rotation next to a planar electrode, where hydrodynamic interactions with the no-slip boundary lead to a self-propelled velocity. The interactions between such Quincke rollers are analyzed, and a transition to collective motion is predicted in sufficiently dense collections of many rollers, in agreement with recent experiments.
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis
The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas; explicit vs. implicit PIC; 1D electrostatic implicit PIC (charge and energy conservation, moment-based acceleration); and generalization to multi-D electromagnetic PIC with the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties for energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.
Gardiner, Bruce S.; Wong, Kelvin K. L.; Joldes, Grand R.; Rich, Addison J.; Tan, Chin Wee; Burgess, Antony W.; Smith, David W.
2015-01-01
This paper presents a framework for modelling biological tissues based on discrete particles. Cell components (e.g. cell membranes, cell cytoskeleton, cell nucleus) and extracellular matrix (e.g. collagen) are represented using collections of particles. Simple particle to particle interaction laws are used to simulate and control complex physical interaction types (e.g. cell-cell adhesion via cadherins, integrin basement membrane attachment, cytoskeletal mechanical properties). Particles may be given the capacity to change their properties and behaviours in response to changes in the cellular microenvironment (e.g., in response to cell-cell signalling or mechanical loadings). Each particle is in effect an ‘agent’, meaning that the agent can sense local environmental information and respond according to pre-determined or stochastic events. The behaviour of the proposed framework is exemplified through several biological problems of ongoing interest. These examples illustrate how the modelling framework allows enormous flexibility for representing the mechanical behaviour of different tissues, and we argue this is a more intuitive approach than perhaps offered by traditional continuum methods. Because of this flexibility, we believe the discrete modelling framework provides an avenue for biologists and bioengineers to explore the behaviour of tissue systems in a computational laboratory. PMID:26452000
Scattering and radiative properties of complex soot and soot-containing particles
NASA Astrophysics Data System (ADS)
Liu, L.; Mishchenko, M. I.; Mackowski, D. W.; Dlugach, J.
2012-12-01
Tropospheric soot and soot-containing aerosols often exhibit nonspherical overall shapes and complex morphologies. They can be externally, semi-externally, and internally mixed with other aerosol species. This poses a tremendous challenge in particle characterization, remote sensing, and global climate modeling studies. To address these challenges, we used the new numerically exact public-domain Fortran-90 code based on the superposition T-matrix method (STMM) and other theoretical models to analyze the potential effects of aggregation and heterogeneity on light scattering and absorption by morphologically complex soot-containing particles. The parameters we computed include the full set of scattering matrix elements, linear depolarization ratios, optical cross sections, asymmetry parameters, and single-scattering albedos. It is shown that the optical characteristics of soot and soot-containing aerosols depend strongly on particle size, composition, and overall aerosol shape. The soot particle configurations and heterogeneities can have a substantial effect that can result in a significant enhancement of extinction and absorption relative to those computed from the Lorenz-Mie theory. Meanwhile, the model-calculated information combined with in situ and remotely sensed data can be used to constrain soot particle shapes and sizes, which are much needed in climate models.
NASA Astrophysics Data System (ADS)
Boger, A. A.; Ryazhskikh, V. I.; Slyusarev, M. I.
2012-01-01
Based on diffusion concepts for the transport of slightly concentrated polydisperse suspensions in the gravity field, we propose a mathematical model for the kinetics of deposition of such suspensions in a plane layer of a homogeneously mixed medium, through whose free surface Stokesian particles enter according to a rectangular pulse law.
Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in FLASH
NASA Technical Reports Server (NTRS)
Dubey, A.; Daley, C.; ZuHone, J.; Ricker, P. M.; Weide, K.; Graziani, C.
2012-01-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
Dust environment of an airless object: A phase space study with kinetic models
NASA Astrophysics Data System (ADS)
Kallio, E.; Dyadechkin, S.; Fatemi, S.; Holmström, M.; Futaana, Y.; Wurz, P.; Fernandes, V. A.; Álvarez, F.; Heilimo, J.; Jarvinen, R.; Schmidt, W.; Harri, A.-M.; Barabash, S.; Mäkelä, J.; Porjo, N.; Alho, M.
2016-01-01
The study of dust above the lunar surface is important for both science and technology. Dust particles are electrically charged due to the impact of solar radiation and the solar wind plasma and, therefore, they affect the plasma above the lunar surface. Dust is also a health hazard for crewed missions because micron- and sub-micron-sized dust particles can be toxic and harmful to the human body. Dust also causes malfunctions in mechanical devices and is therefore a risk for spacecraft and instruments on the lunar surface. Properties of dust particles above the lunar surface are not fully known. However, their large surface-area-to-volume ratio due to their irregular shape, the broken chemical bonds on the surface of each dust particle, and the reduced lunar environment together cause the dust particles to be chemically very reactive. One critical unknown factor is the electric field and the electric potential near the lunar surface. We have developed a modelling suite, Dusty Plasma Environments: near-surface characterisation and Modelling (DPEM), to study, globally and locally, the dust environments of the Moon and other airless bodies. The DPEM model combines three independent kinetic models: (1) a 3D hybrid model, where ions are modelled as particles and electrons are modelled as a charged neutralising fluid, (2) a 2D electrostatic Particle-in-Cell (PIC) model where both ions and electrons are treated as particles, and (3) a 3D Monte Carlo (MC) model where dust particles are modelled as test particles. The three models are linked to each other unidirectionally; the hybrid model provides upstream plasma parameters to be used as boundary conditions for the PIC model, which generates the surface potential for the MC model. We have used the DPEM model to study properties of dust particles injected from the surface of airless objects such as the Moon, the Martian moon Phobos and the asteroid RQ36. We have performed a (v0, m/q)-phase space study where the properties of dust particles at different initial velocities (v0) and initial mass-per-charge ratios (m/q) were analysed. The study especially identifies regions in the phase space where the electric field within a non-quasineutral plasma region above the surface of the object, the Debye layer, becomes important compared with the gravitational force. Properties of the dust particles in the phase space region where the electric field plays an important role are studied by a 3D Monte Carlo model. The current DPEM modelling suite does not include models of how dust particles are initially injected from the surface. Therefore, the presented phase space study cannot give absolute 3D dust density distributions around the analysed airless objects. For that, an additional emission model is necessary, which determines how many dust particles are emitted at various places in the analysed (v0, m/q) phase space. However, this study identifies phase space regions where the electric field within the Debye layer plays an important role for dust particles. Overall, the initial results indicate that when a realistic dust emission model is available, the unified lunar-based DPEM modelling suite is a powerful tool to study, globally and locally, the dust environments of airless bodies such as planetary moons, Mercury, asteroids and non-active comets far from the Sun.
Samson, Shazwani; Basri, Mahiran; Fard Masoumi, Hamid Reza; Abdul Malek, Emilia; Abedi Karjiban, Roghayeh
2016-01-01
A predictive model of a virgin coconut oil (VCO) nanoemulsion system for the topical delivery of copper peptide (an anti-aging compound) was developed using an artificial neural network (ANN) to investigate the factors that influence particle size. Four independent variables, namely the amounts of VCO, Tween 80:Pluronic F68 (T80:PF68), xanthan gum and water, were the inputs, whereas particle size was taken as the response for the trained network. Genetic algorithms (GA) were used to model the data, which were divided into training, testing and validation sets. The model obtained indicated the high-quality performance of the neural network and its capability to identify the critical composition factors for the VCO nanoemulsion. The main factor controlling the particle size was found to be xanthan gum (28.56%), followed by T80:PF68 (26.9%), VCO (22.8%) and water (21.74%). The formulation containing copper peptide was then successfully prepared using optimum conditions, and particle sizes of 120.7 nm were obtained. The final formulation exhibited a zeta potential lower than -25 mV and showed good physical stability towards the centrifugation test, the freeze-thaw cycle test and storage at 25°C and 45°C. PMID:27383135
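As a rough illustration of the ANN mapping described above (four composition inputs to one droplet-size output), the sketch below fits a small feed-forward network to synthetic data with an assumed trend; it uses ordinary backpropagation training in place of the genetic-algorithm modeling used in the study, and none of the numbers are the study's measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic composition data: columns are normalised VCO, T80:PF68, xanthan gum, water fractions.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, (200, 4))
# Assumed (illustrative) trend: size decreases with each factor, xanthan gum strongest.
y = 150 - 60*X[:, 2] - 40*X[:, 1] - 30*X[:, 0] - 20*X[:, 3] + rng.normal(0, 3, 200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[0.2, 0.3, 0.4, 0.1]]))   # predicted particle size, arbitrary units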
Comparison of continuum and particle simulations of expanding rarefied flows
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest E., III; Boyd, Iain D.; Venkatapathy, Ethiraj
1993-01-01
Comparisons of Navier-Stokes solutions and particle simulations for a simple two-dimensional model problem at a succession of altitudes are performed in order to assess the importance of rarefaction effects on the base flow region. In addition, an attempt is made to include 'Burnett-type' extensions to the Navier-Stokes constitutive relations. The model geometry consists of a simple blunted wedge with a 0.425 meter nose radius, a 70 deg cone half angle, a 1.7 meter base length, and a rounded shoulder. The working gas is monatomic with a molecular weight and viscosity similar to air and was chosen to focus the study on the continuum and particle methodologies rather than the implementation of thermo-chemical modeling. Three cases are investigated, all at Mach 29, with densities corresponding to altitudes of 92 km, 99 km, and 105 km. At the lowest altitude, Navier-Stokes solutions agree well with particle simulations. At the higher altitudes, the Navier-Stokes equations become less accurate. In particular, the Navier-Stokes equations and particle method predict substantially different flow turning angle in the wake near the after body. Attempts to achieve steady continuum solutions including 'Burnett-type' terms failed. Further research is required to determine whether the boundary conditions, the equations themselves, or other unknown causes led to this failure.
Oxidation and particle deposition modeling in plasma spraying of Ti-6Al-4V/SiC fiber composites
NASA Astrophysics Data System (ADS)
Cochelin, E.; Borit, F.; Frot, G.; Jeandin, M.; Decker, L.; Jeulin, D.; Taweel, B. Al; Michaud, V.; Noël, P.
1999-03-01
Plasma spraying is known to be a promising process for the manufacturing of Ti/SiC long-fiber composites. However, several improvements are still required before the process can be applied on an industrial scale. These include: limiting oxygen contamination of the sprayed material through that of the titanium particles before and during spraying, avoiding damage to fibers due to the high level of thermal stresses induced at the spraying stage, and achieving adequate deposition of titanium-base powder to obtain a low-porosity matrix and good impregnation of the fiber array. This article presents a threefold study of the process. Oxidation was studied using electron microprobe analysis of elementary particles quenched and trapped in a closed box at various flight distances. Oxygen diffusion phenomena within the particles are discussed from a preliminary theoretical approach coupled with experimental data. Isothermal and thermomechanical calculations were made using the ABAQUS code to determine the stresses arising from the contact of a liquid Ti-6Al-4V particle with a SiC fiber. On the scale of the sprayed powder flow, a new type of two-dimensional model simulating the deposition of droplets onto a substrate was developed. This model is based on a lattice-gas automaton that reproduces the hydrodynamic behavior of fluids.
Xi, Jinxiang; Kim, Jongwon; Si, Xiuhua A; Zhou, Yue
2013-01-01
The deposition of hygroscopic aerosols is highly complex in nature, resulting from the cumulative effect of dynamic particle growth and real-time size-specific deposition mechanisms. The objective of this study is to evaluate hygroscopic effects on the particle growth, transport, and deposition of nasally inhaled aerosols across a range of 0.2-2.5 μm in an adult image-based nose-throat model. Temperature and relative humidity fields were simulated using the LRN k-ω turbulence model and a species transport model under a spectrum of thermo-humidity conditions. Particle growth and transport were simulated using a well-validated Lagrangian tracking model coupled with a user-defined hygroscopic growth module. Results of this study indicate that the saturation level and initial particle size are the two major factors that determine the particle growth rate (d/d0), while the effect of inhalation flow rate is found to be insignificant. An empirical correlation for the condensation growth of nasally inhaled hygroscopic aerosols in adults has been developed based on a variety of thermo-humidity inhalation conditions. Significantly elevated nasal deposition of hygroscopic aerosols can be induced by condensation growth for both sub-micrometer and small micrometer particulates. In particular, the deposition of initially 2.5 μm hygroscopic aerosols was observed to be 5-8 times that of inert particles under warm to hot saturated conditions. Results of this study have important implications for exposure assessment in hot humid environments, where much higher risks may be expected compared to normal conditions.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao
2013-07-01
This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
A Particle Model Explaining Mass and Relativity in a Physical Way
NASA Astrophysics Data System (ADS)
Giese, Albrecht
Physicists' understanding of relativity and the way it is handled has up to the present day been dominated by the interpretation of Albert Einstein, who related relativity to specific properties of space and time. The principal alternative to Einstein's interpretation is based on a concept proposed by Hendrik A. Lorentz, which uses knowledge of classical physics alone to explain relativistic phenomena. In this paper, we will show that on the one hand the Lorentz-based interpretation provides a simpler mathematical way of arriving at the known results for both Special and General Relativity. On the other hand, it is able to solve problems which have remained open to this day. Furthermore, a particle model will be presented, based on Lorentzian relativity and the quantum mechanical concept of Louis de Broglie, which explains the origin of mass without the use of the Higgs mechanism. It is based on the finiteness of the speed of light and provides classical results for particle properties which are currently only accessible through quantum mechanics.
A new MHD/kinetic model for exploring energetic particle production in macro-scale systems
NASA Astrophysics Data System (ADS)
Drake, J. F.; Swisdak, M.; Dahlin, J. T.
2017-12-01
A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and Betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and therefore can be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.
Keeping speed and distance for aligned motion
NASA Astrophysics Data System (ADS)
Farkas, Illés J.; Kun, Jeromos; Jin, Yi; He, Gaoqi; Xu, Mingliang
2015-01-01
The cohesive collective motion (flocking, swarming) of autonomous agents is ubiquitously observed and exploited in both natural and man-made settings, thus, minimal models for its description are essential. In a model with continuous space and time we find that if two particles arrive symmetrically in a plane at a large angle, then (i) radial repulsion and (ii) linear self-propelling toward a fixed preferred speed are sufficient for them to depart at a smaller angle. For this local gain of momentum explicit velocity alignment is not necessary, nor are adhesion or attraction, inelasticity or anisotropy of the particles, or nonlinear drag. With many particles obeying these microscopic rules of motion we find that their spatial confinement to a square with periodic boundaries (which is an indirect form of attraction) leads to stable macroscopic ordering. As a function of the strength of added noise we see—at finite system sizes—a critical slowing down close to the order-disorder boundary and a discontinuous transition. After varying the density of particles at constant system size and varying the size of the system with constant particle density we predict that in the infinite system size (or density) limit the hysteresis loop disappears and the transition becomes continuous. We note that animals, humans, drones, etc., tend to move asynchronously and are often more responsive to motion than positions. Thus, for them velocity-based continuous models can provide higher precision than coordinate-based models. An additional characteristic and realistic feature of the model is that convergence to the ordered state is fastest at a finite density, which is in contrast to models applying (discontinuous) explicit velocity alignments and discretized time. To summarize, we find that the investigated model can provide a minimal description of flocking.
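A minimal sketch of the two microscopic rules highlighted above, namely radial repulsion and linear relaxation of the speed toward a preferred value, for self-propelled particles in a periodic square box; all coefficients, particle numbers and the noise amplitude are illustrative choices rather than the parameter values studied in the paper.

import numpy as np

rng = np.random.default_rng(0)

# (i) radial repulsion between nearby particles, (ii) linear self-propulsion toward speed v0.
N, L, v0, dt, steps = 100, 10.0, 0.5, 0.05, 500
k_rep, r_cut, alpha, noise = 1.0, 0.5, 1.0, 0.05

pos = rng.uniform(0, L, (N, 2))
vel = rng.normal(0, v0, (N, 2))

for _ in range(steps):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                          # minimum-image convention (periodic box)
    r = np.linalg.norm(d, axis=-1) + np.eye(N)        # avoid division by zero on the diagonal
    mask = (r < r_cut) & ~np.eye(N, dtype=bool)
    f_rep = k_rep * np.sum(np.where(mask[..., None], d / r[..., None]**2, 0.0), axis=1)
    speed = np.linalg.norm(vel, axis=1, keepdims=True) + 1e-12
    f_sp = alpha * (v0 - speed) * vel / speed         # relax speed toward the preferred value v0
    vel += (f_rep + f_sp) * dt + noise * rng.normal(size=vel.shape) * np.sqrt(dt)
    pos = (pos + vel * dt) % L

order = np.linalg.norm(vel.mean(axis=0)) / (np.linalg.norm(vel, axis=1).mean() + 1e-12)
print(f"polar order parameter ~ {order:.2f}")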
Prediction of slope stability based on numerical modeling of stress–strain state of rocks
NASA Astrophysics Data System (ADS)
Kozhogulov, K. Ch.; Nifadyev, V. I.; Usmanov, S. F.
2018-03-01
The paper presents the developed technique for the estimation of rock mass stability based on the finite element modeling of stress–strain state of rocks. The modeling results on the pit wall landslide as a flow of particles along a sloped surface are described.
Modeling of Abrasion and Crushing of Unbound Granular Materials During Compaction
NASA Astrophysics Data System (ADS)
Ocampo, Manuel S.; Caicedo, Bernardo
2009-06-01
Unbound compacted granular materials are commonly used in engineering structures as layers in road pavements, railroad beds, highway embankments, and foundations. These structures are generally subjected to dynamic loading by construction operations, traffic and wheel loads. These repeated or cyclic loads cause abrasion and crushing of the granular materials. Abrasion changes a particle's shape, and crushing divides the particle into a mixture of many small particles of varying sizes. Particle breakage is important because the mechanical and hydraulic properties of these materials depend upon their grain size distribution. Therefore, it is important to evaluate the evolution of the grain size distribution of these materials. In this paper an analytical model for unbound granular materials is proposed in order to evaluate particle crushing of gravels and soils subjected to cyclic loads. The model is based on a Markov chain which describes the development of grading changes in the material as a function of stress levels. In the model proposed, each particle size is a state in the system, and the evolution of the material is the movement of particles from one state to another in n steps. Each step is a load cycle, and movement between states is possible with a transition probability. The crushing of particles depends on the mechanical properties of each grain and the packing density of the granular material. The transition probability was calculated using both the survival probability defined by Weibull and the compressible packing model developed by De Larrard. Material mechanical properties are considered using the Weibull probability theory. The size and shape of the grains, as well as the method of processing the packing density are considered using De Larrard's model. Results of the proposed analytical model show a good agreement with the experimental tests carried out using the gyratory compaction test.
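Because the abstract describes the grading evolution as a Markov chain in which each particle size is a state and each load cycle is one step, a toy version is easy to write down. The transition matrix below is purely hypothetical; in the paper the transition probabilities are built from Weibull survival statistics and De Larrard's compressible packing model.

```python
import numpy as np

# Grading evolution as a Markov chain: each size class is a state, one load
# cycle is one step, and P[i, j] is the probability that material in class i
# ends up in class j after the cycle.  These numbers are illustrative only.
P = np.array([
    [0.90, 0.07, 0.03],   # coarse -> coarse / medium / fine
    [0.00, 0.95, 0.05],   # medium -> medium / fine
    [0.00, 0.00, 1.00],   # fine is an absorbing state (no further crushing)
])

grading = np.array([0.6, 0.3, 0.1])   # initial mass fractions per size class
for cycle in range(1000):             # n load cycles = n Markov steps
    grading = grading @ P
print(grading)                        # grading after 1000 cycles
```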
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Na; Zhang, Peng; Kang, Wei
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum based models fail to handle adequately.
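For readers unfamiliar with the Morse potential mentioned above, a minimal pair-force sketch is given below. The parameter values are placeholders; the paper uses a modified, effective-mass-scaled form whose parameters are fitted to blood-plasma flow properties.

```python
import numpy as np

def morse_force(r, D_e=1.0, a=2.0, r0=1.0):
    """Radial pair force from a Morse potential
       U(r) = D_e * (1 - exp(-a*(r - r0)))**2,   F(r) = -dU/dr.
    Positive values are repulsive, negative attractive.  D_e, a and r0 are
    placeholder values, not the calibrated coefficients of the paper."""
    e = np.exp(-a * (r - r0))
    return -2.0 * D_e * a * e * (1.0 - e)

# quick check: zero force at the equilibrium distance r0, repulsive below it
print(morse_force(1.0), morse_force(0.9), morse_force(1.5))
```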
Minimum requirements for predictive pore-network modeling of solute transport in micromodels
NASA Astrophysics Data System (ADS)
Mehmani, Yashar; Tchelepi, Hamdi A.
2017-10-01
Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.
NASA Astrophysics Data System (ADS)
Tian, Jian
With the recently-developed particle-resolved model PartMC-MOSAIC, the mixing state and other physico-chemical properties of individual aerosol particles can be tracked as the particles undergo aerosol aging processes. However, existing PartMC-MOSAIC applications have mainly been based on idealized scenarios, and a link to real atmospheric measurement has not yet been established. In this thesis, we extend the capability of PartMC-MOSAIC and apply the model framework to three distinct scenarios with different environmental conditions to investigate the physical and chemical aging of aerosols in those environments. The first study is to investigate the evolution of particle mixing state and cloud condensation nuclei (CCN) activation properties in a ship plume. Comparisons of our results with observations from the QUANTIFY Study in 2007 in the English channel and the Gulf of Biscay showed that the model was able to reproduce the observed evolution of total number concentration and the vanishing of the nucleation mode consisting of sulfate particles. Further process analysis revealed that during the first hour after emission, dilution reduced the total number concentration by four orders of magnitude, while coagulation reduced it by an additional order of magnitude. Neglecting coagulation resulted in an overprediction of more than one order of magnitude in the number concentration of particles smaller than 40 nm at a plume age of 100 s. Coagulation also significantly altered the mixing state of the particles, leading to a continuum of internal mixtures of sulfate and black carbon. The impact of condensation on CCN concentrations depended on the supersaturation threshold at which CCN activity was evaluated. Nucleation was observed to have a limited impact on the CCN concentration in the ship plume we studied, but was sensitive to formation rates of secondary aerosol. For the second study we adapted PartMC to represent the aerosol evolution in an aerosol chamber, with the intention to use the model as a tool to interpret and guide chamber experiments in the future. We added chamber-specific processes to our model formulation such as wall loss due to particle diffusion and sedimentation, and dilution effects due to sampling. We also implemented a treatment of fractal particles to account for the morphology of agglomerates and its impact on aerosol dynamics. We verified the model with published results of self-similar size distributions, and validated the model using experimental data from an aerosol chamber. To this end we developed a fitting optimization approach to determine the best-estimate values for the wall loss parameters based on minimizing the l2-norm of the model errors of the number distribution. Obtaining the best fit required taking into account the non-spherical structure of the particle agglomerates. Our third study focuses on the implementation of volatility basis set (VBS) framework in PartMC-MOSAIC to investigate the chemical aging of organic aerosols in the atmosphere. The updated PartMC-MOSAIC model framework was used to simulate the evolution of aerosols in air trajectories initialized from CARES field campaign conducted in California in June 2010. The simulation results were compared with aircraft measurement data during the campaign. PartMC-MOSAIC was able to produce gas and aerosol concentrations at similar levels compared to the observation data. Moreover, the simulation with VBS enabled produced consistently more secondary organic aerosols (SOA). 
The investigation of particle mixing state revealed that the impact of VBS framework on particle mixing state is sensitive to the daylight exposure time.
Turbulent Combustion in SDF Explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A L; Bell, J B; Beckner, V E
2009-11-12
A heterogeneous continuum model is proposed to describe the dispersion and combustion of an aluminum particle cloud in an explosion. It combines the gas-dynamic conservation laws for the gas phase with a continuum model for the dispersed phase, as formulated by Nigmatulin. Inter-phase mass, momentum and energy exchange are prescribed by phenomenological models. It incorporates a combustion model based on the mass conservation laws for fuel, air and products; source/sink terms are treated in the fast-chemistry limit appropriate for such gasdynamic fields, along with a model for mass transfer from the particle phase to the gas. The model takes into account both the afterburning of the detonation products of the C-4 booster with air, and the combustion of the Al particles with air. The model equations were integrated by high-order Godunov schemes for both the gas and particle phases. Numerical simulations of the explosion fields from a 1.5-g Shock-Dispersed-Fuel (SDF) charge in a 6.6 liter calorimeter were used to validate the combustion model. Then the model was applied to 10-kg Al-SDF explosions in an unconfined height-of-burst configuration. Computed pressure histories are compared with measured waveforms. Differences are caused by physical-chemical kinetic effects of particle combustion which induce ignition delays in the initial reactive blast wave and quenching of reactions at late times. Current simulations give initial insights into such modeling issues.
Multiscale modeling of interfacial flow in particle-solidification front dynamics
NASA Astrophysics Data System (ADS)
Garvin, Justin
2005-11-01
Particle-solidification front interactions are important in many applications, such as metal-matrix composite manufacture, frost heaving in soils and cryopreservation. The typical length scales of the particles and the solidification fronts are of the order of microns. However, the force of interaction between the particle and the front typically arises when the gap between them is of the order of tens of nanometers. Thus, a multiscale approach is necessary to analyze particle-front interactions. Solving the Navier-Stokes equations to simulate the dynamics by including the nano-scale gap between the particle and the front would be impossible. Therefore, the microscale dynamics is solved using a level-set based Eulerian technique, while an embedded model is developed for solution in the nano-scale (but continuum) gap region. The embedded model takes the form of a lubrication equation with disjoining pressure acting as a body force and is coupled to the outer solution. A particle is pushed by the front when the disjoining pressure is balanced by the viscous drag. The results obtained show that this balance can only occur when the thermal conductivity ratio of the particle to the melt is less than 1.0. The velocity of the front at which the particle pushing/engulfment transition occurs is predicted. In addition, this novel method allows for an in-depth analysis of the flow physics that cause particle pushing/engulfment.
A jellium model of a catalyst particle in carbon nanotube growth
NASA Astrophysics Data System (ADS)
Artyukhov, Vasilii I.; Liu, Mingjie; Penev, Evgeni S.; Yakobson, Boris I.
2017-06-01
We show how a jellium model can represent a catalyst particle within the density-functional theory based approaches to the growth mechanism of carbon nanotubes (CNTs). The advantage of jellium is an abridged, less computationally taxing description of the multi-atom metal particle, while at the same time avoiding the uncertainty of selecting a particular atomic geometry of either a solid or ever-changing liquid catalyst particle. A careful choice of jellium sphere size and its electron density as a descriptive parameter allows one to calculate the CNT-metal interface energies close to explicit full atomistic models. Further, we show that using jellium permits computing and comparing the formation of topological defects (sole pentagons or heptagons, the culprits of growth termination) as well as pentagon-heptagon pairs 5|7 (known as chirality-switching dislocations).
Particle acceleration at shocks in the inner heliosphere
NASA Astrophysics Data System (ADS)
Parker, Linda Neergaard
This dissertation describes a study of particle acceleration at shocks via the diffusive shock acceleration mechanism. Results for particle acceleration at both quasi-parallel and quasi-perpendicular shocks are presented to address the question of whether there are sufficient particles in the solar wind thermal core, modeled as either a Maxwellian or kappa distribution, to account for the observed accelerated spectrum. Results of accelerating the theoretical upstream distribution are compared to energetic observations at 1 AU. It is shown that the particle distribution in the solar wind thermal core is sufficient to explain the accelerated particle spectrum downstream of the shock, although the shape of the downstream distribution in some cases does not follow completely the theory of diffusive shock acceleration, indicating possible additional processes at work in the shock for these cases. Results show good to excellent agreement between the theoretical and observed spectral index for one third to one half of both quasi-parallel and quasi-perpendicular shocks studied herein. Coronal mass ejections occurring during periods of high solar activity surrounding solar maximum can produce 3-8 shocks per day or more. During solar minimum, diffusive shock acceleration at shocks can generally be understood on the basis of single independent shocks, and no other shock necessarily influences the diffusive shock acceleration mechanism. In this sense, diffusive shock acceleration during solar minimum may be regarded as Markovian. By contrast, particles undergoing diffusive shock acceleration during periods of high solar activity (e.g. solar maximum) see frequent, closely spaced shocks, so that acceleration at any one shock includes the effects of particle acceleration at preceding and following shocks. Therefore, diffusive shock acceleration of particles at solar maximum cannot be modeled on the basis of a single, independent shock, and the process is essentially non-Markovian. A multiple shock model is developed, based in part on the box model of Protheroe and Stanev (1998), Moraal and Axford (1983), Ball and Kirk (1992), and Drury et al. (1999), that accelerates particles at multiple shocks and decompresses the particles between shocks via two methods. The first method of decompression is based on that used by Melrose and Pope (1993), which adiabatically decompresses particles between shocks. The second method solves the cosmic ray transport equation and adiabatically decompresses between shocks while including the loss of particles through convection and diffusion. The transport method allows for the inclusion of temporal variability and thus allows for a more representative frequency distribution of shocks. The transport method of decompression and loss is used to accelerate particles at seventy-three shocks in a thirty-day time period. Comparisons with observations taken at 1 AU during the same time period are encouraging, as the model is able to reproduce the observed amplitude of the accelerated particles and, in part, the variability. This work provides the basis for developing more sophisticated models that can be applied to a suite of observations.
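A minimal test-particle sketch of the accelerate-then-decompress cycle described above is given below, assuming the standard diffusive shock acceleration result (spectral index q = 3r/(r-1) for compression ratio r) and a Melrose-and-Pope-style adiabatic decompression factor of r^(-1/3) between shocks. The momentum grid, seed spectrum, compression ratio and the omission of convection and diffusion losses are all simplifications relative to the transport-equation method used in the dissertation.

```python
import numpy as np

def accelerate(p, f, r=4.0):
    """Test-particle DSA at one shock of compression ratio r:
    f_out(p) = q * p**(-q) * integral_{p_min}^{p} f_in(p') p'**(q-1) dp',
    with q = 3r/(r-1)."""
    q = 3.0 * r / (r - 1.0)
    integrand = f * p**(q - 1.0)
    cumulative = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))))
    return q * p**(-q) * cumulative

def decompress(p, f, r=4.0):
    """Adiabatic decompression between shocks: every momentum is reduced by
    the factor r**(-1/3).  Normalization/Jacobian factors are omitted in this
    sketch; losses by convection and diffusion are ignored."""
    s = r**(-1.0 / 3.0)
    return np.interp(p / s, p, f, right=0.0)

# a handful of shocks for illustration (the dissertation uses seventy-three)
p = np.logspace(-2, 3, 400)            # momentum grid, arbitrary units
f = np.exp(-p**2)                      # thermal-like seed spectrum
for _ in range(10):
    f = decompress(p, accelerate(p, f, r=4.0), r=4.0)
```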
Size resolved ultrafine particles emission model--a continuous size distribution approach.
Nikolova, Irina; Janssen, Stijn; Vrancken, Karl; Vos, Peter; Mishra, Vinit; Berghmans, Patrick
2011-08-15
A new parameterization for size resolved ultrafine particle (UFP) traffic emissions is proposed based on the results of the PARTICULATES project (Samaras et al., 2005). It includes the emission factors from the Emission Inventory Guidebook (2006) (total number of particles, #/km/veh), the shape of the corresponding particle size distribution given in PARTICULATES and data for the traffic activity. The output of the model UFPEM (UltraFine Particle Emission Model) is a sum of continuous distributions of ultrafine particle emissions per vehicle type (passenger cars and heavy duty vehicles), fuel (petrol and diesel) and average speed representative of urban, rural and highway driving. The results from the parameterization are compared with the measured total number of ultrafine particles and size distributions in a tunnel in Antwerp (Belgium). The measured UFP concentration over the entire campaign shows a close relation to the traffic activity. The modelled concentration is found to be lower than the measured one. The average emission factor from the measurements is 4.29E+14 #/km/veh, whereas the calculated value is around 30% lower. A comparison of emission factors with the literature shows overall good agreement. For the size distributions it is found that the measured distributions consist of three modes--nucleation, Aitken and accumulation--and that most of the ultrafine particles belong to the nucleation and Aitken modes. The modelled Aitken mode (peak around 0.04-0.05 μm) is in good agreement with the measurements, both in the amplitude of the peak and in the number of particles, whereas the modelled nucleation mode is shifted to smaller diameters and its peak is much lower than the observed one. Time scale analysis shows that at 300 m in the tunnel coagulation and deposition are slow and therefore neglected. The UFPEM emission model can be used as a source term in dispersion models. Copyright © 2011 Elsevier B.V. All rights reserved.
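The structure of such an emission parameterization, a sum of lognormal modes scaled by an emission factor and the traffic activity, can be sketched as follows. The mode parameters below are placeholders and are not the PARTICULATES fits used by UFPEM.

```python
import numpy as np

def lognormal_mode(d, n_total, d_mode, sigma_g):
    """Number size distribution dN/dlogD of one lognormal mode.
    n_total is the mode's emission factor (e.g. #/km/veh), d_mode the
    geometric mean diameter in micrometres, sigma_g the geometric std."""
    return (n_total / (np.sqrt(2.0 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(d) - np.log10(d_mode))**2
                     / (2.0 * np.log10(sigma_g)**2)))

d = np.logspace(-3, 0, 200)          # diameters, 1 nm .. 1 um
# Illustrative nucleation + Aitken modes for one vehicle class; the real model
# sums such modes over vehicle type, fuel and average speed, scaled by the
# Emission Inventory Guidebook emission factors and the traffic activity.
dNdlogD = (lognormal_mode(d, 3.0e14, 0.01, 1.8)     # nucleation mode
           + lognormal_mode(d, 1.3e14, 0.05, 1.9))  # Aitken mode
```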
Cederwall, R T; Peterson, K R
1990-11-01
A three-dimensional atmospheric transport and diffusion model is used to calculate the arrival and deposition of fallout from 13 selected nuclear tests at the Nevada Test Site (NTS) in the 1950s. Results are used to extend NTS fallout patterns to intermediate downwind distances (300 to 1200 km). The radioactive cloud is represented in the model by a population of Lagrangian marker particles, with concentrations calculated on an Eulerian grid. Use of marker particles, with fall velocities dependent on particle size, provides a realistic simulation of fallout as the debris cloud travels downwind. The three-dimensional wind field is derived from observed data, adjusted for mass consistency. Terrain is represented in the grid, which extends up to 1200 km downwind of NTS and has 32-km horizontal resolution and 1-km vertical resolution. Ground deposition is calculated by a deposition-velocity approach. Source terms and relationships between deposition and exposure rate are based on work by Hicks. Uncertainty in particle size and vertical distributions within the debris cloud (and stem) allows for some model "tuning" to better match measured ground-deposition values. Particle trajectories representing different sizes and starting heights above ground zero are used to guide source specification. An hourly time history of the modeled fallout pattern as the debris cloud moves downwind provides estimates of fallout arrival times. Results for event HARRY illustrate the methodology. The composite deposition pattern for all 13 tests is characterized by two lobes extending out to the north-northeast and east-northeast, respectively, at intermediate distances from NTS. Arrival estimates, along with modeled deposition values, augment measured deposition data in the development of data bases at the county level; these data bases are used for estimating radiation exposure at intermediate distances downwind of NTS. Results from a study of event TRINITY are also presented.
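The deposition-velocity approach mentioned above amounts to multiplying the near-surface air concentration by a particle-size-dependent deposition velocity. The sketch below uses a single illustrative velocity and time step, which are assumptions rather than values from the study.

```python
# Deposition-velocity approach for ground deposition: the flux to the surface
# is the near-surface air concentration times a deposition velocity, so the
# deposit accumulated over one time step is D = C(z ~ 0) * v_d * dt.
def ground_deposition(conc_surface, v_d, dt):
    """Deposited activity per unit area over one time step."""
    return conc_surface * v_d * dt

deposit = ground_deposition(conc_surface=2.0e3,  # Bq/m^3 in the lowest grid layer (assumed)
                            v_d=0.05,            # m/s; size-dependent in the real model
                            dt=3600.0)           # s (hourly time history)
```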
NASA Astrophysics Data System (ADS)
dell'Erba, Ramiro
2018-04-01
In a previous work, we considered a two-dimensional lattice of particles and calculated its time evolution by using an interaction law based on the spatial position of the particles themselves. The model reproduced the behaviour of deformable bodies both according to the standard Cauchy model and second gradient theory; this success led us to use this method in more complex cases. This work is intended as the natural evolution of the previous one, in which we consider both energy aspects and coherence with the principle of Saint Venant, and we begin to develop a more general tool that can be adapted to different physical phenomena, supporting complex effects such as lateral contraction, anisotropy or elastoplasticity.
Advanced Wide-Field Interferometric Microscopy for Nanoparticle Sensing and Characterization
NASA Astrophysics Data System (ADS)
Avci, Oguzhan
Nanoparticles have a key role in today's biotechnological research owing to the rapid advancement of nanotechnology. While metallic, polymer, and semiconductor based artificial nanoparticles are widely used as labels or targeted drug delivery agents, labeled and label-free detection of natural nanoparticles promises new ways for viral diagnostics and therapeutic applications. The increasing impact of nanoparticles in bio- and nano-technology necessitates the development of advanced tools for their accurate detection and characterization. Optical microscopy techniques have been an essential part of research for visualizing micron-scale particles. However, when it comes to the visualization of individual nano-scale particles, they have shown inadequate success due to resolution and visibility limitations. Interferometric microscopy techniques have gained significant attention for providing means to overcome the nanoparticle visibility issue that is often the limiting factor in imaging techniques based solely on the scattered light. In this dissertation, we develop a rigorous physical model to simulate the single nanoparticle optical response in a common-path wide-field interferometric microscopy (WIM) system. While the fundamental elements of the model can be used to analyze nanoparticle response in any generic wide-field imaging system, we focus on imaging with a layered substrate (common-path interferometer) where specular reflection of the illumination provides the reference light for interferometry. A robust physical model is essential in realizing the full potential of an optical system, and throughout this dissertation, we make use of it to benchmark our experimental findings, investigate the utility of various optical configurations, reconstruct weakly scattering nanoparticle images, and characterize and discriminate interferometric nanoparticle responses. This study investigates the integration of advanced optical schemes in WIM with two main goals in mind: (i) increasing the visibility of low-index nanoscale particles via pupil function engineering, pushing the limit of sensitivity; (ii) improving the resolution of sub-diffraction-limited, low-index particle images in WIM via reconstruction strategies for shape and orientation information. We successfully demonstrate an overall ten-fold improvement in the visibility of low-index sub-wavelength nanoparticles as well as up to two-fold extended spatial resolution of the interference-enhanced nanoparticle images. We also systematically examine the key factors that determine the signal in WIM. These factors include the particle type, size, layered substrate design, defocus and nanoparticle polarizability. We use the physical model to demonstrate how these factors determine the signal levels, demonstrate how the layered substrate can be designed to optimize the overall signal, and show that a defocus scan can be used to maximize the signal and that its signature can be utilized for particle discrimination purposes for both dielectric particles and resonant metallic particles. We introduce a machine learning based particle characterization algorithm that relies on supervised learning from the model. The particle characterization is limited to discrimination based on nanosphere size and type in the scope of this dissertation.
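The visibility advantage of interferometric detection referred to above comes from the cross term between the reference and scattered fields. The sketch below is the textbook common-path expression, not the dissertation's full vectorial model; the field amplitudes and phase are placeholders.

```python
import numpy as np

def wim_intensity(E_ref, E_sca, phase):
    """Detected intensity in a common-path interferometric scheme:
    I = |E_ref|**2 + |E_sca|**2 + 2*|E_ref|*|E_sca|*cos(phase).
    For weak scatterers (|E_sca| << |E_ref|) the cross term dominates the
    particle contrast, which scales with the polarizability (~d**3) rather
    than the scattered power (~d**6)."""
    return (np.abs(E_ref)**2 + np.abs(E_sca)**2
            + 2.0 * np.abs(E_ref) * np.abs(E_sca) * np.cos(phase))

# normalized contrast of a weak scatterer relative to the reference background
contrast = (wim_intensity(1.0, 1e-3, 0.0) - 1.0) / 1.0
```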
Entropy-based separation of yeast cells using a microfluidic system of conjoined spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kai-Jian; Qin, S.-J., E-mail: shuijie.qin@gmail.com; Bai, Zhong-Chen
2013-11-21
A physical model is derived to create a biological cell separator that is based on controlling the entropy in a microfluidic system having conjoined spherical structures. A one-dimensional simplified model of this three-dimensional problem in terms of the corresponding effects of entropy on the Brownian motion of particles is presented. This dynamic mechanism is based on the Langevin equation from statistical thermodynamics and takes advantage of the characteristics of the Fokker-Planck equation. This mechanism can be applied to manipulate biological particles inside a microfluidic system with identical, conjoined, spherical compartments. This theoretical analysis is verified by performing a rapid and a simple technique for separating yeast cells in these conjoined, spherical microfluidic structures. The experimental results basically match with our theoretical model and we further analyze the parameters which can be used to control this separation mechanism. Both numerical simulations and experimental results show that the motion of the particles depends on the geometrical boundary conditions of the microfluidic system and the initial concentration of the diffusing material. This theoretical model can be implemented in future biophysics devices for the optimized design of passive cell sorters.
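A one-dimensional reduction of this kind can be sketched with an overdamped Langevin equation in which the varying channel cross-section enters as an entropic potential, A(x) = -kT ln w(x) (a Fick-Jacobs-style simplification). The channel profile, parameters and symmetric geometry below are assumptions for illustration; separation in the actual device depends on the geometry and concentration effects discussed in the abstract.

```python
import numpy as np

kT, gamma, dt = 1.0, 1.0, 1e-3           # thermal energy, drag, time step (reduced units)
L_period = 1.0                           # period of the conjoined-sphere channel (assumed)

def width(x):
    # illustrative periodic cross-section of conjoined spherical compartments
    return 0.2 + 0.8 * np.abs(np.sin(np.pi * x / L_period))

def entropic_force(x, h=1e-5):
    # A(x) = -kT * ln(width(x));  F = -dA/dx, evaluated by central differences
    A = lambda s: -kT * np.log(width(s))
    return -(A(x + h) - A(x - h)) / (2.0 * h)

x = np.zeros(1000)                       # ensemble of particles, all starting at x = 0
for _ in range(20000):                   # overdamped Langevin (Euler-Maruyama)
    x += (dt * entropic_force(x) / gamma
          + np.sqrt(2.0 * kT * dt / gamma) * np.random.randn(x.size))
```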
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
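A toy version of the constant-event-time picture can be coded directly: with a bank of surviving particles processed in SIMD chunks of a given vector width at each event-iteration, only partially filled chunks waste lanes, so efficiency grows with the ratio of bank size to vector width. This is an assumption-laden reconstruction for intuition only, not the efficiency model of the paper.

```python
import numpy as np

def vector_efficiency(bank_size, vector_width, mean_events=50,
                      rng=np.random.default_rng(0)):
    """Toy estimate of vector efficiency for an event-based history loop.
    Each particle needs a geometrically distributed number of events; every
    event-iteration processes the surviving particles in chunks of
    `vector_width`, so partially filled chunks (and a draining bank) waste
    SIMD lanes.  Illustrative only."""
    events_left = rng.geometric(1.0 / mean_events, size=bank_size)
    useful = events_left.sum()
    passes = 0
    while (alive := int((events_left > 0).sum())) > 0:
        passes += int(np.ceil(alive / vector_width))
        events_left[events_left > 0] -= 1
    return useful / (passes * vector_width)

for n in (160, 640, 2560):       # bank sizes relative to an assumed width-32 vector unit
    print(n, round(vector_efficiency(n, 32), 3))
```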
A Computational Fluid Dynamic Model for a Novel Flash Ironmaking Process
NASA Astrophysics Data System (ADS)
Perez-Fontes, Silvia E.; Sohn, Hong Yong; Olivas-Martinez, Miguel
A computational fluid dynamic model for a novel flash ironmaking process based on the direct gaseous reduction of iron oxide concentrates is presented. The model solves the three-dimensional governing equations including both gas-phase and gas-solid reaction kinetics. The turbulence-chemistry interaction in the gas-phase is modeled by the eddy dissipation concept incorporating chemical kinetics. The particle cloud model is used to track the particle phase in a Lagrangian framework. A nucleation and growth kinetics rate expression is adopted to calculate the reduction rate of magnetite concentrate particles. Benchmark experiments reported in the literature for a nonreacting swirling gas jet and a nonpremixed hydrogen jet flame were simulated for validation. The model predictions showed good agreement with measurements in terms of gas velocity, gas temperature and species concentrations. The relevance of the computational model for the analysis of a bench reactor operation and the design of an industrial-pilot plant is discussed.
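As an illustration of what a nucleation-and-growth kinetics rate expression looks like, a generic Avrami-type conversion law is sketched below. The exponent, rate constant and the absence of any temperature dependence are placeholders; the expression actually adopted for magnetite concentrate reduction in the paper will differ.

```python
import numpy as np

def avrami_conversion(t, k, n):
    """Generic nucleation-and-growth (Avrami) conversion:
    X(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t)**n)

def avrami_rate(t, k, n):
    """dX/dt, the form a CFD particle source term would consume."""
    X = avrami_conversion(t, k, n)
    return n * k * (k * t)**(n - 1.0) * (1.0 - X)

t = np.linspace(1e-3, 2.0, 200)         # particle residence time, s (illustrative)
X = avrami_conversion(t, k=3.0, n=1.5)  # k and n are placeholder kinetic parameters
```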
Neural Networks for Modeling and Control of Particle Accelerators
NASA Astrophysics Data System (ADS)
Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.
2016-04-01
Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
Impact of nongray multiphase radiation in pulverized coal combustion
NASA Astrophysics Data System (ADS)
Roy, Somesh; Wu, Bifen; Modest, Michael; Zhao, Xinyu
2016-11-01
Detailed modeling of radiation is important for accurate modeling of pulverized coal combustion. Because of high temperature and optical properties, radiative heat transfer from coal particles often dominates over convective heat transfer. In this work a multiphase photon Monte Carlo radiation solver is used to investigate and to quantify the effect of nongray radiation in a laboratory-scale pulverized coal flame. The nongray radiative properties of the carrier phase (gas) are modeled using the HITEMP database. Three major species - CO, CO2, and H2O - are treated as participating gases. Two optical models are used to evaluate radiative properties of coal particles: a formulation based on the large particle limit and a size-dependent correlation. The effect of scattering by coal particles is also investigated, using both isotropic scattering and anisotropic scattering with a Henyey-Greenstein phase function. Lastly, since the optical properties of ash are very different from those of coal, the effect of ash content on the radiative properties of coal particles is examined. This work used Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number ACI-1053575.
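The Henyey-Greenstein function mentioned above has a standard analytic inversion that a photon Monte Carlo solver can use to sample scattering angles; a sketch follows, with an assumed asymmetry parameter.

```python
import numpy as np

def sample_hg_cos_theta(g, rng=np.random.default_rng()):
    """Sample the cosine of the scattering angle from the Henyey-Greenstein
    phase function with asymmetry parameter g (g = 0 recovers isotropic
    scattering), using the standard analytic inversion of its CDF."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

# forward-peaked scattering typical of large particles (g value assumed)
mu = [sample_hg_cos_theta(0.8) for _ in range(5)]
```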
Calibration of micromechanical parameters for DEM simulations by using the particle filter
NASA Astrophysics Data System (ADS)
Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki
2017-06-01
The calibration of DEM models is typically accomplished by trial and error. However, the procedure lacks objectivity and has several uncertainties. To deal with these issues, the particle filter is employed as a novel approach to calibrate DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as `particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in `stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated, until convergence is reached. The resulting `particles' with higher weights are identified as the calibration results. The evolutions of the weighted averages and posterior probability distribution of the micro-parameters are plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter with known probabilities of reproducing the experimental response.
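The core of the particle-filter update is short enough to sketch: each `particle' (a micro-parameter set) is weighted by the likelihood of the measured response given its DEM prediction, and the ensemble is resampled when the weights degenerate. The Gaussian likelihood, the resampling criterion and the placeholder arrays below are assumptions, not the exact scheme of the paper.

```python
import numpy as np

def update_weights(weights, predictions, observation, sigma):
    """Multiply each particle's weight by a Gaussian likelihood of the current
    experimental observation given that particle's DEM prediction, then
    renormalize.  `predictions` come from running the DEM forward model with
    each particle's micro-parameter set (not shown here)."""
    likelihood = np.exp(-0.5 * ((predictions - observation) / sigma)**2)
    weights = weights * likelihood
    return weights / weights.sum()

def resample_if_degenerate(params, weights, rng=np.random.default_rng(0)):
    """Multinomial resampling when the effective sample size collapses, which
    keeps the ensemble concentrated on high-weight parameter regions."""
    n_eff = 1.0 / np.sum(weights**2)
    if n_eff < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        params = params[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return params, weights
```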
Effect of surface roughness on substrate-tuned gold nanoparticle gap plasmon resonances.
Lumdee, Chatdanai; Yun, Binfeng; Kik, Pieter G
2015-03-07
The effect of nanoscale surface roughness on the gap plasmon resonance of gold nanoparticles on thermally evaporated gold films is investigated experimentally and numerically. Single-particle scattering spectra obtained from 80 nm diameter gold particles on a gold film show significant particle-to-particle variation of the peak scattering wavelength of ±28 nm. The experimental results are compared with numerical simulations of gold nanoparticles positioned on representative rough gold surfaces, modeled based on atomic force microscopy measurements. The predicted spectral variation and average resonance wavelength show good agreement with the measured data. The study shows that nanometer scale surface roughness can significantly affect the performance of gap plasmon-based devices.
A hybrid method with deviational particles for spatial inhomogeneous plasma
NASA Astrophysics Data System (ADS)
Yan, Bokai
2016-03-01
In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, could be both positive and negative. We combine the Monte Carlo method proposed in [31], a Particle in Cell method and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and significantly more efficient compared to a PIC-DSMC method near the fluid regime.
Model-Based Fatigue Prognosis of Fiber-Reinforced Laminates Exhibiting Concurrent Damage Mechanisms
NASA Technical Reports Server (NTRS)
Corbetta, M.; Sbarufatti, C.; Saxena, A.; Giglio, M.; Goebel, K.
2016-01-01
Prognostics of large composite structures is a topic of increasing interest in the field of structural health monitoring for aerospace, civil, and mechanical systems. Along with recent advancements in real-time structural health data acquisition and processing for damage detection and characterization, model-based stochastic methods for life prediction are showing promising results in the literature. Among various model-based approaches, particle-filtering algorithms are particularly capable in coping with uncertainties associated with the process. These include uncertainties about information on the damage extent and the inherent uncertainties of the damage propagation process. Some efforts have shown successful applications of particle filtering-based frameworks for predicting the matrix crack evolution and structural stiffness degradation caused by repetitive fatigue loads. Effects of other damage modes such as delamination, however, are not incorporated in these works. It is well established that delamination and matrix cracks not only co-exist in most laminate structures during the fatigue degradation process but also affect each other's progression. Furthermore, delamination significantly alters the stress-state in the laminates and accelerates the material degradation leading to catastrophic failure. Therefore, the work presented herein proposes a particle filtering-based framework for predicting a structure's remaining useful life with consideration of multiple co-existing damage-mechanisms. The framework uses an energy-based model from the composite modeling literature. The multiple damage-mode model has been shown to suitably estimate the energy release rate of cross-ply laminates as affected by matrix cracks and delamination modes. The model is also able to estimate the reduction in stiffness of the damaged laminate. This information is then used in the algorithms for life prediction capabilities. First, a brief summary of the energy-based damage model is provided. Then, the paper describes how the model is embedded within the prognostic framework and how the prognostics performance is assessed using observations from run-to-failure experiments
NASA Astrophysics Data System (ADS)
Roth, Steven; Oakes, Jessica; Shadden, Shawn
2015-11-01
Particle deposition in the human lungs can occur with every breath. Airborne particles can range from toxic constituents (e.g. tobacco smoke and air pollution) to aerosolized particles designed for drug treatment (e.g. insulin to treat diabetes). The effect of various realistic airway geometries on complex flow structures, and thus particle deposition sites, has yet to be extensively investigated using computational fluid dynamics (CFD). In this work, we created an image-based geometric airway model of the human lung and performed CFD simulations by employing multi-domain methods. Following the flow simulations, Lagrangian particle tracking was used to study the effect of cross-sectional shape on deposition sites in the conducting airways. From a single human lung model, the cross-sectional ellipticity (the ratio of major and minor diameters) of the left and right main bronchi was varied systematically from 2:1 to 1:1. The influence of the airway ellipticity on the surrounding flow field and particle deposition was determined.
Laminar flow effects in the coil planet centrifuge
NASA Technical Reports Server (NTRS)
Herrmann, F. T.
1984-01-01
The coil planet centrifuge designed by Ito employs flow of a single liquid phase, through a rotating coiled tube in a centrifugal force field, to provide a separation of particles based on sedimentation rates. Mathematical solutions are derived for the linear differential equations governing particle behavior in the coil planet centrifuge device. These solutions are then applied as the basis of a model for optimizing particle separations.
Yates, Christian A; Flegg, Mark B
2015-05-06
Spatial reaction-diffusion models have been employed to describe many emergent phenomena in biological systems. The modelling technique most commonly adopted in the literature implements systems of partial differential equations (PDEs), which assumes there are sufficient densities of particles that a continuum approximation is valid. However, owing to recent advances in computational power, the simulation and therefore postulation, of computationally intensive individual-based models has become a popular way to investigate the effects of noise in reaction-diffusion systems in which regions of low copy numbers exist. The specific stochastic models with which we shall be concerned in this manuscript are referred to as 'compartment-based' or 'on-lattice'. These models are characterized by a discretization of the computational domain into a grid/lattice of 'compartments'. Within each compartment, particles are assumed to be well mixed and are permitted to react with other particles within their compartment or to transfer between neighbouring compartments. Stochastic models provide accuracy, but at the cost of significant computational resources. For models that have regions of both low and high concentrations, it is often desirable, for reasons of efficiency, to employ coupled multi-scale modelling paradigms. In this work, we develop two hybrid algorithms in which a PDE in one region of the domain is coupled to a compartment-based model in the other. Rather than attempting to balance average fluxes, our algorithms answer a more fundamental question: 'how are individual particles transported between the vastly different model descriptions?' First, we present an algorithm derived by carefully redefining the continuous PDE concentration as a probability distribution. While this first algorithm shows very strong convergence to analytical solutions of test problems, it can be cumbersome to simulate. Our second algorithm is a simplified and more efficient implementation of the first, it is derived in the continuum limit over the PDE region alone. We test our hybrid methods for functionality and accuracy in a variety of different scenarios by comparing the averaged simulations with analytical solutions of PDEs for mean concentrations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Modeling particle nucleation and growth over northern California during the 2010 CARES campaign
NASA Astrophysics Data System (ADS)
Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.
2015-07-01
Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecast coupled with chemistry (WRF-Chem) regional model using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4 while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated by up to a factor of 2.5 the total particle number concentration for particle diameters greater than 10 nm. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapors parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. Differences among the three simulations for the 40-100 nm particle diameter range are mostly associated with the timing of the peak total tendencies that shift the morning increase and afternoon decrease in particle number concentration by up to two hours. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ∼ 36 %.
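The empirical nucleation parameterizations compared in this study can be summarized schematically as simple power laws in the precursor concentrations. The functional forms below follow the commonly used activation-type and kinetic-type H2SO4 expressions plus an organics-assisted variant; the rate coefficients are placeholders rather than the values used in the WRF-Chem runs.

```python
def j_activation(h2so4, A=2.0e-6):
    """Activation-type nucleation rate, J = A * [H2SO4]  (cm^-3 s^-1)."""
    return A * h2so4

def j_kinetic(h2so4, K=1.0e-12):
    """Kinetic-type nucleation rate, J = K * [H2SO4]**2."""
    return K * h2so4**2

def j_h2so4_org(h2so4, org, k1=1.0e-13, k2=1.0e-13):
    """Schematic combined H2SO4 / low-volatility-organic form,
    J = k1*[H2SO4]**2 + k2*[H2SO4]*[Org]; coefficients are illustrative."""
    return k1 * h2so4**2 + k2 * h2so4 * org

# example precursor concentrations in molecules/cm^3 (assumed values)
rates = (j_activation(5e6), j_kinetic(5e6), j_h2so4_org(5e6, 1e7))
```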
Ups and Downs in the Ocean: Effects of Biofouling on Vertical Transport of Microplastics.
Kooi, Merel; Nes, Egbert H van; Scheffer, Marten; Koelmans, Albert A
2017-07-18
Recent studies suggest size-selective removal of small plastic particles from the ocean surface, an observation that remains unexplained. We studied one of the hypotheses regarding this size-selective removal: the formation of a biofilm on the microplastics (biofouling). We developed the first theoretical model that is capable of simulating the effect of biofouling on the fate of microplastic. The model is based on settling, biofilm growth, and ocean depth profiles for light, water density, temperature, salinity, and viscosity. Using realistic parameters, the model simulates the vertical transport of small microplastic particles over time, and predicts that the particles either float, sink to the ocean floor, or oscillate vertically, depending on the size and density of the particle. The predicted size-dependent vertical movement of microplastic particles results in a maximum concentration at intermediate depths. Consequently, relatively low abundances of small particles are predicted at the ocean surface, while at the same time these small particles may never reach the ocean floor. Our results hint at the fate of "lost" plastic in the ocean, and provide a start for predicting risks of exposure to microplastics for potentially vulnerable species living at these depths.
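The core mechanism, biofilm growth pushing the composite particle density across the local seawater density and flipping the sign of the settling velocity, can be sketched with Stokes settling. The biofilm density, particle properties and film thicknesses below are assumptions for illustration, not the parameter set of the published model (which also resolves depth profiles of light, density, temperature, salinity and viscosity).

```python
import numpy as np

g, mu = 9.81, 1.4e-3                       # gravity (m/s^2), seawater viscosity (Pa s), assumed

def settling_velocity(d, rho_particle, rho_water):
    """Stokes settling velocity; positive = sinking, negative = rising."""
    return g * d**2 * (rho_particle - rho_water) / (18.0 * mu)

def composite_density(r_core, rho_core, t_film, rho_film=1388.0):
    """Density of a plastic core of radius r_core coated by a biofilm of
    thickness t_film (rho_film for an algal biofilm is an assumed value)."""
    v_core = (4.0 / 3.0) * np.pi * r_core**3
    v_total = (4.0 / 3.0) * np.pi * (r_core + t_film)**3
    return (rho_core * v_core + rho_film * (v_total - v_core)) / v_total

# a 100-um polyethylene sphere starts buoyant and sinks once the film is thick enough
for t_film in (0.0, 1e-6, 5e-6):
    rho = composite_density(50e-6, 920.0, t_film)
    print(t_film, settling_velocity(2 * (50e-6 + t_film), rho, 1027.0))
```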
Embryo as an active granular fluid: stress-coordinated cellular constriction chains
NASA Astrophysics Data System (ADS)
Gao, Guo-Jie Jason; Holcomb, Michael C.; Thomas, Jeffrey H.; Blawzdziewicz, Jerzy
2016-10-01
Mechanical stress plays an intricate role in gene expression in individual cells and sculpting of developing tissues. However, systematic methods of studying how mechanical stress and feedback help to harmonize cellular activities within a tissue have yet to be developed. Motivated by our observation of the cellular constriction chains (CCCs) during the initial phase of ventral furrow formation in the Drosophila melanogaster embryo, we propose an active granular fluid (AGF) model that provides valuable insights into cellular coordination in the apical constriction process. In our model, cells are treated as circular particles connected by a predefined force network, and they undergo a random constriction process in which the particle constriction probability P is a function of the stress exerted on the particle by its neighbors. We find that when P favors tensile stress, constricted particles tend to form chain-like structures. In contrast, constricted particles tend to form compact clusters when P favors compression. A remarkable similarity of constricted-particle chains and CCCs observed in vivo provides indirect evidence that tensile-stress feedback coordinates the apical constriction activity. Our particle-based AGF model will be useful in analyzing mechanical feedback effects in a wide variety of morphogenesis and organogenesis phenomena.
Analytical investigation of the faster-is-slower effect with a simplified phenomenological model
NASA Astrophysics Data System (ADS)
Suzuno, K.; Tomoeda, A.; Ueyama, D.
2013-11-01
We analytically investigate the mechanism of the phenomenon called the “faster-is-slower” effect in pedestrian flow studies, using a simplified phenomenological model. It is well known that, in simulations of the discharge of self-driven particles through a bottleneck using the social force model, the flow rate is maximized at a certain strength of the driving force. In this study, we propose a phenomenological, analytically tractable model grounded in mechanics to reveal the mechanism of the phenomenon. We show that our reduced system, with only a few degrees of freedom, retains properties similar to those of the original many-particle system, and that the effect arises from the competition between the driving force and the nonlinear friction in the model. Moreover, we qualitatively predict the parameter dependences of the effect from our model, and these predictions are confirmed numerically using the social force model.
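For reference, the social force model referred to above is commonly taken in the Helbing-Farkas-Vicsek form shown below (wall-interaction terms omitted); the abstract does not specify which variant was used, so the equation is given only to make the competing driving-force and sliding-friction terms concrete.

\[
m_i \frac{d\mathbf{v}_i}{dt}
= \underbrace{m_i\,\frac{v_i^0\,\mathbf{e}_i^0 - \mathbf{v}_i}{\tau_i}}_{\text{driving force}}
+ \sum_{j \neq i}\Big\{\big[A_i\, e^{(r_{ij}-d_{ij})/B_i} + k\,g(r_{ij}-d_{ij})\big]\,\mathbf{n}_{ij}
+ \underbrace{\kappa\,g(r_{ij}-d_{ij})\,\Delta v_{ji}^{t}\,\mathbf{t}_{ij}}_{\text{sliding friction}}\Big\},
\qquad g(x)=\max(x,0),
\]

where \(r_{ij}\) is the sum of the particle radii and \(d_{ij}\) the distance between centres. Increasing the desired speed \(v_i^0\) raises the compression \(g(r_{ij}-d_{ij})\) near the exit and hence the friction term, which is precisely the competition the reduced model isolates.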
Modeling particle nucleation and growth over northern California during the 2010 CARES campaign
NASA Astrophysics Data System (ADS)
Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.
2015-11-01
Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the regional Weather Research and Forecasting model coupled with chemistry (WRF-Chem), using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based only on H2SO4. At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times, as well as by lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~ 36 %.
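The three kinds of empirical nucleation-rate expressions being compared are typically of the activation, kinetic, and combined H2SO4-organic forms sketched below. The functional forms are standard in the nucleation literature, but the prefactors and sample concentrations here are placeholders rather than the values used in the study.

```python
# Illustrative comparison of the kinds of empirical nucleation-rate expressions
# compared above: activation and kinetic forms driven by H2SO4 alone, and a
# combined H2SO4-organic form. The functional forms are standard, but every
# coefficient and concentration below is a placeholder, not a value from the study.

A_ACT = 2.0e-6      # s^-1, activation prefactor (assumed)
K_KIN = 1.0e-14     # cm^3 s^-1, kinetic prefactor (assumed)
K_ORG = 5.0e-13     # cm^3 s^-1, H2SO4-organic prefactor (assumed)

def j_activation(h2so4):
    """J = A [H2SO4], in particles cm^-3 s^-1."""
    return A_ACT * h2so4

def j_kinetic(h2so4):
    """J = K [H2SO4]^2."""
    return K_KIN * h2so4 ** 2

def j_h2so4_organic(h2so4, lvoc):
    """J = k [H2SO4][LVOC], tying new-particle formation to low-volatility organics."""
    return K_ORG * h2so4 * lvoc

h2so4, lvoc = 5.0e6, 2.0e7   # molecules cm^-3, typical daytime magnitudes (assumed)
for name, j in [("activation", j_activation(h2so4)),
                ("kinetic", j_kinetic(h2so4)),
                ("H2SO4-organic", j_h2so4_organic(h2so4, lvoc))]:
    print(f"{name:14s} J = {j:.2e} cm^-3 s^-1")
```

The organic-dependent form is what couples the predicted nucleation rate to the diurnal cycle of anthropogenic organic emissions discussed in the abstract.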
NASA Astrophysics Data System (ADS)
Weber, R. J.; Guo, H.; Russell, A. G.; Nenes, A.
2015-12-01
pH is a critical aerosol property that impacts many atmospheric processes, including biogenic secondary organic aerosol formation, gas-particle phase partitioning, and mineral dust or redox metal mobilization. Particle pH has also been linked to adverse health effects. Using a comprehensive data set from the Southern Oxidant and Aerosol Study (SOAS) as the basis for thermodynamic modeling, we have shown that particles are currently highly acidic in the southeastern US, with pH between 0 and 2. Sulfate and ammonium are the main acid-base components that determine particle pH in this region; however, they have different sources and their concentrations are changing. Over 15 years of network data show that sulfur dioxide emission reductions have resulted in a roughly 70 percent decrease in sulfate, whereas ammonia emissions, mainly linked to agricultural activities, have been largely steady, as have gas-phase ammonia concentrations. This has led to the view that particles are becoming more neutralized. However, a thermodynamic-modeling-based sensitivity analysis of particle pH to changing sulfate concentrations indicates that particles have remained highly acidic over the past decade, despite the large reductions in sulfate. Furthermore, anticipated continued reductions of sulfate and relatively constant ammonia emissions into the future will not significantly change particle pH until sulfate drops to clean continental background levels. This result reshapes our expectation of future particle pH and implies that atmospheric processes and adverse health effects linked to particle acidity will remain unchanged for some time into the future.
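For readers unfamiliar with how particle pH is diagnosed, the quantity reported by such thermodynamic analyses is pH = -log10(γH+ [H+]aq), with the aqueous molarity obtained from the predicted particle hydronium loading and liquid water content. The sketch below shows only this unit conversion, with the activity coefficient set to one and illustrative magnitudes; the study's thermodynamic model treats activities and partitioning explicitly.

```python
import math

# Minimal sketch of diagnosing particle pH from thermodynamic-model output
# (aerosol H+ loading and liquid water content, both per m^3 of air). The unit
# conversion is standard; gamma_h = 1 is a simplification the full model avoids.

def particle_ph(h_air_ug_m3, lwc_ug_m3, gamma_h=1.0):
    """pH = -log10(gamma_H * [H+]aq), with [H+]aq in mol per litre of particle water."""
    mol_h_per_m3_air = h_air_ug_m3 * 1e-6 / 1.0          # ug -> g -> mol (M(H+) ~ 1 g/mol)
    litres_water_per_m3_air = lwc_ug_m3 * 1e-6 / 1000.0  # ug -> g water -> litres (1 g/cm^3)
    molarity = mol_h_per_m3_air / litres_water_per_m3_air
    return -math.log10(gamma_h * molarity)

# Illustrative magnitudes only (not SOAS values): 6e-4 ug m^-3 H+ in 5 ug m^-3 water.
print(f"pH ~ {particle_ph(6e-4, 5.0):.2f}")
```

The example lands near pH 1, i.e. within the 0-2 range reported above; the key point is that pH depends on the ratio of hydronium to liquid water, not on sulfate mass alone.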
Localisation in a Growth Model with Interaction
NASA Astrophysics Data System (ADS)
Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.
2018-05-01
This paper concerns the long-term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process eventually localises either at a single site or at a pair of neighbouring sites.
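A toy simulation makes the localisation statement easy to visualize. The reinforcement rule below (allocation probability proportional to λ raised to the occupancy of a site's closed neighbourhood) is an illustrative cooperative-sequential-adsorption choice, not necessarily the exact rate function analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy random sequential allocation on a cycle with neighbourhood reinforcement:
# the probability of allocating the next particle to a site is proportional to
# lam raised to the occupancy of that site's closed neighbourhood. Illustrative
# rates; not necessarily those analysed in the paper.

n_sites, n_particles, lam = 12, 4000, 1.5
counts = np.zeros(n_sites, dtype=int)

for _ in range(n_particles):
    neighbourhood = counts + np.roll(counts, 1) + np.roll(counts, -1)
    weights = lam ** (neighbourhood - neighbourhood.max()).astype(float)  # shift exponent to avoid overflow
    site = rng.choice(n_sites, p=weights / weights.sum())
    counts[site] += 1

print("final counts per site:", counts.tolist())
print("sites holding >1% of all particles:", np.flatnonzero(counts > 0.01 * n_particles).tolist())
```

Typical runs end with essentially all of the mass on one site or on two adjacent sites, which is the localisation behaviour proved in the paper.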
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
Long-wavelength instabilities in a system of interacting active particles
NASA Astrophysics Data System (ADS)
Fazli, Zahra; Najafi, Ali
2018-02-01
Based on a microscopic model, we develop a continuum description for a suspension of microscopic self-propelled particles. With this continuum description we study the role of long-range interactions in destabilizing macroscopic ordered phases that are developed by short-range interactions. Long-wavelength fluctuations can destabilize both isotropic and symmetry-broken polar phases in a suspension of dipolar particles. The instabilities in a suspension of pullers (pushers) arise from splay (bend) fluctuations. Such instabilities are not seen in a suspension of quadrupolar particles.
Modeling of multiphase flow with solidification and chemical reaction in materials processing
NASA Astrophysics Data System (ADS)
Wei, Jiuan
Understanding multiphase flow and the associated heat transfer and chemical reactions is key to increasing productivity and efficiency in industrial processes. The objective of this thesis is to use computational approaches to investigate multiphase flow and its applications in materials processing, especially in two areas: directional solidification, and pyrolysis and synthesis. In this thesis, numerical simulations are performed for crystal growth of several III-V and II-VI compounds. The effects of the Prandtl and Grashof numbers on the axial temperature profile, the solidification interface shape, and the melt flow are investigated. For a material with high Prandtl and Grashof numbers, the temperature field and growth interface are significantly influenced by melt flow, resulting in a complicated temperature distribution and a curved interface shape, so growth in a traditional Bridgman system encounters tremendous difficulty. A new design is proposed to reduce the melt convection: a configuration with a cold top and a hot bottom in the melt dramatically reduces melt convection. The new design has been employed to simulate melt flow and heat transfer in crystal growth with large Prandtl and Grashof numbers, and the design parameters have been adjusted accordingly. Over 90% of commercial solar cells are made from silicon, and directional solidification is one of the most important methods for producing multi-crystalline silicon ingots owing to its tolerance of feedstock impurities and lower manufacturing cost. A numerical model is developed to simulate the silicon ingot directional solidification process. The temperature distribution and solidification interface location are presented. Heat transfer and solidification analyses are performed to determine the energy efficiency of the silicon production furnace, and possible improvements are identified. The silicon growth process is controlled by adjusting the heating power and moving the side insulation layer upward; it is possible to produce high-quality crystal with a good combination of heating and cooling. SiC-based ceramic materials fabricated by polymer pyrolysis and synthesis have become promising candidates for nuclear applications. To obtain fuel with a highly uniform microstructure and concentration, free of cracks at high operating temperatures, it is important to understand transport phenomena in material processing at different scales. In our prior work, a system-level model based on reactive porous media theory was developed to account for the pyrolysis process in uranium-ceramic nuclear fuel fabrication. In this thesis, a particle-level mesoscopic model based on Smoothed Particle Hydrodynamics (SPH) is developed for modeling the synthesis of filler U3O8 particles and the SiC matrix. The system-level model provides the thermal boundary conditions needed in the particle-level simulation. The evolution of particle concentration and structure, as well as the composition of the composite produced, is investigated. Since process temperature and heat flux play important roles in material quality and uniformity, the effects of heating rate in different directions and of filler particle size and distribution on the uniformity and microstructure of the final product are investigated. Uncertainty is also discussed. For multiphase flow with directional solidification, a system-level model based on the finite volume method (FVM) is established.
In this model, melt convection, temperature distribution, phase change, and the solidification interface can be investigated. For multiphase flow with chemical reaction, a particle-level model based on the SPH method is developed to describe the pyrolysis and synthesis of uranium-ceramic nuclear fuel. Owing to its mesh-free nature, SPH can easily handle problems with multiple phases and components, large deformations, chemical reactions, and even solidification. A multi-scale meso-macroscopic approach, which combines a mesoscopic model based on SPH with macroscopic models based on FVM, FEM, and FDM, can be applied to even more complicated systems. In the mesoscopic SPH model, fundamental mesoscopic phenomena, such as microstructure evolution, interface morphology resolved at high resolution, and particle entrapment during solidification, can be studied. In the macroscopic model, heat transfer, fluid flow, and species transport can be modeled, and the simulation results provide the velocity, temperature, and species boundary conditions needed by the mesoscopic model. This part is left for future work. (Abstract shortened by UMI.)
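The core building block of such a particle-level SPH model is kernel-weighted summation over neighbouring particles. The sketch below shows only this generic interpolation step, a 2-D density estimate with a cubic-spline kernel, and none of the thesis's pyrolysis chemistry, momentum equations, or boundary treatment.

```python
import numpy as np

# Generic SPH building block: density at each particle is estimated by
# kernel-weighted summation over neighbours, rho_i = sum_j m_j W(|r_i - r_j|, h).
# This is only the interpolation step, not the thesis's reaction or flow model.

def cubic_spline_kernel(r, h):
    """Standard 2-D cubic-spline kernel W(r, h) with compact support 2h."""
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(dists, h)).sum(axis=1)

# Smoke test: a uniform 10x10 block at 1 cm spacing should recover its density.
nx = 10
xs, ys = np.meshgrid(np.arange(nx), np.arange(nx))
positions = np.column_stack([xs.ravel(), ys.ravel()]).astype(float) * 0.01
masses = np.full(len(positions), 0.01 ** 2 * 1000.0)   # mass per particle for rho = 1000
rho = sph_density(positions, masses, h=0.012)
print(f"density at an interior particle: {rho.reshape(nx, nx)[5, 5]:.1f} (target 1000, 2-D units kg/m^2)")
```

Chemistry, heat conduction, and momentum exchange enter the same summation structure with different kernels and pairwise terms, which is why SPH handles the multi-phase, large-deformation problems described above so naturally.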
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.
2012-04-01
Data assimilation methods have received increased attention as a means of assessing uncertainty and enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited, because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike process-based modeling frameworks, this software framework benefits from its object-oriented design to flexibly represent hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters, and observations. The particle filters are a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of high-performance computing (HPC) systems. We applied this software framework to short-term streamflow forecasting of several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data, such as X-band or C-band radar, is estimated and mitigated in the sequential data assimilation.
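The sequential data assimilation supported by such a framework follows the usual propagate-weight-resample cycle of a particle filter. The sketch below implements that cycle for a toy one-state linear-reservoir model; it is a generic illustration, not MPI-OHyMoS code or its API, and all rates and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal sequential importance resampling (SIR) particle filter applied to a
# toy one-state "storage-discharge" model. Generic sketch; not MPI-OHyMoS.

n_particles, n_steps = 500, 50
k_true, model_noise, obs_noise = 0.1, 0.5, 1.0

def step(storage, rain, k):
    """Toy linear-reservoir water balance: S_{t+1} = S_t + rain - k*S_t."""
    return storage + rain - k * storage

# Synthetic truth and noisy discharge observations.
rain = rng.gamma(2.0, 1.0, n_steps)
truth = np.zeros(n_steps)
for t in range(1, n_steps):
    truth[t] = step(truth[t - 1], rain[t], k_true)
obs = k_true * truth + rng.normal(0.0, obs_noise, n_steps)

# SIR cycle: propagate with model noise, weight by observation likelihood, resample.
particles = np.zeros(n_particles)
estimates = []
for t in range(n_steps):
    particles = step(particles, rain[t], k_true) + rng.normal(0.0, model_noise, n_particles)
    weights = np.exp(-0.5 * ((obs[t] - k_true * particles) / obs_noise) ** 2)
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))
    particles = particles[rng.choice(n_particles, n_particles, p=weights)]  # resample

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"RMSE of filtered storage vs truth: {rmse:.2f}")
```

In the framework described above, the same cycle runs over a full distributed hydrologic model, with the ensemble members executed in parallel.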
Adaptive particle filter for robust visual tracking
NASA Astrophysics Data System (ADS)
Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai
2009-10-01
Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian conditions. In a standard particle filter, the state transition model used to predict the next location of the tracked object assumes the object motion is invariant, which cannot adequately approximate the varying dynamics of the object's motion. In addition, the state estimate computed as the mean of all weighted particles can be coarse or inaccurate due to various noise disturbances. Both of these factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating-based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF increases tracking accuracy and efficiency in complex environments.
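The velocity-updating idea can be sketched in a few lines: the velocity used to predict the next particle positions is refreshed recursively from successive state estimates rather than held fixed. The smoothing constant, noise levels, and the simple Gaussian "appearance" likelihood below are illustrative, and the adaptive state estimation (ASEA) step is not reproduced; a plain weighted mean is used instead.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of a velocity-updating transition model in a 1-D particle-filter tracker.
# Illustrative constants; the appearance likelihood is a stand-in for a real one.

n_particles, alpha, pos_noise = 300, 0.7, 2.0
particles = rng.normal(100.0, 5.0, n_particles)   # object position hypotheses (pixels)
velocity, prev_estimate = 0.0, 100.0              # running velocity estimate

def appearance_likelihood(true_pos, particles, sigma=4.0):
    """Toy likelihood: Gaussian in distance to the (here known) object position."""
    return np.exp(-0.5 * ((particles - true_pos) / sigma) ** 2)

true_pos, true_vel = 100.0, 3.0
for frame in range(30):
    true_pos += true_vel
    # transition: previous positions + current velocity estimate + process noise
    particles = particles + velocity + rng.normal(0.0, pos_noise, n_particles)
    weights = appearance_likelihood(true_pos, particles)
    weights /= weights.sum()
    estimate = np.sum(weights * particles)                    # weighted-mean state estimate
    velocity = alpha * velocity + (1.0 - alpha) * (estimate - prev_estimate)  # recursive update
    prev_estimate = estimate
    particles = particles[rng.choice(n_particles, n_particles, p=weights)]

print(f"final estimate {estimate:.1f} vs true {true_pos:.1f}, learned velocity {velocity:.2f}")
```

The learned velocity converges toward the object's true motion, which is what lets the prediction step keep up with changing dynamics instead of relying on diffusion noise alone.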
Rapid Frequency Chirps of TAE mode due to Finite Orbit Energetic Particles
NASA Astrophysics Data System (ADS)
Berk, Herb; Wang, Ge
2013-10-01
The tip model for the TAE mode in the large-aspect-ratio limit, conceived by Rosenbluth et al. in the frequency domain, together with a frequency-domain interaction term based on a map model, has been extended into the time domain. We present the formal basis for the model, starting with the Lagrangian for the particle-wave interaction. We discuss the formal nonlinear time-domain problem and the procedure needed to obtain solutions in the adiabatic limit.
NASA Astrophysics Data System (ADS)
Akridis, Petros; Rigopoulos, Stelios
2017-01-01
A discretised population balance equation (PBE) is coupled with an in-house computational fluid dynamics (CFD) code in order to model soot formation in laminar diffusion flames. The unsteady Navier-Stokes, species and enthalpy transport equations and the spatially-distributed discretised PBE for the soot particles are solved in a coupled manner, together with comprehensive gas-phase chemistry and an optically thin radiation model, thus yielding the complete particle size distribution of the soot particles. Nucleation, surface growth and oxidation are incorporated into the PBE using an acetylene-based soot model. The potential of the proposed methodology is investigated by comparing with experimental results from the Santoro jet burner [Santoro, Semerjian and Dobbins, Soot particle measurements in diffusion flames, Combustion and Flame, Vol. 51 (1983), pp. 203-218; Santoro, Yeh, Horvath and Semerjian, The transport and growth of soot particles in laminar diffusion flames, Combustion Science and Technology, Vol. 53 (1987), pp. 89-115] for three laminar axisymmetric non-premixed ethylene flames: a non-smoking, an incipient smoking and a smoking flame. Overall, good agreement is observed between the numerical and the experimental results.
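Structurally, a discretised (sectional) PBE of this kind advances the number density in fixed size sections under nucleation, growth, and other source terms. The sketch below keeps only a nucleation source and a first-order upwind growth transfer, with made-up rates and a geometric volume grid; the paper's acetylene-based soot chemistry, oxidation, coagulation, and the coupling to the CFD transport are all omitted.

```python
import numpy as np

# Toy sectional population-balance update: number densities in fixed volume
# sections evolve under a nucleation source into the smallest section and an
# upwind growth transfer to larger sections. All rates are assumed placeholders.

n_sections = 20
v_edges = 1e-27 * 2.0 ** np.arange(n_sections + 1)   # section volume boundaries, m^3
v_mid = np.sqrt(v_edges[:-1] * v_edges[1:])          # representative volume per section
n = np.zeros(n_sections)                             # number density per section, 1/m^3

j_nuc = 1.0e12       # nucleation source into the smallest section, 1/(m^3 s)  (assumed)
g_rate = 50.0        # growth: fraction of a section promoted per second        (assumed)
dt, n_steps = 1e-4, 5000

for _ in range(n_steps):
    promote = g_rate * n * dt          # upwind transfer of number to the next-larger section
    n[:-1] -= promote[:-1]
    n[1:] += promote[:-1]
    n[0] += j_nuc * dt                 # nucleation of new particles

print(f"total number: {n.sum():.3e} m^-3, soot volume fraction: {(n * v_mid).sum():.3e}")
```

In the coupled solver described above, an update of this form is applied in every CFD cell alongside convection and diffusion of each section, which is what yields the full spatially resolved particle size distribution.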
Money, Eric S; Barton, Lauren E; Dawson, Joseph; Reckhow, Kenneth H; Wiesner, Mark R
2014-03-01
The adaptive nature of the Forecasting the Impacts of Nanomaterials in the Environment (FINE) Bayesian network is explored. We create an updated FINE model (FINEAgNP-2) for predicting aquatic exposure concentrations of silver nanoparticles (AgNP) by combining the expert-based parameters from the baseline model established in previous work with literature data related to particle behavior, exposure, and nano-ecotoxicology via parameter learning. We validate the AgNP forecast from the updated model using mesocosm-scale field data and determine the sensitivity of several key variables to changes in environmental conditions, particle characteristics, and particle fate. Results show that the prediction accuracy of the FINEAgNP-2 model increased approximately 70% over the baseline model, with an error rate of only 20%, suggesting that FINE is a reliable tool to predict aquatic concentrations of nano-silver. Sensitivity analysis suggests that fractal dimension, particle diameter, conductivity, time, and particle fate have the most influence on aquatic exposure given the current knowledge; however, numerous knowledge gaps can be identified to suggest further research efforts that will reduce the uncertainty in subsequent exposure and risk forecasts. Copyright © 2013 Elsevier B.V. All rights reserved.
Search for charged massive long-lived particles at D0
NASA Astrophysics Data System (ADS)
Xie, Yunhe
2009-05-01
We report on a new search for charged massive long-lived particles (CMLLPs) by the D0 experiment at Fermilab's Tevatron. CMLLPs are predicted in many theories beyond the Standard Model. Time-of-flight information was used in the search for pair-produced CMLLPs, based on the signature of two particles, reconstructed as muons, with speed and invariant mass inconsistent with beam-produced muons. The analysis was performed with data taken by the D0 detector in Run II, corresponding to an integrated luminosity of 3 fb⁻¹. Limits on the pair production of CMLLPs are presented in a quasi-model-independent way.
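The kinematics behind such a time-of-flight search are simple: the measured speed β over a known path length, combined with the reconstructed momentum, gives a mass m = p√(1-β²)/β, so a slow, high-momentum track reconstructs to a large mass. The numbers below are illustrative, not D0 measurements.

```python
import math

C_LIGHT = 299_792_458.0   # m/s

# Time-of-flight mass reconstruction: speed from flight time over a known path,
# combined with reconstructed momentum. Illustrative numbers only.

def beta_from_tof(path_m, time_ns):
    return path_m / (time_ns * 1e-9 * C_LIGHT)

def mass_from_p_beta(p_gev, beta):
    """m = p * sqrt(1 - beta^2) / beta, in GeV/c^2 for momentum in GeV/c."""
    return p_gev * math.sqrt(1.0 - beta ** 2) / beta

beta = beta_from_tof(path_m=10.0, time_ns=42.0)   # noticeably slower than the ~33 ns light-speed flight
print(f"beta = {beta:.3f}, mass for p = 200 GeV/c: {mass_from_p_beta(200.0, beta):.0f} GeV/c^2")
```

A relativistic muon would have β ≈ 1 and a reconstructed mass near zero, which is why the combination of low speed and high invariant mass provides the discriminating signature.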
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
Conditions for successful data assimilation
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Chorin, A. J.
2013-12-01
Many applications in science and engineering require that the predictions of uncertain models be updated by information from a stream of noisy data. The model and the data jointly define a conditional probability density function (pdf), which contains all the information one has about the process of interest and various numerical methods can be used to study and approximate this pdf, e.g. the Kalman filter, variational methods or particle filters. Given a model and data, each of these algorithms will produce a result. We are interested in the conditions under which this result is reasonable, i.e. consistent with the real-life situation one is modeling. In particular, we show, using idealized models, that numerical data assimilation is feasible in principle only if a suitably defined effective dimension of the problem is not excessive. This effective dimension depends on the noise in the model and the data, and in physically reasonable problems it can be moderate even when the number of variables is huge. In particular, we find that the effective dimension being moderate induces a balance condition between the noises in the model and the data; this balance condition is often satisfied in realistic applications or else the noise levels are excessive and drown the underlying signal. We also study the effects of the effective dimension on particle filters in two instances, one in which the importance function is based on the model alone, and one in which it is based on both the model and the data. We have three main conclusions: (1) the stability (i.e., non-collapse of weights) in particle filtering depends on the effective dimension of the problem. Particle filters can work well if the effective dimension is moderate even if the true dimension is large (which we expect to happen often in practice). (2) A suitable choice of importance function is essential, or else particle filtering fails even when data assimilation is feasible in principle with a sequential algorithm. (3) There is a parameter range in which the model noise and the observation noise are roughly comparable, and in which even the optimal particle filter collapses, even under ideal circumstances. We further study the role of the effective dimension in variational data assimilation and particle smoothing, for both the weak and strong constraint problem. It was found that these methods too require a moderate effective dimension or else no accurate predictions can be expected. Moreover, variational data assimilation or particle smoothing may be applicable in the parameter range where particle filtering fails, because the use of more than one consecutive data set helps reduce the variance which is responsible for the collapse of the filters.
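Conclusion (1), the collapse of particle weights as the effective dimension grows, is easy to reproduce numerically: weight prior samples by the likelihood of a d-dimensional observation and watch the effective sample size 1/Σw² shrink as d increases. This is a generic illustration, not the authors' effective-dimension criterion or balance condition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic illustration of particle-filter weight collapse with dimension:
# particles are drawn from a d-dimensional standard-Gaussian prior and weighted
# by the likelihood of a noisy observation of every component. The effective
# sample size ESS = 1 / sum(w^2) drops sharply as d grows.

n_particles, obs_noise = 1000, 1.0
for d in [1, 5, 20, 50, 100]:
    truth = rng.normal(size=d)
    obs = truth + rng.normal(scale=obs_noise, size=d)
    particles = rng.normal(size=(n_particles, d))             # prior samples
    log_w = -0.5 * np.sum((obs - particles) ** 2, axis=1) / obs_noise ** 2
    w = np.exp(log_w - log_w.max())                           # stabilise before normalising
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)
    print(f"d = {d:3d}: effective sample size {ess:7.1f} of {n_particles}")
```

With a few tens of dimensions essentially a single particle carries all the weight, which is the collapse phenomenon whose dependence on a suitably defined effective dimension the paper analyses.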
Composite Pseudoclassical Models of Quarks
NASA Astrophysics Data System (ADS)
Musin, Yu. R.
2018-05-01
Composite models of quarks are proposed, analogous to composite models of leptons. A model-based explanation of the appearance of generations of fundamental particles in the Standard Model is given. New empirical formulas are proposed for the quark masses, modifying Barut's well-known formula.
Numerical evaluation of a single ellipsoid motion in Newtonian and power-law fluids
NASA Astrophysics Data System (ADS)
Férec, Julien; Ausias, Gilles; Natale, Giovanniantonio
2018-05-01
A computational model is developed for simulating the motion of a single ellipsoid suspended in a Newtonian or a power-law fluid. Based on a finite element method (FEM), the approach consists in seeking solutions for the linear and angular particle velocities using a minimization algorithm, such that the net hydrodynamic force and torque acting on the ellipsoid are zero. For a Newtonian fluid subjected to a simple shear flow, Jeffery's predictions are recovered for any aspect ratio. The motion of a single ellipsoidal fiber is found to be only slightly disturbed by the shear-thinning character of the suspending fluid when compared with Jeffery's solutions. Surprisingly, the perturbation can be completely neglected for a particle with a large aspect ratio. Furthermore, the particle centroid is found to translate with the same linear velocity as the undisturbed simple shear flow evaluated at its location. This is confirmed by recent works based on experimental investigations and modeling approaches (1-2).
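The Jeffery behaviour referred to above is a useful correctness check for any such solver: in simple shear with rate γ̇, a spheroid's in-plane orientation angle obeys dφ/dt = γ̇ (ar² cos²φ + sin²φ)/(ar² + 1), giving a tumbling period T = 2π (ar + 1/ar)/γ̇. The sketch below verifies the period by direct integration; it is a toy check, not the paper's FEM approach.

```python
import math

# Numerical check of the Jeffery-orbit period for a spheroid of aspect ratio
# "ar" in simple shear with rate gdot: integrating the in-plane orientation
# equation over one rotation should reproduce T = 2*pi*(ar + 1/ar)/gdot.

ar, gdot, dt = 5.0, 1.0, 1e-4

def phi_dot(phi):
    return gdot * (ar ** 2 * math.cos(phi) ** 2 + math.sin(phi) ** 2) / (ar ** 2 + 1.0)

phi, t = 0.0, 0.0
while phi < 2.0 * math.pi:          # integrate one full rotation with explicit Euler steps
    phi += phi_dot(phi) * dt
    t += dt

print(f"integrated period: {t:.2f} s, Jeffery formula: {2 * math.pi * (ar + 1 / ar):.2f} s")
```

The long dwell near alignment with the flow for large aspect ratios is visible in the slow phase of the integration, and it is this orbit that the FEM computation above recovers for the Newtonian case.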