NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one-dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code, a series of computations was performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
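As an illustration of the basic interphase coupling terms such a quasi-one-dimensional two-phase solver must model, the sketch below integrates the velocity and temperature relaxation of a single spherical particle toward a uniform carrier gas. It is not the NASA code; the gas state, particle properties, and the Schiller-Naumann/Ranz-Marshall correlations are illustrative assumptions.

```python
# Minimal sketch (not the NASA tool): drag and convective-heating relaxation of
# one spherical particle in a frozen, uniform gas stream. All values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

# assumed gas state
rho_g, mu_g, cp_g, k_g = 0.5, 3.0e-5, 1200.0, 0.06   # kg/m^3, Pa*s, J/(kg*K), W/(m*K)
u_g, T_g = 1500.0, 1800.0                            # m/s, K

# assumed particle properties
d_p, rho_p, c_p = 10e-6, 2500.0, 900.0               # m, kg/m^3, J/(kg*K)
m_p = rho_p * np.pi * d_p**3 / 6.0                   # particle mass, kg
A_p = np.pi * d_p**2                                 # surface area, m^2

def rhs(t, y):
    v_p, T_p = y
    rel = u_g - v_p
    Re = rho_g * abs(rel) * d_p / mu_g
    # Schiller-Naumann correction to Stokes drag (valid for moderate Re)
    Cd_corr = 1.0 + 0.15 * Re**0.687
    tau_v = rho_p * d_p**2 / (18.0 * mu_g)           # Stokes velocity response time
    dvdt = Cd_corr * rel / tau_v
    # Ranz-Marshall Nusselt number for convective heating
    Pr = mu_g * cp_g / k_g
    Nu = 2.0 + 0.6 * np.sqrt(Re) * Pr**(1.0 / 3.0)
    h = Nu * k_g / d_p
    dTdt = h * A_p * (T_g - T_p) / (m_p * c_p)
    return [dvdt, dTdt]

sol = solve_ivp(rhs, (0.0, 5e-3), [0.0, 300.0], max_step=1e-5)
print(f"after 5 ms: v_p = {sol.y[0, -1]:.0f} m/s, T_p = {sol.y[1, -1]:.0f} K")
```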
NASA Technical Reports Server (NTRS)
Jordan, F. L., Jr.
1980-01-01
As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially-applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically-scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results that indicate improved control of chemical drift by use of winglets are presented to demonstrate test methods.
Computer modeling of test particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Decker, Robert B.
1988-01-01
The present evaluation of the basic techniques and illustrative results of charged particle-modeling numerical codes suitable for particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
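The kernel of such test-particle codes is the numerical integration of ion orbits in prescribed electromagnetic fields. The sketch below shows a standard Boris (leapfrog) pusher for a proton in uniform E and B fields; the shock geometry and field values are illustrative assumptions, not those of the cited work.

```python
# Minimal test-particle sketch: Boris integration of a proton in prescribed,
# uniform E and B fields (a stand-in for exact phase-space orbit integration).
import numpy as np

q, m = 1.602e-19, 1.673e-27          # proton charge (C) and mass (kg)
B = np.array([0.0, 0.0, 10e-9])      # assumed 10 nT magnetic field
E = np.array([1e-4, 0.0, 0.0])       # assumed convection electric field, V/m
dt = 0.05 * m / (q * np.linalg.norm(B))   # ~1/20 of a gyroperiod

def boris_step(x, v):
    """Advance position and velocity by one Boris (leapfrog) step."""
    v_minus = v + 0.5 * q * E / m * dt
    t = 0.5 * q * B / m * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q * E / m * dt
    return x + v_new * dt, v_new

x = np.zeros(3)
v = np.array([1e5, 0.0, 0.0])        # 100 km/s initial speed
for _ in range(2000):
    x, v = boris_step(x, v)
print("final position (km):", x / 1e3, " speed (km/s):", np.linalg.norm(v) / 1e3)
```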
Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A
2008-08-01
A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (electrical low pressure impactor) and mini-dilution system accounted for 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (decreased by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
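A hypothetical sketch of this kind of linear mixed model is given below: fixed effects for aftertreatment, route, driver, bus type and temperature, with a random intercept for sampling day to absorb day-to-day setup variability. The data are synthetic and the column names are assumptions, not the study's actual variables.

```python
# Hypothetical mixed-model sketch with a random intercept per sampling day,
# fitted on synthetic data (all names and effect sizes are illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "aftertreatment": rng.choice(["DOC", "DPF"], n),
    "route": rng.choice(["A", "B", "C"], n),
    "driver": rng.choice(["d1", "d2"], n),
    "bus_type": rng.choice(["diesel", "hybrid"], n),
    "temperature": rng.uniform(0, 30, n),
    "test_day": rng.choice([f"day{i}" for i in range(10)], n),
})
day_effect = {d: rng.normal(0, 0.5) for d in df["test_day"].unique()}
df["log_pn"] = (14.0
                - 4.0 * (df["aftertreatment"] == "DPF")
                + 0.02 * df["temperature"]
                + df["test_day"].map(day_effect)
                + rng.normal(0, 0.3, n))

model = smf.mixedlm(
    "log_pn ~ C(aftertreatment) + C(route) + C(driver) + C(bus_type) + temperature",
    data=df,
    groups=df["test_day"],          # random effect: daily sampling setup
)
print(model.fit().summary())
```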
Numerical Analysis of Mixed-Phase Icing Cloud Simulations in the NASA Propulsion Systems Laboratory
NASA Technical Reports Server (NTRS)
Bartkus, Tadas; Tsao, Jen-Ching; Struk, Peter; Van Zante, Judith
2017-01-01
This presentation describes the development of a numerical model that couples the thermal interaction between ice particles, water droplets, and the flowing gas of an icing wind tunnel for simulation of NASA Glenn Research Center's Propulsion Systems Laboratory (PSL). The ultimate goal of the model is to better understand the complex interactions between the test parameters and have greater confidence in the conditions at the test section of the PSL tunnel. The model attempts to explain the observed changes in test conditions by coupling the conservation of mass and energy equations for both the cloud particles and flowing gas mass. Model predictions were compared to measurements taken during May 2015 testing at PSL, where test conditions varied gas temperature, pressure, velocity and humidity levels, as well as the cloud total water content, particle initial temperature, and particle size distribution.
Numerical Analysis of Mixed-Phase Icing Cloud Simulations in the NASA Propulsion Systems Laboratory
NASA Technical Reports Server (NTRS)
Bartkus, Tadas P.; Tsao, Jen-Ching; Struk, Peter M.; Van Zante, Judith F.
2017-01-01
This paper describes the development of a numerical model that couples the thermal interaction between ice particles, water droplets, and the flowing gas of an icing wind tunnel for simulation of NASA Glenn Research Center's Propulsion Systems Laboratory (PSL). The ultimate goal of the model is to better understand the complex interactions between the test parameters and have greater confidence in the conditions at the test section of the PSL tunnel. The model attempts to explain the observed changes in test conditions by coupling the conservation of mass and energy equations for both the cloud particles and flowing gas mass. Model predictions were compared to measurements taken during May 2015 testing at PSL, where test conditions varied gas temperature, pressure, velocity and humidity levels, as well as the cloud total water content, particle initial temperature, and particle size distribution.
Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence
NASA Astrophysics Data System (ADS)
Lynn, Jacob William
We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also add artificial pitch-angle scattering to the test-particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-power-law tail.
NASA Astrophysics Data System (ADS)
Conny, Joseph M.; Ortiz-Montalvo, Diana L.
2017-09-01
We show the effect of composition heterogeneity and shape on the optical properties of urban dust particles based on the three-dimensional spatial and optical modeling of individual particles. Using scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDX) and focused ion beam (FIB) tomography, spatial models of particles collected in Los Angeles and Seattle accounted for surface features, inclusions, and voids, as well as overall composition and shape. Using voxel data from the spatial models and the discrete dipole approximation method, we report extinction efficiency, asymmetry parameter, and single-scattering albedo (SSA). Test models of the particles involved (1) the particle's actual morphology as a single homogeneous phase and (2) simple geometric shapes (spheres, cubes, and tetrahedra) depicting composition homogeneity or heterogeneity (with multiple spheres). Test models were compared with a reference model, which included the particle's actual morphology and heterogeneity based on SEM/EDX and FIB tomography. Results show particle shape to be a more important factor for determining extinction efficiency than accounting for individual phases in a particle, regardless of whether absorption or scattering dominated. In addition to homogeneous models with the particles' actual morphology, tetrahedral geometric models provided better extinction accuracy than spherical or cubic models. For iron-containing heterogeneous particles, the asymmetry parameter and SSA varied with the composition of the iron-containing phase, even if the phase was <10% of the particle volume. For particles containing loosely held phases with widely varying refractive indexes (i.e., exhibiting "severe" heterogeneity), only models that account for heterogeneity may sufficiently determine SSA.
Rong, Guan; Liu, Guang; Hou, Di; Zhou, Chuang-Bing
2013-01-01
Since rocks are aggregates of mineral particles, the effect of mineral microstructure on the macroscopic mechanical behavior of rocks cannot be neglected. Rock samples with four different particle shapes are established in this study based on the clumped particle model, and a sphericity index is used to quantify particle shape. Model parameters for simulation in PFC are obtained from a triaxial compression test of quartz sandstone, and simulations of the triaxial compression test are then conducted on the four rock samples with different particle shapes. The results show that stress thresholds of the rock samples, such as crack initiation stress, crack damage stress, and peak stress, decrease with increasing sphericity index. Increasing sphericity leads to a drop in elastic modulus and a rise in Poisson's ratio, while decreasing sphericity usually increases the cohesion and internal friction angle. Based on the volume change of the rock samples during the simulated triaxial compression test, the variation of dilation angle with plastic strain is also studied.
Numerical study of the current sheet and PSBL in a magnetotail model
NASA Technical Reports Server (NTRS)
Doxas, I.; Horton, W.; Sandusky, K.; Tajima, T.; Steinolfson, R.
1989-01-01
The current sheet and plasma sheet boundary layer (PSBL) in a magnetotail model are discussed. A test particle code is used to study the response of ensembles of particles to a two-dimensional, time-dependent model of the geomagnetic tail, and test the proposition (Coroniti, 1985a, b; Buchner and Zelenyi, 1986; Chen and Palmadesso, 1986; Martin, 1986) that the stochasticity of the particle orbits in these fields is an important part of the physical mechanism for magnetospheric substorms. The realistic results obtained for the fluid moments of the particle distribution with this simple model, and their insensitivity to initial conditions, are consistent with this hypothesis.
Asgharian, B; Price, O T; Oldham, M; Chen, Lung-Chi; Saunders, E L; Gordon, T; Mikheev, V B; Minard, K R; Teeguarden, J G
2014-12-01
Comparing effects of inhaled particles across rodent test systems and between rodent test systems and humans is a key obstacle to the interpretation of common toxicological test systems for human risk assessment. These comparisons, correlation with effects and prediction of effects, are best conducted using measures of tissue dose in the respiratory tract. Differences in lung geometry, physiology and the characteristics of ventilation can give rise to differences in the regional deposition of particles in the lung in these species. Differences in regional lung tissue doses cannot currently be measured experimentally. Regional lung tissue dosimetry can however be predicted using models developed for rats, monkeys, and humans. A computational model of particle respiratory tract deposition and clearance was developed for BALB/c and B6C3F1 mice, creating a cross-species suite of available models for particle dosimetry in the lung. Airflow and particle transport equations were solved throughout the respiratory tract of these mouse strains to obtain the temporal and spatial concentration of inhaled particles, from which deposition fractions were determined. Particle inhalability (inhalable fraction, IF) and upper respiratory tract (URT) deposition were directly related to particle diffusive and inertial properties. Measurements of the retained mass at several post-exposure times following exposure to iron oxide nanoparticles, micro- and nanoscale C60 fullerene, and nanoscale silver particles were used to calibrate and verify model predictions of total lung dose. Interstrain (mice) and interspecies (mouse, rat and human) differences in particle inhalability, fractional deposition and tissue dosimetry are described for ultrafine, fine and coarse particles.
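The diffusive and inertial particle properties the abstract refers to are commonly summarized by the slip-corrected Brownian diffusion coefficient and the Stokes number. The sketch below evaluates both; the airway diameter and air velocity are illustrative assumptions, not the mouse model's actual geometry.

```python
# Sketch of the transport parameters that drive regional deposition: Stokes
# number (inertial impaction) and Stokes-Einstein diffusion coefficient,
# both with the Cunningham slip correction. Conditions are assumed.
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                  # body temperature, K
mu = 1.9e-5                # air viscosity near 37 C, Pa*s
lam = 0.068e-6             # mean free path of air, m
rho_p = 1000.0             # unit-density sphere, kg/m^3

def cunningham(d):
    """Cunningham slip correction factor."""
    Kn = 2.0 * lam / d
    return 1.0 + Kn * (1.257 + 0.4 * np.exp(-1.1 / Kn))

def stokes_number(d, U, D_airway):
    """Stk = rho_p d^2 Cc U / (18 mu D_airway)."""
    return rho_p * d**2 * cunningham(d) * U / (18.0 * mu * D_airway)

def diffusion_coefficient(d):
    """Stokes-Einstein diffusion coefficient with slip correction."""
    return kB * T * cunningham(d) / (3.0 * np.pi * mu * d)

for d in [10e-9, 100e-9, 1e-6, 5e-6]:
    stk = stokes_number(d, U=1.0, D_airway=1e-3)   # assumed 1 m/s in a 1 mm airway
    print(f"d = {d*1e9:7.0f} nm  Stk = {stk:9.2e}  D = {diffusion_coefficient(d):8.2e} m^2/s")
```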
Injection Efficiency of Low-energy Particles at Oblique Shocks with a Focused Transport Model
NASA Astrophysics Data System (ADS)
Zuo, P.; Zhang, M.; Rassoul, H.
2013-12-01
There is strong evidence that a small portion of thermal and suprathermal particles from hot coronal material or remnants of previous solar energetic particle (SEP) events serve as the source of large SEP events (Desai et al. 2006). To build more powerful SEP models, it is necessary to model the detailed particle injection and acceleration process for source particles especially at lower energies. We present a test particle simulation on the injection and acceleration of low-energy suprathermal particles by Laminar nonrelativistic oblique shocks in the framework of the focused transport theory, which is proved to contain all necessary physics of shock acceleration, but avoid the limitation of diffusive shock acceleration (DSA). The injection efficiency as a function of Mach number, obliquity, injection speed, shock strength, cross-shock potential and the degree of turbulence is calculated. This test particle simulation proves that the focused transport theory is an extension of DSA theory with the capability of predicting the efficiency of particle injection. The results can be applied to modeling the SEP acceleration from source particles.
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
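For readers unfamiliar with the baseline method, the sketch below is a minimal bootstrap particle filter on a scalar nonlinear toy model. It deliberately omits the clustering, localization, and particle-adjustment steps that define the clustered particle filter described above, and the toy dynamics stand in for Lorenz 96.

```python
# Minimal bootstrap particle filter (illustrative only): forecast, weight by
# the observation likelihood, resample. Toy scalar dynamics, assumed noise levels.
import numpy as np

rng = np.random.default_rng(1)
N, steps = 500, 50
sig_model, sig_obs = 0.5, 0.5

def f(x):
    """Toy nonlinear dynamics."""
    return 0.9 * x + 2.0 * np.sin(x)

# synthetic truth and observations
x_true = np.zeros(steps)
for k in range(1, steps):
    x_true[k] = f(x_true[k - 1]) + sig_model * rng.normal()
y_obs = x_true + sig_obs * rng.normal(size=steps)

particles = rng.normal(0.0, 1.0, N)
estimates = []
for k in range(steps):
    # forecast step
    particles = f(particles) + sig_model * rng.normal(size=N)
    # weight by observation likelihood
    w = np.exp(-0.5 * ((y_obs[k] - particles) / sig_obs) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # multinomial resampling to avoid weight degeneracy
    particles = rng.choice(particles, size=N, p=w)

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print(f"posterior-mean RMSE: {rmse:.3f} (obs noise sigma = {sig_obs})")
```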
Bell Test experiments explained without entanglement
NASA Astrophysics Data System (ADS)
Boyd, Jeffrey
2011-04-01
John Bell proposed a test of what was called "local realism." However, that is a different view of reality than the one we hold. Bell incorrectly assumed the validity of wave-particle dualism. According to our model, waves are independent of particles; wave interference precedes the emission of a particle. This results in two conclusions. First, the proposed inequalities that apply to "local realism" in Bell's theorem do not apply to this model. The alleged mathematics of "local realism" is therefore wrong. Second, we can explain the Bell Test experimental results (such as the experiments done at Innsbruck) without any need for entanglement, non-locality, or particle superposition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xi, Jinxiang; Kim, JongWon; Si, Xiuhua A.
Rodents are routinely used in inhalation toxicology tests as human surrogates. However, in vitro dosimetry tests in rodent casts are still scarce because of the small size of rodent airways, and in vitro tests to quantify sub-regional dosimetry are still impractical. We hypothesized that for inertial particles whose deposition is dominated by airflow convection (Reynolds number) and particle inertia (Stokes number), the deposition should be similar among airway replicas of different scales if their Reynolds and Stokes numbers are kept the same. In this study, we aimed to (1) numerically test the hypothesis in three airway geometries: a USP induction port, a human nose model, and a Sprague-Dawley rat nose model, and (2) find the range of applicability of this hypothesis. Five variants of the USP and human nose models and three variants of the rat nose model were tested. Inhalation rates and particle sizes were scaled to match the Reynolds and Stokes numbers. A low-Reynolds-number k-ω model was used to resolve the airflow and a Lagrangian tracking algorithm was used to simulate the particle transport and deposition. Statistical analysis of predicted doses was conducted using ANOVA. For normal inhalation rates and particle diameters ranging from 0.5 to 24 μm, the deposition differences between the life-size and scaled models are insignificant for all airway geometries considered (i.e., human nose, USP, and rat nose). Furthermore, the deposition patterns and exit particle profiles also look similar among scaled models. However, deposition rates and patterns start to deviate if inhalation rates are too low, or particle sizes are too large. For the rat nose, the threshold velocity was found to be 0.71 m/s and the threshold Froude number to be 50. Results of this study provide a theoretical foundation for sub-regional in vitro dosimetry tests in small animals and for the interpretation of inter-species or intra-species data with varying body sizes.
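The similarity argument behind this hypothesis can be made explicit: for a replica whose lengths are all multiplied by a factor s, and with the same fluid, Re scales as Q/L and Stk scales as d^2 Q/L^3, so matching both requires Q_scaled = s*Q_life and d_scaled = s*d_life (slip correction neglected, which is one reason the hypothesis eventually breaks down). The sketch below applies this scaling; the example flow rate, particle size, and scale factor are assumptions.

```python
# Sketch of Reynolds/Stokes matching for a geometrically scaled airway replica.
# Same fluid assumed; slip correction neglected.
def scaled_conditions(scale, Q_life_Lpm, d_life_um):
    """Return (flow rate, particle diameter) for a replica scaled by 'scale'."""
    Q_scaled = scale * Q_life_Lpm      # Re ~ Q / L  ->  Q scales with length
    d_scaled = scale * d_life_um       # Stk ~ d^2 Q / L^3  ->  d scales with length
    return Q_scaled, d_scaled

# hypothetical example: 5x enlarged rat-nose replica, life-size 0.5 L/min and 5 um particles
Q, d = scaled_conditions(scale=5.0, Q_life_Lpm=0.5, d_life_um=5.0)
print(f"scaled model: run at {Q:.1f} L/min with {d:.0f} um particles")
```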
Rengasamy, Samy; Miller, Adam; Eimer, Benjamin C
2011-01-01
N95 particulate filtering facepiece respirators are certified by measuring penetration levels photometrically with a presumed severe case test method using charge neutralized NaCl aerosols at 85 L/min. However, penetration values obtained by photometric methods have not been compared with count-based methods using contemporary respirators composed of electrostatic filter media and challenged with both generated and ambient aerosols. To better understand the effects of key test parameters (e.g., particle charge, detection method), initial penetration levels for five N95 model filtering facepiece respirators were measured using NaCl aerosols with the aerosol challenge and test equipment employed in the NIOSH respirator certification method (photometric) and compared with an ultrafine condensation particle counter method (count based) for the same NaCl aerosols as well as for ambient room air particles. Penetrations using the NIOSH test method were several-fold less than the penetrations obtained by the ultrafine condensation particle counter for NaCl aerosols as well as for room particles indicating that penetration measurement based on particle counting offers a more difficult challenge than the photometric method, which lacks sensitivity for particles < 100 nm. All five N95 models showed the most penetrating particle size around 50 nm for room air particles with or without charge neutralization, and at 200 nm for singly charged NaCl monodisperse particles. Room air with fewer charged particles and an overwhelming number of neutral particles contributed to the most penetrating particle size in the 50 nm range, indicating that the charge state for the majority of test particles determines the MPPS. Data suggest that the NIOSH respirator certification protocol employing the photometric method may not be a more challenging aerosol test method. Filter penetrations can vary among workplaces with different particle size distributions, which suggests the need for the development of new or revised "more challenging" aerosol test methods for NIOSH certification of respirators.
On-road heavy-duty diesel particulate matter emissions modeled using chassis dynamometer data.
Kear, Tom; Niemeier, D A
2006-12-15
This study presents a model, derived from chassis dynamometer test data, for factors (operational correction factors, or OCFs) that correct (g/mi) heavy-duty diesel particle emission rates measured on standard test cycles for real-world conditions. Using a random effects mixed regression model with data from 531 tests of 34 heavy-duty vehicles from the Coordinating Research Council's E55/E59 research project, we specify a model with covariates that characterize high power transient driving, time spent idling, and average speed. Gram per mile particle emissions rates were negatively correlated with high power transient driving, average speed, and time idling. The new model is capable of predicting relative changes in g/mi on-road heavy-duty diesel particle emission rates for real-world driving conditions that are not reflected in the driving cycles used to test heavy-duty vehicles.
Reponen, Tiina; Lee, Shu-An; Grinshpun, Sergey A; Johnson, Erik; McKay, Roy
2011-04-01
This study investigated particle-size-selective protection factors (PFs) of four models of N95 filtering facepiece respirators (FFRs) that passed and failed fit testing. Particle size ranges were representative of individual viruses and bacteria (aerodynamic diameter d(a) = 0.04-1.3 μm). Standard respirator fit testing was followed by particle-size-selective measurement of PFs while subjects wore N95 FFRs in a test chamber. PF values obtained for all subjects were then compared to those obtained for the subjects who passed the fit testing. Overall fit test passing rate for all four models of FFRs was 67%. Of these, 29% had PFs <10 (the Occupational Safety and Health Administration Assigned Protection Factor designated for this type of respirator). When only subjects that passed fit testing were included, PFs improved with 9% having values <10. On average, the PFs were 1.4 times (29.5/21.5) higher when only data for those who passed fit testing were included. The minimum PFs were consistently observed in the particle size range of 0.08-0.2 μm. Overall PFs increased when subjects passed fit testing. The results support the value of fit testing but also show for the first time that PFs are dependent on particle size regardless of fit testing status.
Modeling a failure criterion for U-Mo/Al dispersion fuel
NASA Astrophysics Data System (ADS)
Oh, Jae-Yong; Kim, Yeon Soo; Tahk, Young-Wook; Kim, Hyun-Jung; Kong, Eui-Hyun; Yim, Jeong-Sik
2016-05-01
The breakaway swelling in U-Mo/Al dispersion fuel is known to be caused by large pore formation enhanced by interaction layer (IL) growth between fuel particles and Al matrix. In this study, a critical IL thickness was defined as a criterion for the formation of a large pore in U-Mo/Al dispersion fuel. Specifically, the critical IL thickness is given when two neighboring fuel particles come into contact with each other in the developed IL. The model was verified using the irradiation data from the RERTR tests and KOMO-4 test. The model application to full-sized sample irradiations such as IRISs, FUTURE, E-FUTURE, and AFIP-1 tests resulted in conservative predictions. The parametric study revealed that the fuel particle size and the homogeneity of the fuel particle distribution are influential for fuel performance.
Model of Image Artifacts from Dust Particles
NASA Technical Reports Server (NTRS)
Willson, Reg
2008-01-01
A mathematical model of image artifacts produced by dust particles on lenses has been derived. Machine-vision systems often have to work with camera lenses that become dusty during use. Dust particles on the front surface of a lens produce image artifacts that can potentially affect the performance of a machine-vision algorithm. The present model satisfies a need for a means of synthesizing dust image artifacts for testing machine-vision algorithms for robustness (or the lack thereof) in the presence of dust on lenses. A dust particle can absorb light or scatter light out of some pixels, thereby giving rise to a dark dust artifact. It can also scatter light into other pixels, thereby giving rise to a bright dust artifact. For the sake of simplicity, this model deals only with dark dust artifacts. The model effectively represents dark dust artifacts as an attenuation image consisting of an array of diffuse darkened spots centered at image locations corresponding to the locations of dust particles. The dust artifacts are computationally incorporated into a given test image by simply multiplying the brightness value of each pixel by a transmission factor that incorporates the factor of attenuation, by dust particles, of the light incident on that pixel. With respect to computation of the attenuation and transmission factors, the model is based on a first-order geometric (ray)-optics treatment of the shadows cast by dust particles on the image detector. In this model, the light collected by a pixel is deemed to be confined to a pair of cones defined by the location of the pixel's image in object space, the entrance pupil of the lens, and the location of the pixel in the image plane (see Figure 1). For simplicity, it is assumed that the size of a dust particle is somewhat less than the diameter, at the front surface of the lens, of any collection cone containing all or part of that dust particle. Under this assumption, the shape of any individual dust particle artifact is the shape (typically, circular) of the aperture, and the contribution of the particle to the attenuation factor for a given pixel is the fraction of the cross-sectional area of the collection cone occupied by the particle. Assuming that dust particles do not overlap, the net transmission factor for a given pixel is calculated as one minus the sum of attenuation factors contributed by all dust particles affecting that pixel. In a test, the model was used to synthesize attenuation images for random distributions of dust particles on the front surface of a lens at various relative aperture (F-number) settings. As shown in Figure 2, the attenuation images resembled dust artifacts in real test images recorded while the lens was aimed at a white target.
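A minimal sketch of this artifact-synthesis idea is given below: build an attenuation image of diffuse circular dark spots at assumed dust locations, convert it to a per-pixel transmission factor (one minus the summed attenuation), and multiply it into a test image. The Gaussian spot profile and all parameter values are illustrative assumptions rather than the published model's exact geometric-optics computation.

```python
# Sketch: synthesize dark dust artifacts as a transmission image and apply it
# to a test image. Spot profile and dust parameters are assumed.
import numpy as np

def dust_transmission(shape, dust, spot_sigma_px):
    """Transmission image: 1 minus the summed attenuation of all dust spots."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    attenuation = np.zeros(shape)
    for (cy, cx, strength) in dust:          # strength ~ fraction of cone blocked
        r2 = (yy - cy) ** 2 + (xx - cx) ** 2
        attenuation += strength * np.exp(-r2 / (2.0 * spot_sigma_px**2))
    return np.clip(1.0 - attenuation, 0.0, 1.0)

# hypothetical test image and three dust particles (row, col, attenuation strength)
rng = np.random.default_rng(2)
image = rng.uniform(100, 200, size=(240, 320))
dust = [(60, 80, 0.4), (120, 200, 0.25), (200, 150, 0.6)]

T = dust_transmission(image.shape, dust, spot_sigma_px=8.0)
dusty_image = image * T                      # artifact-corrupted test image
print("minimum transmission factor:", T.min().round(2))
```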
Rengasamy, Samy; Eimer, Benjamin C
2012-01-01
National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm size. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. Results suggest that to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number of particles <100 nm and a count (particle number)-based detector.
NASA Astrophysics Data System (ADS)
Frederickson, Lee Thomas
Much of combustion research focuses on reducing soot particulates in emissions. However, current research at San Diego State University (SDSU) Combustion and Solar Energy Laboratory (CSEL) is underway to develop a high temperature solar receiver which will utilize carbon nanoparticles as a solar absorption medium. To produce carbon nanoparticles for the small particle heat exchange receiver (SPHER), a lab-scale carbon particle generator (CPG) has been built and tested. The CPG is a heated ceramic tube reactor with a set point wall temperature of 1100-1300°C operating at 5-6 bar pressure. Natural gas and nitrogen are fed to the CPG where natural gas undergoes pyrolysis resulting in carbon particles. The gas-particle mixture is met downstream with dilution air and sent to the lab scale solar receiver. To predict soot yield and general trends in CPG performance, a model has been setup in Reaction Design CHEMKIN-PRO software. One of the primary goals of this research is to accurately measure particle properties. Mean particle diameter, size distribution, and index of refraction are calculated using Scanning Electron Microscopy (SEM) and a Diesel Particulate Scatterometer (DPS). Filter samples taken during experimentation are analyzed to obtain a particle size distribution with SEM images processed in ImageJ software. These results are compared with the DPS, which calculates the particle size distribution and the index of refraction from light scattering using Mie theory. For testing with the lab scale receiver, a particle diameter range of 200-500 nm is desired. Test conditions are varied to understand effects of operating parameters on particle size and the ability to obtain the size range. Analysis of particle loading is the other important metric for this research. Particle loading is measured downstream of the CPG outlet and dilution air mixing point. The air-particle mixture flows through an extinction tube where opacity of the mixture is measured with a 532 nm laser and detector. Beer's law is then used to calculate particle loading. The CPG needs to produce a certain particle loading for a corresponding receiver test. By obtaining the particle loading in the system, the reaction conversion to solid carbon in the CPG can be calculated to measure the efficiency of the CPG. To predict trends in reaction conversion and particle size from experimentation, the CHEMKIN-PRO computer model for the CPG is run for various flow rates and wall temperature profiles. These predictions were a reason for testing at higher wall set point temperatures. Based on these research goals, it was shown that the CPG consistently produces a mean particle diameter of 200-400 nm at the conditions tested, fitting perfectly inside the desired range. This led to successful lab scale SPHER testing which produced a 10-point efficiency increase and 150°C temperature difference with particles present. Also, at 3 g/s dilution air flow rate, an efficiency of 80% at an outlet temperature above 800°C was obtained. Promise was shown at higher CPG experimental temperatures to produce higher reaction conversion, both experimentally and in the model. However, based on wall temperature data taken during experimentation, it is apparent that the CPG needs to have multiple heating zones with separate temperature controllers in order to have an isothermal zone rather than a parabolic temperature profile. As for the computer model, it predicted much higher reaction conversion at higher temperature. 
The mass fraction of fuel in the inlet stream was shown to not affect conversion while increasing residence time led to increasing conversion. Particle size distribution in the model was far off and showed a bimodal distribution for one of the statistical methods. Using the results from experimentation and modeling, a preliminary CPG design is presented that will operate in a 5MWth receiver system.
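The Beer's-law step mentioned above amounts to inverting a measured laser transmission for a particle mass loading. The sketch below shows that inversion; the mass-specific extinction coefficient for soot at 532 nm and the tube length are assumed illustrative values, not measured properties of the CPG particles.

```python
# Sketch of the Beer-Lambert inversion: transmission -> particle mass loading.
import numpy as np

def mass_loading_from_transmission(I_over_I0, path_m, k_ext_m2_per_g):
    """Beer-Lambert: I/I0 = exp(-k_ext * C * L)  ->  C = -ln(I/I0) / (k_ext * L)."""
    return -np.log(I_over_I0) / (k_ext_m2_per_g * path_m)

# example: 70% transmission over an assumed 0.5 m tube, assumed k_ext = 7 m^2/g
C = mass_loading_from_transmission(0.70, path_m=0.5, k_ext_m2_per_g=7.0)
print(f"particle mass loading ~ {C*1000:.0f} mg/m^3")
```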
NASA Astrophysics Data System (ADS)
Faroughi, S. A.; Huber, C.
2015-12-01
Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets, and suspensions of spherical solid particles in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle to account for both return flow and Smoluchowski effects, and one on the mixture rheology to account for nonlocal interactions between particles. The model is then extended to mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particle sizes. Our model is tested against new and published experimental data over a wide range of particle volume fractions and viscosity ratios between the constituents of dispersions. We find an excellent agreement between our model and experiments. We also show two significant applications for our model: (1) We demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers. (2) We provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils. Our model offers a greatly improved agreement with the results obtained with direct measurement methods such as laser diffraction.
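To illustrate the kind of correction involved, the sketch below multiplies the Stokes terminal velocity of a crystal in melt by a generic Richardson-Zaki hindrance factor (1 - phi)^n. This is a textbook stand-in, not the authors' three-part correction, and the crystal/melt property values are illustrative assumptions.

```python
# Generic hindered-settling sketch (stand-in, not the authors' model):
# Stokes terminal velocity times a Richardson-Zaki factor.
import numpy as np

def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal velocity of an isolated sphere in creeping flow."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

def hindered_velocity(d, phi, rho_p, rho_f, mu, n=5.0):
    """Richardson-Zaki correction for particle volume fraction phi."""
    return stokes_velocity(d, rho_p, rho_f, mu) * (1.0 - phi) ** n

# assumed example: 1 mm olivine-like crystal (3300 kg/m^3) in basaltic melt (2700 kg/m^3, 100 Pa*s)
for phi in [0.0, 0.1, 0.3, 0.5]:
    v = hindered_velocity(1e-3, phi, rho_p=3300.0, rho_f=2700.0, mu=100.0)
    print(f"phi = {phi:.1f}:  settling velocity = {v * 3.156e7:.1f} m/yr")
```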
Bayesian approach to MSD-based analysis of particle motion in live cells.
Monnier, Nilah; Guo, Syuan-Ming; Mori, Masashi; He, Jun; Lénárt, Péter; Bathe, Mark
2012-08-08
Quantitative tracking of particle motion using live-cell imaging is a powerful approach to understanding the mechanism of transport of biological molecules, organelles, and cells. However, inferring complex stochastic motion models from single-particle trajectories in an objective manner is nontrivial due to noise from sampling limitations and biological heterogeneity. Here, we present a systematic Bayesian approach to multiple-hypothesis testing of a general set of competing motion models based on particle mean-square displacements that automatically classifies particle motion, properly accounting for sampling limitations and correlated noise while appropriately penalizing model complexity according to Occam's Razor to avoid over-fitting. We test the procedure rigorously using simulated trajectories for which the underlying physical process is known, demonstrating that it chooses the simplest physical model that explains the observed data. Further, we show that computed model probabilities provide a reliability test for the downstream biological interpretation of associated parameter values. We subsequently illustrate the broad utility of the approach by applying it to disparate biological systems including experimental particle trajectories from chromosomes, kinetochores, and membrane receptors undergoing a variety of complex motions. This automated and objective Bayesian framework easily scales to large numbers of particle trajectories, making it ideal for classifying the complex motion of large numbers of single molecules and cells from high-throughput screens, as well as single-cell-, tissue-, and organism-level studies.
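A greatly simplified sketch of MSD-based model comparison is given below: compute the time-averaged MSD of a simulated trajectory and compare a pure-diffusion fit against a diffusion-plus-flow fit using BIC. This is not the paper's full Bayesian treatment (it ignores correlations between MSD points and substitutes BIC for proper model evidence); trajectory parameters are assumed.

```python
# Simplified MSD model-comparison sketch (not the paper's full Bayesian method).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
dt, n = 0.1, 400
D_true, v_true = 0.05, 0.2                       # um^2/s, um/s (assumed)
steps = rng.normal(0, np.sqrt(2 * D_true * dt), (n, 2)) + v_true * dt * np.array([1.0, 0.0])
traj = np.cumsum(steps, axis=0)                  # simulated 2-D trajectory

def msd(traj, max_lag):
    """Time-averaged mean-square displacement for lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

lags = np.arange(1, 41) * dt
m = msd(traj, 40)

def model_D(t, D):      return 4 * D * t                 # pure diffusion (2-D)
def model_DV(t, D, v):  return 4 * D * t + (v * t) ** 2  # diffusion + directed motion

def bic(model, p0):
    popt, _ = curve_fit(model, lags, m, p0=p0)
    rss = np.sum((m - model(lags, *popt)) ** 2)
    return len(m) * np.log(rss / len(m)) + len(popt) * np.log(len(m)), popt

bic_D, pD = bic(model_D, [0.1])
bic_DV, pDV = bic(model_DV, [0.1, 0.1])
print(f"BIC diffusion-only: {bic_D:.1f},  diffusion+flow: {bic_DV:.1f}")
print(f"fitted D = {pDV[0]:.3f} um^2/s, v = {pDV[1]:.3f} um/s")
```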
Environmental tobacco smoke particles in multizone indoor environments
NASA Astrophysics Data System (ADS)
Miller, S. L.; Nazaroff, W. W.
Environmental tobacco smoke (ETS) is a major source of human exposure to airborne particles. To better understand the factors that affect exposure, and to investigate the potential effectiveness of technical control measures, a series of experiments was conducted in a two-room test facility. Particle concentrations, size distributions, and airflow rates were measured during and after combustion of a cigarette. Experiments were varied to obtain information about the effects on exposure of smoker segregation, ventilation modification, and air filtration. The experimental data were used to test the performance of an analytical model of the two-zone environment and a numerical multizone aerosol dynamics model. A respiratory tract particle deposition model was also applied to the results to estimate the mass of ETS particles that would be deposited in the lungs of a nonsmoker exposed in either the smoking or nonsmoking room. Comparisons between the experimental data and model predictions showed good agreement. For time-averaged particle mass concentration, the average bias between model and experiments was less than 10%. The average absolute error was typically 35%, probably because of variability in particle emission rates from cigarettes. For the conditions tested, the use of a portable air filtration unit yielded 65-90% reductions in predicted lung deposition relative to the baseline scenario. The use of exhaust ventilation in the smoking room reduced predicted lung deposition in the nonsmoking room by more than 80%, as did segregating the smoker from nonsmokers with a closed door.
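The analytical two-zone treatment described above rests on coupled well-mixed mass balances. The sketch below integrates such a balance with emission in the smoking room, interzonal airflow, ventilation, deposition, and an optional in-room filter; all volumes, flow rates, and the emission factor are illustrative assumptions, not the test-facility values.

```python
# Two-zone well-mixed particle mass balance sketch (zone 1 = smoking room).
# All parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

V1, V2 = 30.0, 30.0        # zone volumes, m^3
Q12 = Q21 = 50.0 / 3600    # interzonal airflow, m^3/s
Qv = 30.0 / 3600           # outdoor-air ventilation per zone, m^3/s
k_dep = 0.1 / 3600         # deposition loss rate, 1/s
Q_filter = 150.0 / 3600    # portable filter flow in zone 1 (set to 0 to remove), m^3/s
E = 10e-3 / 600            # assumed emission: 10 mg of particles over a 10-minute cigarette, g/s

def rhs(t, c):
    c1, c2 = c
    emit = E if t < 600 else 0.0                  # cigarette burns for 10 min
    dc1 = (emit + Q21 * c2 - (Q12 + Qv + Q_filter) * c1) / V1 - k_dep * c1
    dc2 = (Q12 * c1 - (Q21 + Qv) * c2) / V2 - k_dep * c2
    return [dc1, dc2]

sol = solve_ivp(rhs, (0, 4 * 3600), [0.0, 0.0], max_step=30.0)
print(f"peak concentrations: zone1 = {1e6*sol.y[0].max():.0f} ug/m^3, "
      f"zone2 = {1e6*sol.y[1].max():.0f} ug/m^3")
```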
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Dennis G.; Smith, Jordan N.; Thrall, Brian D.
The development of particokinetic models describing the delivery of insoluble or poorly soluble nanoparticles to cells in liquid cell culture systems has improved the basis for dose-response analysis, hazard ranking from high-throughput systems, and now allows for translation of exposures across in vitro and in vivo test systems. Complementary particokinetic models that address processes controlling delivery of both particles and released ions to cells, and the influence of particle size changes from dissolution on particle delivery for cell-culture systems, would help advance our understanding of the role of particle and ion dosimetry in cellular toxicology. We developed ISD3, an extension of our previously published model for insoluble particles, by deriving a specific formulation of the Population Balance Equation for soluble particles. ISD3 describes the time, concentration and particle size dependent dissolution of particles, their delivery to cells, and the delivery and uptake of ions to cells in in vitro liquid test systems. The model is modular, and can be adapted by application of any empirical model of dissolution, alternative approaches to calculating sedimentation rates, and cellular uptake or treatment of boundary conditions. We apply the model to calculate the particle and ion dosimetry of nanosilver and silver ions in vitro after calibration of two empirical models, one for particle dissolution and one for ion uptake. The results demonstrate the utility and accuracy of the ISD3 framework for dosimetry in these systems. Total media ion concentration, particle concentration and total cell-associated silver time-courses were well described by the model, across two concentrations of 20 and 110 nm particles. ISD3 was calibrated to dissolution data for 20 nm particles as a function of serum protein concentration, but successfully described the media and cell dosimetry time-course for both particles at all concentrations and time points. We also report the finding that protein content in media has effects both on the initial rate of dissolution and the resulting near-steady state ion concentration in solution.
Diffusion and mobility of atomic particles in a liquid
NASA Astrophysics Data System (ADS)
Smirnov, B. M.; Son, E. E.; Tereshonok, D. V.
2017-11-01
The diffusion coefficient of a test atom or molecule in a liquid is determined for the mechanism where the displacement of the test molecule results from the vibrations and motion of the liquid molecules surrounding the test molecule and of the test particle itself. This leads to a random change in the coordinate of the test molecule, which eventually results in the diffusive motion of the test particle in space. Two model parameters for the interaction of a particle with a liquid are used to find the activation energy of the diffusion process under consideration: the gas-kinetic cross section for scattering of test molecules in the parent gas and the Wigner-Seitz radius for test molecules. In the context of this approach, we have calculated the diffusion coefficient of atoms and molecules in water, where, based on experimental data, we have constructed the dependence of the activation energy for the diffusion of test molecules in water on the interaction parameter and the temperature dependence of the diffusion coefficient of atoms or molecules in water within the models considered. The statistically averaged difference between the activation energies for the diffusion coefficients of different test molecules in water, calculated with each of the presented models, does not exceed 10% of the diffusion coefficient itself. We have considered the diffusion of clusters in water and present the dependence of the diffusion coefficient on the cluster size. The accuracy of the presented formulas for the diffusion coefficient of atomic particles in water is estimated to be 50%.
Collision Models for Particle Orbit Code on SSX
NASA Astrophysics Data System (ADS)
Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.
2011-10-01
Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.
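As a much simpler relative of these collision operators, the sketch below applies Monte Carlo pitch-angle scattering to an ensemble of test-particle velocity directions and checks that the directional memory decays as exp(-nu*t). This is a reduced Lorentz-type scattering model, not the Takizuka-Abe binary-pairing scheme or the field-based fluctuation model described above; the collision frequency and time step are assumed.

```python
# Monte Carlo pitch-angle scattering sketch: rotate each velocity by a small
# random angle per step; <cos theta> should decay as exp(-nu*t). Values assumed.
import numpy as np

rng = np.random.default_rng(4)
N, nu, dt, steps = 5000, 1.0e3, 1.0e-5, 400      # particles, collision freq (1/s), step (s)

v = np.tile([1.0, 0.0, 0.0], (N, 1))             # all particles initially along +x

def scatter(v):
    """Rotate each unit velocity by a small random angle about a random perpendicular axis."""
    theta = np.sqrt(2.0 * nu * dt) * rng.normal(size=len(v))   # RMS angle per step
    phi = rng.uniform(0, 2 * np.pi, len(v))
    # build unit vectors perpendicular to v
    ref = np.where(np.abs(v[:, [0]]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
    e1 = np.cross(v, ref)
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 = np.cross(v, e1)
    new = (np.cos(theta)[:, None] * v
           + np.sin(theta)[:, None] * (np.cos(phi)[:, None] * e1 + np.sin(phi)[:, None] * e2))
    return new / np.linalg.norm(new, axis=1, keepdims=True)

for _ in range(steps):
    v = scatter(v)
mean_cos = v[:, 0].mean()
print(f"<cos theta> after {steps*dt*1e3:.0f} ms: {mean_cos:.3f} "
      f"(theory exp(-nu t) = {np.exp(-nu*steps*dt):.3f})")
```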
Nonlinear data assimilation using synchronization in a particle filter
NASA Astrophysics Data System (ADS)
Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan
2017-04-01
Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is a particle filter, which provides a representation of the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. The performance can be improved by exploiting the freedom in the proposal density. A potential choice of proposal density might come from synchronization theory, in which one tries to synchronize the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps the growth of instabilities on the synchronization manifold. When only part of the system is observed, synchronization can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar, is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed using the Lorenz96 model for 20-, 100- and 1000-dimensional systems. Results show state-averaged synchronization errors smaller than observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states to the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles, avoiding filter degeneracy by construction. Promising results will be shown on low- and high-dimensional Lorenz96 models, and the pros and cons of these new ideas will be discussed.
NASA Astrophysics Data System (ADS)
Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.
2007-01-01
Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability and comparisons between results from the tests. One test system which gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter tester (updated and commercially available now as the TSI 3160) manufactured by the TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown with a discussion of test variables and the need for proper SMPS volume purging and filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in the size range.
Thomas, Dennis G; Smith, Jordan N; Thrall, Brian D; Baer, Donald R; Jolley, Hadley; Munusamy, Prabhakaran; Kodali, Vamsi; Demokritou, Philip; Cohen, Joel; Teeguarden, Justin G
2018-01-25
The development of particokinetic models describing the delivery of insoluble or poorly soluble nanoparticles to cells in liquid cell culture systems has improved the basis for dose-response analysis, hazard ranking from high-throughput systems, and now allows for translation of exposures across in vitro and in vivo test systems. Complementary particokinetic models that address processes controlling delivery of both particles and released ions to cells, and the influence of particle size changes from dissolution on particle delivery for cell-culture systems, would help advance our understanding of the role of particle and ion dosimetry in cellular toxicology. We developed ISD3, an extension of our previously published model for insoluble particles, by deriving a specific formulation of the Population Balance Equation for soluble particles. ISD3 describes the time, concentration and particle size dependent dissolution of particles, their delivery to cells, and the delivery and uptake of ions to cells in in vitro liquid test systems. We applied the model to calculate the particle and ion dosimetry of nanosilver and silver ions in vitro after calibration of two empirical models, one for particle dissolution and one for ion uptake. Total media ion concentration, particle concentration and total cell-associated silver time-courses were well described by the model, across two concentrations of 20 and 110 nm particles. ISD3 was calibrated to dissolution data for 20 nm particles as a function of serum protein concentration, but successfully described the media and cell dosimetry time-course for both particles at all concentrations and time points. We also report the finding that protein content in media affects the initial rate of dissolution and the resulting near-steady state ion concentration in solution for the systems we have studied. By combining experiments and modeling, we were able to quantify the influence of proteins on silver particle solubility, determine the relative amounts of silver ions and particles in exposed cells, and demonstrate the influence of particle size changes resulting from dissolution on particle delivery to cells in culture. ISD3 is modular and can be adapted to new applications by replacing descriptions of dissolution, sedimentation and boundary conditions with those appropriate for particles other than silver.
Microparticle Separation by Cyclonic Separation
NASA Astrophysics Data System (ADS)
Karback, Keegan; Leith, Alexander
2017-11-01
The ability to separate particles based on their size has wide-ranging applications, from the industrial to the medical. Currently, cyclonic separators are primarily used in agriculture and manufacturing to siphon out contaminants or products from an air supply. This has led us to believe that cyclonic separation has applications beyond the agricultural and industrial. Using the OpenFOAM computational package, we were able to determine the flow parameters of a vortex in a cyclonic separator in order to segregate dust particles to a cutoff size of tens of nanometers. To test the model, we constructed an experiment to separate a test dust of variously sized particles. We filled a chamber with Arizona test dust and utilized an acoustic suspension technique to segregate particles finer than a coarse cutoff size and introduce them into the cyclonic separation apparatus, where they were further separated via a vortex following our computational model. The size of the particles separated from this experiment will be used to further refine our model. Metropolitan State University of Denver, Colorado University of Denver, Dr. Randall Tagg, Dr. Richard Krantz.
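For context on how cyclone cutoff size scales with geometry and flow, the sketch below evaluates the classical Lapple cut diameter (the particle size collected with 50% efficiency). It is only a textbook estimate, not the OpenFOAM model used above, and the cyclone dimensions and dust density are illustrative assumptions.

```python
# Textbook Lapple cut-diameter sketch for a reverse-flow cyclone (assumed dimensions).
import numpy as np

def lapple_d50(mu, W, N_e, V_i, rho_p, rho_g=1.2):
    """Particle diameter collected with 50% efficiency (m)."""
    return np.sqrt(9.0 * mu * W / (2.0 * np.pi * N_e * V_i * (rho_p - rho_g)))

d50 = lapple_d50(mu=1.8e-5,      # air viscosity, Pa*s
                 W=0.02,         # inlet width, m (assumed)
                 N_e=6.0,        # effective number of turns (assumed)
                 V_i=20.0,       # inlet velocity, m/s (assumed)
                 rho_p=2650.0)   # quartz-like test dust, kg/m^3
print(f"Lapple cut diameter d50 ~ {d50*1e6:.1f} um")
```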
Effect of Particle Size Distribution on Wall Heat Flux in Pulverized-Coal Furnaces and Boilers
NASA Astrophysics Data System (ADS)
Lu, Jun
A mathematical model of combustion and heat transfer within a cylindrical enclosure firing pulverized coal has been developed and tested against two sets of measured data (the 1993 WSU/DECO pilot test data and the International Flame Research Foundation 1964 test (Beer, 1964)) and one independent code, FURN3D, from the Argonne National Laboratory (Ahluwalia and Im, 1992). The model, called PILC, treats the system as a sequence of many well-stirred reactors. A char burnout model combining diffusion to the particle surface, pore diffusion, and surface reaction is employed for predicting the char reaction, heat release, and char evolution. The ash formation model relates the ash particle size distribution to the particle size distribution of the pulverized coal. The optical constants of char and ash particles are calculated from dispersion relations derived from reflectivity, transmissivity and extinction measurements, and Mie theory is applied to determine the extinction and scattering coefficients. Radiation heat transfer is modeled using the virtual zone method, which leads to a set of simultaneous nonlinear algebraic equations for the temperature field within the furnace and on its walls, from which the heat fluxes are evaluated. In comparisons with the experimental data and the independent code, the model is successful in predicting gas temperature, wall temperature, and wall radiative flux. When coal of greater fineness is burned, the particle size of the pulverized coal has a consistent influence on combustion performance: the temperature peak is higher and nearer to the burner, the radiative flux to the combustor wall increases, and the absorption and scattering coefficients of the combustion products increase. The effect of the coal particle size distribution on the absorption and scattering coefficients and on wall heat flux is therefore significant, but there is only a small effect on gas temperature and the fraction of fuel burned; it is speculated that this may be a characteristic specific to the test combustor used.
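The diffusion-plus-kinetics char burnout approach mentioned above is commonly written as resistances in series; a minimal sketch follows. The constants and the diffusion prefactor are placeholder assumptions rather than PILC's fitted values, and pore diffusion is lumped into the kinetic term here.

```python
import numpy as np

R_GAS = 8.314   # J/(mol K)

def char_burn_rate(d_p, t_p, t_gas, p_o2,
                   c_diff=5.0e-12, a_kin=0.002, e_kin=8.0e4):
    """Per-particle char consumption rate (kg/s) from external film diffusion
    and surface kinetics acting as resistances in series. The constants
    c_diff, a_kin and e_kin are assumed placeholders."""
    t_m = 0.5 * (t_p + t_gas)
    k_diff = c_diff * t_m**0.75 / d_p               # film diffusion, kg/(m^2 s Pa)
    k_kin = a_kin * np.exp(-e_kin / (R_GAS * t_p))  # surface kinetics, kg/(m^2 s Pa)
    k_eff = 1.0 / (1.0 / k_diff + 1.0 / k_kin)      # series resistances
    return np.pi * d_p**2 * k_eff * p_o2            # area * k_eff * O2 partial pressure

# 100-micron char particle at 1700 K in 1600 K gas with 10 kPa O2 (illustrative)
rate = char_burn_rate(100e-6, 1700.0, 1600.0, 1.0e4)
```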
NASA Astrophysics Data System (ADS)
Qin, Pin-pin; Chen, Chui-ce; Pei, Shi-kang; Li, Xin
2017-06-01
The stopping distance of a runaway vehicle is determined by its entry speed, the design of the aggregate-filled arrester bed, and the longitudinal grade of the escape ramp. Although numerous previous studies have examined the influence of speed and grade on stopping distance, few have taken aggregate properties into account. This paper first analyzes the interactions between the tire and the aggregate, abstracting the tire as a single large particle unit and the aggregate as a combination unit consisting of many aggregate particles. It then proposes treating this interaction as a kind of particle flow, uses particle properties to describe the tire unit and the aggregate unit, and sets out simplified steps for modeling with the two-dimensional particle flow code (PFC2D). A PFC2D micro-simulation model of the tire-aggregate interaction is thus obtained. The particle-property parameters are calibrated against three groups of numerical tests, and the calibrated model is verified against data from eight full-scale arrester bed tests to demonstrate its feasibility and accuracy. This model provides escape ramp designers with a feasible simulation method that not only predicts the stopping distance but also accounts for the aggregate properties.
The MESSy Aerosol Submodel MADE3 (v2.0b): Description and a Box Model Test
NASA Technical Reports Server (NTRS)
Kaiser, J. C.; Hendricks, J.; Righi, M.; Riemer, N.; Zaveri, R. A.; Metzger, S.; Aquila, Valentina
2014-01-01
We introduce MADE3 (Modal Aerosol Dynamics model for Europe, adapted for global applications, 3rd generation), an aerosol dynamics submodel for application within the MESSy framework (Modular Earth Submodel System). MADE3 builds on the predecessor aerosol submodels MADE and MADE-in. Its main new features are the explicit representation of coarse particle interactions, both with other particles and with condensable gases, and the inclusion of hydrochloric acid (HCl) / chloride (Cl) partitioning between the gas and condensed phases. The aerosol size distribution is represented in the new submodel as a superposition of nine lognormal modes: one for fully soluble particles, one for insoluble particles, and one for mixed particles in each of three size ranges (Aitken, accumulation, and coarse mode size ranges). In order to assess the performance of MADE3 we compare it to its predecessor MADE and to the much more detailed particle-resolved aerosol model PartMC-MOSAIC in a box model simulation of an idealized marine boundary layer test case. MADE3 and MADE results are very similar, except in the coarse mode, where the aerosol is dominated by sea spray particles. Cl is reduced in MADE3 with respect to MADE due to the HCl/Cl partitioning, which leads to Cl removal from the sea spray aerosol in our test case. Additionally, the aerosol nitrate concentration is higher in MADE3 due to the condensation of nitric acid on coarse particles. MADE3 and PartMC-MOSAIC show substantial differences in the fine particle size distributions (sizes below about 2 micrometers) that could be relevant when simulating climate effects on a global scale. Nevertheless, the agreement between MADE3 and PartMC-MOSAIC is very good for the coarse particle size distribution and also in terms of aerosol composition. Considering these results and the well-established ability of MADE to reproduce observed aerosol loadings and composition, MADE3 appears suitable for application within a global model.
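As a quick reference for the nine-mode representation described above, a superposition of lognormal modes can be evaluated as in the sketch below; the mode numbers, median diameters and geometric standard deviations are made-up illustrative values, not MADE3 settings.

```python
import numpy as np

def dNdlnD(D, modes):
    """Number size distribution as a superposition of lognormal modes.
    `modes` is a list of (N_i, D_gi, sigma_gi): total number concentration,
    median diameter and geometric standard deviation of each mode."""
    D = np.asarray(D, dtype=float)
    out = np.zeros_like(D)
    for N_i, D_gi, sg_i in modes:
        out += (N_i / (np.sqrt(2.0 * np.pi) * np.log(sg_i))
                * np.exp(-0.5 * (np.log(D / D_gi) / np.log(sg_i))**2))
    return out

# e.g. Aitken / accumulation / coarse modes with invented parameters
modes = [(1.0e9, 0.03e-6, 1.7), (5.0e8, 0.15e-6, 2.0), (1.0e5, 2.0e-6, 2.2)]
D = np.logspace(-8, -4, 200)          # diameters in meters
spectrum = dNdlnD(D, modes)
```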
Caccamo, M; Ferguson, J D; Veerkamp, R F; Schadt, I; Petriglieri, R; Azzaro, G; Pozzebon, A; Licitra, G
2014-01-01
As part of a larger project aiming to develop management evaluation tools based on results from test-day (TD) models, the objective of this study was to examine the effect of the physical composition of total mixed rations (TMR), tested quarterly from March 2006 through December 2008, on milk, fat, and protein yield curves for 25 herds in Ragusa, Sicily. A random regression sire-maternal grandsire model was used to estimate variance components for milk, fat, and protein yields fitted on a full data set including 241,153 TD records from 9,809 animals in 42 herds recorded from 1995 through 2008. The model included parity, age at calving, year at calving, and stage of pregnancy as fixed effects. Random effects were herd × test date, sire and maternal grandsire additive genetic effects, and permanent environmental effects, modeled using third-order Legendre polynomials. Model fitting was carried out using ASREML. Afterward, for the 25 herds involved in the study, 9 particle size classes were defined based on the proportions of TMR particles on the top (19-mm) and middle (8-mm) screens of the Penn State Particle Separator. Subsequently, the model with estimated variance components was used to examine the influence of TMR particle size class on milk, fat, and protein yield curves. An interaction between particle size class and days in milk was included, and the effect of TMR particle size class was modeled using a ninth-order Legendre polynomial. Lactation curves were predicted from the model while controlling for TMR chemical composition (crude protein content of 15.5%, neutral detergent fiber of 40.7%, and starch of 19.7% for all classes), to obtain estimates of the effect of particle distribution not confounded by the nutrient content of the TMR. We found little effect of particle-proportion class on the milk and fat yield curves. Protein yield was greater for sieve classes with 10.4 to 17.4% of TMR particles retained on the top (19-mm) sieve. Optimal distributions different from those recommended may reflect regional differences based on climate and the types and quality of forages fed.
Far Field Modeling Methods For Characterizing Surface Detonations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, A.
2015-10-08
Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket-propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations, which indicate that in many terrains and atmospheric conditions the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.
Particle Size Effects on CL-20 Initiation and Detonation
NASA Astrophysics Data System (ADS)
Valancius, Cole; Bainbridge, Joe; Love, Cody; Richardson, Duane
2017-06-01
Particle size, or specific surface area, effects on explosives have been of interest to the explosives community for both application and modeling of initiation and detonation. Different particle sizes of CL-20 were used in detonator experiments to determine the effects of particle size on initiation, run-up to steady-state detonation, and steady-state detonation. Historical tests have demonstrated a direct relationship between particle size and initiation. However, historical tests inadvertently employed density gradients, making it difficult to discern the effects of particle size from the effects of density. Density gradients were removed from these tests by using a larger-diameter, shorter charge column, allowing for similar loading across different particle sizes. Without the density gradient, the effects of particle size on initiation and detonation are easier to determine. These results contrast with the historical ones, showing that particle size does not directly affect the initiation threshold.
Classroom Materials for Teaching "The Particle Nature of Matter." Practical Paper No. 173.
ERIC Educational Resources Information Center
Pella, Milton O.; And Others
This document presents the lesson plans and tests used in the research study reported in Technical Report 173 (ED 070 658), together with descriptions of models and films developed for the teaching program. Thirty-one lessons are included, covering the topics of matter and energy; making inferences; particles; a model for matter; particles and…
Atmospheric fate and transport of fine volcanic ash: Does particle shape matter?
NASA Astrophysics Data System (ADS)
White, C. M.; Allard, M. P.; Klewicki, J.; Proussevitch, A. A.; Mulukutla, G.; Genareau, K.; Sahagian, D. L.
2013-12-01
Volcanic ash presents hazards to infrastructure, agriculture, and human and animal health. In particular, given the economic importance of intercontinental aviation, understanding how long ash is suspended in the atmosphere and how far it is transported has taken on greater importance. Airborne ash abrades the exteriors of aircraft and enters modern jet engines, where it melts and coats interior engine parts, causing damage and potential failure. The time fine ash stays in the atmosphere depends on its terminal velocity. Existing models of ash terminal velocities are based on smooth, quasi-spherical particles characterized by the Stokes velocity. Ash particles, however, violate the various assumptions upon which Stokes flow and the associated models are based: they are non-spherical and can have complex surface and internal structure. This suggests that particle shape may be one reason that models fail to accurately predict removal rates of fine particles from volcanic ash clouds. The present research seeks to better parameterize predictive models for ash particle terminal velocities, diffusivity, and dispersion in the atmospheric boundary layer. The fundamental hypothesis being tested is that particle shape irreducibly impacts the fate and transport properties of fine volcanic ash. Pilot studies, incorporating modeling and experiments, are being conducted to test this hypothesis. Specifically, a statistical model has been developed that can account for actual volcanic ash size distributions, complex ash particle geometry, and geometry variability; experimental results are used to systematically validate and improve the model. The experiments are being conducted at the Flow Physics Facility (FPF) at UNH. Terminal velocities and dispersion properties of fine ash are characterized with still-air drop experiments in an unconstrained open space using a homogenized mix of source particles. Dispersion and sedimentation dynamics are quantified using particle image velocimetry (PIV), and scanning electron microscopy (SEM) of ash particles collected in localized deposition areas is used to correlate the PIV results to particle shape. In addition, controlled wind tunnel experiments are used to determine particle fate and transport in a turbulent boundary layer for a mixed particle population. Collectively, these studies will provide an improved understanding of the effects of particle shape on sedimentation and dispersion, and foundational data for the predictive modeling of the fate and transport of fine ash particles suspended in the atmosphere.
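Where the abstract contrasts Stokes-based terminal velocities with the behavior of irregular ash, the baseline and the role of a shape correction can be sketched as follows; the particle density and the generic drag multiplier f_shape are assumptions, and parameterizing such a multiplier is exactly what the study aims to constrain.

```python
RHO_ASH = 2500.0   # kg/m^3, assumed ash particle density
RHO_AIR = 1.0      # kg/m^3
MU_AIR = 1.8e-5    # Pa*s
G = 9.81           # m/s^2

def stokes_terminal_velocity(d):
    """Stokes settling velocity of a smooth sphere of diameter d (m)."""
    return (RHO_ASH - RHO_AIR) * G * d**2 / (18.0 * MU_AIR)

def corrected_terminal_velocity(d, f_shape=1.0):
    """Terminal velocity with a generic Stokes-drag shape multiplier
    f_shape >= 1 (non-spherical particles fall slower than equivalent
    spheres); the functional form of f_shape versus shape is left open."""
    return stokes_terminal_velocity(d) / f_shape

# a 30-micron ash particle: sphere versus an assumed 40% drag increase
v_sphere = corrected_terminal_velocity(30e-6)
v_irregular = corrected_terminal_velocity(30e-6, f_shape=1.4)
```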
Hamlett, Christopher A E; Shirtcliffe, Neil J; McHale, Glen; Ahn, Sujung; Bryant, Robert; Doerr, Stefan H; Newton, Michael I
2011-11-15
The wettability of soil is of great importance for plants and soil biota, and in determining the risk of preferential flow, surface runoff, flooding, and soil erosion. The molarity of ethanol droplet (MED) test is widely used for quantifying the severity of water repellency in soils that show reduced wettability and is assumed to be independent of soil particle size. The minimum ethanol concentration at which droplet penetration occurs within a short time (≤ 10 s) provides an estimate of the initial advancing contact angle at which spontaneous wetting is expected. In this study, we test the assumption of particle size independence using a simple model of soil, represented by layers of small-diameter (~0.2-2 mm) beads, that predicts the effect of changing the bead radius in the top layer on capillary-driven imbibition. Experimental results using a three-layer bead system show broad agreement with the model and demonstrate a dependence of the MED test on particle size. The results show that the critical initial advancing contact angle for penetration can be considerably less than 90° and varies with particle size, demonstrating that a key assumption currently used in MED testing of soil is not necessarily valid.
Attrition of fluid cracking catalyst in fluidized beds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boerefijn, R.; Ghadiri, M.
1996-12-31
Particle attrition in fluid catalytic cracking units causes loss of catalyst, which can amount to a few tonnes per day. The dependence of attrition on process conditions and catalyst properties is therefore of great industrial interest, but it is not well established at present. The process of attrition in the jetting area of fluidised beds is addressed and the attrition test method of Forsythe & Hertwig is analysed in this paper. This method is commonly used to assess the attrition propensity of FCC powder, whereby the attrition rate in a single jet at very high orifice velocity (300 m s⁻¹) is measured. There has been some concern about the relevance of this method to attrition in FCC units. Therefore, a previously developed model of attrition in the jetting region is employed in an attempt to establish a solid basis for interpreting the Forsythe & Hertwig test and its application as an industrial standard test. The model consists of two parts: the first predicts the solids flow patterns in the jet region, simulating the Forsythe & Hertwig test numerically, while the second models the breakage of single particles upon impact. Combining these two models, thus linking single-particle mechanical properties to macroscopic flow phenomena, yields the attrition rate of particles entrained into a single high-speed jet. High-speed video recordings of a single jet in a two-dimensional fluidised bed were made at up to 40,500 frames per second in order to quantify some of the model parameters. Digital analysis of the video images yields values for particle velocities and entrainment rates in the jet, which can be compared to model predictions. 15 refs., 8 figs.
Unique DNA-barcoded aerosol test particles for studying aerosol transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, Ruth N.; Hara, Christine A.; Hall, Sara B.
Data are presented for the first use of novel DNA-barcoded aerosol test particles that have been developed to track the fate of airborne contaminants in populated environments. Until DNATrax (DNA Tagged Reagents for Aerosol eXperiments) particles were developed, there was no way to rapidly validate air transport models with realistic particles in the respirable range of 1–10 μm in diameter. The DNATrax particles, developed at Lawrence Livermore National Laboratory (LLNL) and tested with the assistance of the Pentagon Force Protection Agency, are the first safe and effective materials for aerosol transport studies that are identified by DNA molecules. The use of unique synthetic DNA barcodes overcomes the challenges of discerning the test material from pre-existing environmental or background contaminants (either naturally occurring or previously released). The DNATrax particle properties are demonstrated to have appropriate size range (approximately 1–4.5 μm in diameter) to accurately simulate bacterial spore transport. As a result, we describe details of the first field test of the DNATrax aerosol test particles in a large indoor facility.
Dynamics of particles accelerated by head-on collisions of two magnetized plasma shocks
NASA Astrophysics Data System (ADS)
Takeuchi, Satoshi
2018-02-01
A kinetic model of the head-on collision of two magnetized plasma shocks is analyzed theoretically and numerically. When two plasmas with anti-parallel magnetic fields collide, they undergo magnetic reconnection and form a motional electric field at the front of the collision region. This field accelerates the particles sandwiched between the two shock fronts to extremely high energy. As they accelerate, the particles are bent by the transverse magnetic field crossing the magnetic neutral sheet, and their energy gains are reduced. In the numerical calculations, the dynamics of many test particles were modeled through the relativistic equations of motion. The attainable energy gain scales as the product of three parameters: the propagation speed of the shock, the magnitude of the magnetic field, and the acceleration time of the test particle. This mechanism for generating high-energy particles is applicable over a wide range of spatial scales, from laboratory to interstellar plasmas.
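Test-particle integrations of the relativistic equations of motion, as used above, are commonly done with a Boris-type pusher; a minimal sketch follows, using uniform crossed E and B fields as a stand-in for the motional field between the shock fronts. The field values, proton species and time step are illustrative assumptions, not the paper's reconnection geometry.

```python
import numpy as np

Q, M, C = 1.602e-19, 1.673e-27, 2.998e8   # proton charge, mass, speed of light

def boris_push(x, u, E, B, dt):
    """One relativistic Boris step. x: position (m), u = gamma*v (m/s),
    E, B: field vectors evaluated at x. Returns updated (x, u)."""
    u_minus = u + (Q * dt / (2.0 * M)) * E
    gamma_minus = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = (Q * dt / (2.0 * M * gamma_minus)) * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    u_new = u_plus + (Q * dt / (2.0 * M)) * E
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / C**2)
    return x + u_new / gamma_new * dt, u_new

# toy crossed fields standing in for the motional field between the shocks
E0 = np.array([0.0, 1.0e4, 0.0])   # V/m (assumed)
B0 = np.array([0.0, 0.0, 1.0e-7])  # T   (assumed)
x, u = np.zeros(3), np.zeros(3)
for _ in range(100_000):
    x, u = boris_push(x, u, E0, B0, 1.0e-4)
kinetic_energy_eV = (np.sqrt(1.0 + np.dot(u, u) / C**2) - 1.0) * M * C**2 / Q
```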
NASA Technical Reports Server (NTRS)
Meyer, Marit Elisabeth
2015-01-01
A thermal precipitator (TP) was designed to collect smoke aerosol particles for microscopic analysis in fire characterization research. Information on particle morphology, size and agglomerate structure obtained from these tests supplements additional aerosol data collected. Modeling of the thermal precipitator throughout the design process was performed with the COMSOL Multiphysics finite element software package, including the Eulerian flow field and thermal gradients in the fluid. The COMSOL Particle Tracing Module was subsequently used to determine particle deposition. Modeling provided optimized design parameters such as geometry, flow rate and temperatures. The thermal precipitator was built and testing verified the performance of the first iteration of the device. The thermal precipitator was successfully operated and provided quality particle samples for microscopic analysis, which furthered the body of knowledge on smoke particulates. This information is a key element of smoke characterization and will be useful for future spacecraft fire detection research.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. The method mainly uses particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by particle swarm optimization. The particle filter can effectively perform lane detection and tracking in complicated or variable lane environments; however, the result obtained is usually a local optimum of the system state rather than the global optimum. Thus, the particle swarm optimization method is used to further refine the global optimal system state among all system states. Since particle swarm optimization is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-seeking behavior of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options.
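The particle swarm refinement step described above follows the standard PSO velocity and position update; a minimal, generic sketch is given below. The inertia and acceleration constants are conventional defaults, and the score function standing in for lane-model fitness is hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_refine(score, swarm, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Generic particle swarm refinement of lane-model parameter vectors.
    `score(x)` returns the fitness of a parameter vector x (e.g. agreement of
    a lane model with edge evidence); `swarm` is an (n_particles, n_params)
    array, e.g. seeded from particle-filter output."""
    vel = np.zeros_like(swarm)
    pbest = swarm.copy()
    pbest_val = np.array([score(x) for x in swarm])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random(swarm.shape)
        r2 = rng.random(swarm.shape)
        vel = w * vel + c1 * r1 * (pbest - swarm) + c2 * r2 * (gbest - swarm)
        swarm = swarm + vel
        vals = np.array([score(x) for x in swarm])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = swarm[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest
```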
SPARSE-A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure.
Davis, Sean L; Jacobs, Gustaaf B; Sen, Oishik; Udaykumar, H S
2017-03-01
A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles.
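The reduction from many point particles to a single averaged parcel can be illustrated with a one-dimensional toy problem; the sketch below traces a cloud of Stokes-drag point particles and a single first-order parcel (drag evaluated at the cloud mean) in an assumed carrier velocity field. The discrepancy between the two is what the SPARSE subgrid stress and velocity-averaging terms are designed to correct; the response time, field and seeding here are made up.

```python
import numpy as np

TAU_P = 0.05                                    # particle response time, s (assumed)
u_gas = lambda x, t: np.sin(2.0 * np.pi * x)    # toy 1D carrier velocity field

def step_points(x, v, t, dt):
    """Advance every point particle in the cloud with linear (Stokes) drag."""
    v = v + dt * (u_gas(x, t) - v) / TAU_P
    return x + dt * v, v

def step_parcel(xc, vc, t, dt):
    """Advance one parcel carrying the cloud mean position/velocity; drag is
    evaluated at the mean only (first-order model, no subgrid terms)."""
    vc = vc + dt * (u_gas(xc, t) - vc) / TAU_P
    return xc + dt * vc, vc

rng = np.random.default_rng(1)
x = rng.normal(0.5, 0.05, 10_000)   # a cloud of 10^4 point particles
v = np.zeros_like(x)
xc, vc = x.mean(), 0.0
dt = 1.0e-3
for n in range(2000):
    x, v = step_points(x, v, n * dt, dt)
    xc, vc = step_parcel(xc, vc, n * dt, dt)
drift = x.mean() - xc   # gap the SPARSE subgrid corrections aim to close
```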
Mabray, Marc C; Lillaney, Prasheel; Sze, Chia-Hung; Losey, Aaron D; Yang, Jeffrey; Kondapavulur, Sravani; Liu, Derek; Saeed, Maythem; Patel, Anand; Cooke, Daniel; Jun, Young-Wook; El-Sayed, Ivan; Wilson, Mark; Hetts, Steven W
2016-03-01
The purpose of this study was to establish that a magnetic device designed for intravascular use can bind small iron particles in physiologic flow models. Uncoated iron oxide particles 50-100 nm and 1-5 µm in size were tested in a water flow chamber over a period of 10 minutes without a magnet (ie, control) and with large and small prototype magnets. These same particles and 1-µm carboxylic acid-coated iron oxide beads were likewise tested in a serum flow chamber model without a magnet (ie, control) and with the small prototype magnet. Particles were successfully captured from solution, and particle concentrations in solution decreased in all experiments (P < .05 vs matched control runs). At 10 minutes, concentrations were 98% (50-100-nm particles in water with a large magnet), 97% (50-100-nm particles in water with a small magnet), 99% (1-5-µm particles in water with a large magnet), 99% (1-5-µm particles in water with a small magnet), 95% (50-100-nm particles in serum with a small magnet), 92% (1-5-µm particles in serum with a small magnet), and 75% (1-µm coated beads in serum with a small magnet) lower compared with matched control runs. This study demonstrates the concept of magnetic capture of small iron oxide particles in physiologic flow models by using a small wire-mounted magnetic filter designed for intravascular use.
NASA Astrophysics Data System (ADS)
Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.
2012-12-01
Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, modeling abundances accurately to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to within +/-10% of actual values, but other mixtures showed larger inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to the modeled particle diameter but comparatively insensitive to the modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences from the Shkuratov-derived optical constants originally used, and model runs with different sets of optical constants suggest that slight differences in the optical constants significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with the sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in the modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing, and further studies will investigate the effects of sample hydration, permitted variability in particle size, assumed photometric functions, and the use of different wavelength ranges on model results. Such studies will advance understanding of how best to apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu
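In the Hapke framework used above, the single-scattering albedo of an intimate mixture is a cross-section-weighted average of the endmember albedos, with weights proportional to M_i / (rho_i * D_i); the sketch below shows only that mixing relation with made-up endmember values, not the full error-function minimization over weight fraction and diameter.

```python
import numpy as np

def mixture_ssa(w, mass_frac, density, diameter):
    """Single-scattering albedo of an intimate mixture: a cross-section-
    weighted average of endmember albedos with weights M_i/(rho_i*D_i).
    `w` may be spectral, shaped (n_endmembers, n_wavelengths)."""
    w = np.atleast_2d(np.asarray(w, dtype=float))
    sigma = np.asarray(mass_frac) / (np.asarray(density) * np.asarray(diameter))
    return (sigma[:, None] * w).sum(axis=0) / sigma.sum()

# illustrative two-endmember example (all numbers are invented)
w_mix = mixture_ssa(w=[[0.90, 0.85], [0.40, 0.35]],
                    mass_frac=[0.7, 0.3],
                    density=[3300.0, 2600.0],   # kg/m^3
                    diameter=[45e-6, 75e-6])    # m
```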
The effect of simulated air conditions on N95 filtering facepiece respirators performance.
Ramirez, Joel A; O'Shaughnessy, Patrick T
2016-07-01
The objective of this study was to determine the effect of several simulated environmental air conditions on the particle penetration and breathing resistance of two N95 filtering facepiece respirator (FFR) models. The particle penetration and breathing resistance of the respirators were evaluated in a test system developed to mimic inhalation and exhalation breathing while relative humidity and temperature were modified. Breathing resistance was measured over 120 min using a calibrated pressure transducer under four different temperature and relative humidity conditions without aerosol loading. Particle penetration was evaluated before and after the breathing resistance test at room conditions using a sodium chloride aerosol measured with a scanning mobility particle sizer. Results demonstrated that increasing relative humidity and lowering external temperature caused significant increases in breathing resistance (p < 0.001). However, these same conditions did not influence the penetration or the most penetrating particle size of the tested FFRs. The increase in breathing resistance varied by FFR model, suggesting that some FFR media are less influenced by high relative humidity.
Highlights of the high-temperature falling particle receiver project: 2012 - 2016
NASA Astrophysics Data System (ADS)
Ho, C. K.; Christian, J.; Yellowhair, J.; Jeter, S.; Golob, M.; Nguyen, C.; Repole, K.; Abdel-Khalik, S.; Siegel, N.; Al-Ansary, H.; El-Leathy, A.; Gobereit, B.
2017-06-01
A 1 MWt continuously recirculating falling particle receiver has been demonstrated at Sandia National Laboratories. Free-fall and obstructed-flow receiver designs were tested with particle mass flow rates of ˜1 - 7 kg/s and average irradiances up to 1,000 suns. Average particle outlet temperatures exceeded 700 °C for the free-fall tests and reached nearly 800 °C for the obstructed-flow tests, with peak particle temperatures exceeding 900 °C. High particle heating rates of ˜50 to 200 °C per meter of illuminated drop length were achieved for the free-fall tests with mass flow rates ranging from 1 - 7 kg/s and for average irradiances up to ˜ 700 kW/m2. Higher temperatures were achieved at the lower particle mass flow rates due to less shading. The obstructed-flow design yielded particle heating rates over 300 °C per meter of illuminated drop length for mass flow rates of 1 - 3 kg/s for irradiances up to ˜1,000 kW/m2. The thermal efficiency was determined to be ˜60 - 70% for the free-falling particle tests and up to ˜80% for the obstructed-flow tests. Challenges encountered during the tests include particle attrition and particle loss through the aperture, reduced particle mass flow rates at high temperatures due to slot aperture narrowing and increased friction, and deterioration of the obstructed-flow structures due to wear and oxidation. Computational models were validated using the test data and will be used in future studies to design receiver configurations that can increase the thermal efficiency.
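The ~60-80% thermal efficiencies quoted above follow from a simple energy balance on the particle curtain; a minimal sketch is given below, where the constant specific heat and the example operating point are assumptions rather than the tested receiver's values.

```python
def receiver_thermal_efficiency(m_dot, t_in, t_out, q_inc, cp=1155.0):
    """Thermal efficiency of a falling particle receiver: enthalpy gained by
    the particle curtain divided by the incident concentrated solar power.
    cp is an assumed constant specific heat (J/(kg*K)) for a ceramic
    particle; detailed analyses use temperature-dependent properties."""
    return m_dot * cp * (t_out - t_in) / q_inc

# e.g. 3 kg/s heated from 580 to 780 C under 1 MW incident power (illustrative)
eta = receiver_thermal_efficiency(m_dot=3.0, t_in=580.0, t_out=780.0, q_inc=1.0e6)
```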
NASA Technical Reports Server (NTRS)
Leach, R. N.; Greeley, Ronald; White, Bruce R.; Iversen, James D.
1987-01-01
In the study of planetary aeolian processes the effect of gravity is not readily modeled. Gravity appears in the equations of particle motion along with the interparticle forces, but the two are not separable. A wind tunnel that permits multiphase flow experiments with wind-blown particles at variable gravity was built, and experiments were conducted at reduced gravity. The equations of particle motion initiation (saltation threshold) with variable gravity were experimentally verified and the interparticle force was separated. The uniquely designed Carousel Wind Tunnel (CWT) provides a long flow distance in a small tunnel, since the test section is a continuous loop, and develops the required turbulent boundary layer. A prototype model of the tunnel in which only the inner drum rotates was built and tested in the KC-135 Weightless Wonder 4 zero-g aircraft. Future work includes further experiments in the KC-135 with sharply graded walnut shell particles of widely varying median sizes, including very small particles, to see how the interparticle force varies with particle size, and also experiments with other aeolian materials.
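The way gravity and the interparticle force enter the saltation threshold together can be illustrated with a widely used later parameterization (Shao & Lu, 2000); the sketch below is not the analysis used in this study, and the cohesion parameter gamma is exactly the kind of quantity such variable-gravity experiments try to isolate.

```python
import numpy as np

def threshold_friction_velocity(d, g, rho_p=2650.0, rho_f=1.2,
                                a_n=0.0123, gamma=3.0e-4):
    """Threshold friction velocity for saltation in the Shao & Lu (2000) form:
        u*_t = sqrt( A_N * ( (rho_p/rho_f)*g*d + gamma/(rho_f*d) ) ).
    The first term is the gravity (weight) contribution, the second the
    interparticle (cohesion) contribution; gamma is in N/m."""
    return np.sqrt(a_n * ((rho_p / rho_f) * g * d + gamma / (rho_f * d)))

# compare Earth gravity with a reduced-gravity case for a 100-micron particle
u_earth = threshold_friction_velocity(100e-6, 9.81)
u_low_g = threshold_friction_velocity(100e-6, 3.71)   # Mars-like gravity, illustrative
```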
The complete set of Cassini's UVIS occultation observations of Enceladus plume: model fits
NASA Astrophysics Data System (ADS)
Portyankina, G.; Esposito, L. W.; Hansen, C. J.
2017-12-01
Since its discovery in 2005, the plume of Enceladus has been observed by most of the instruments onboard the Cassini spacecraft. The Ultraviolet Imaging Spectrograph (UVIS) observed the plume, and the collimated jets embedded in it, in occultation geometry on six occasions. We have constructed a 3D direct simulation Monte Carlo (DSMC) model for the Enceladus jets and apply it to the analysis of the full set of UVIS occultation observations conducted during Cassini's mission from 2005 to 2017. The Monte Carlo model tracks test particles from their source at the surface into space. The initial positions of all test particles for a single jet are fixed to one of the 100 jet sources identified by Porco et al. (2014). The initial three-dimensional velocity of each particle contains two components: a velocity Vz perpendicular to the surface, and a thermal velocity that is isotropic in the upward hemisphere. The direction and speed of the thermal velocity of each particle are chosen randomly, such that the ensemble is isotropic with speeds satisfying a Boltzmann distribution for a given temperature Tth. A range of reasonable Vz is then determined by requiring that the modeled jet widths match the observed ones. Each model run results in a set of coordinates and velocities for a given set of test particles. These are converted to test particle number densities and then integrated along the LoS for each time step of the occultation observation; the geometry of the observation is calculated using SPICE. The overarching result of a simulation run is a test particle number density along the LoS for each time point during the occultation observation, for each of the jets separately. To fit the model to the data, we integrate all jets that are crossed by the LoS at each point during an observation. The relative strength of the jets must be determined to fit the observed UVIS curves, and the results of the fits are sets of active jets for each occultation. Each UVIS occultation observation was done under a unique observational geometry; consequently, the model fits produce different sets of active jets and different minimum Vz. We discuss and compare the results of fitting all UVIS occultation observations.
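The initial-velocity sampling described above (a fixed perpendicular component Vz plus an isotropic Maxwell-Boltzmann thermal component restricted to the upward hemisphere) can be sketched as follows; this is an illustrative implementation rather than the authors' code, and a water-molecule mass is assumed for the test particles.

```python
import numpy as np

K_B = 1.380649e-23     # J/K
M_H2O = 2.99e-26       # mass of a water molecule, kg (assumed particle mass)
rng = np.random.default_rng(7)

def initial_velocities(n, v_z, t_th):
    """Sample initial velocities for n test particles of a single jet:
    a fixed vertical component v_z plus an isotropic thermal component in
    the upward hemisphere with Maxwell-Boltzmann speeds at temperature t_th."""
    # Maxwell-Boltzmann speed: magnitude of a 3D Gaussian velocity vector
    sigma = np.sqrt(K_B * t_th / M_H2O)
    speed = np.linalg.norm(rng.normal(0.0, sigma, size=(n, 3)), axis=1)
    # isotropic directions restricted to the upward hemisphere (cos_theta >= 0)
    cos_t = rng.random(n)
    phi = 2.0 * np.pi * rng.random(n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    v = speed[:, None] * np.column_stack((sin_t * np.cos(phi),
                                          sin_t * np.sin(phi),
                                          cos_t))
    v[:, 2] += v_z         # add the surface-perpendicular component
    return v
```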
Performance Test of Laser Velocimeter System for the Langley 16-foot Transonic Tunnel
NASA Technical Reports Server (NTRS)
Meyers, J. F.; Hunter, W. W., Jr.; Reubush, D. E.; Nichols, C. E., Jr.; Hepner, T. E.; Lee, J. W.
1985-01-01
An investigation in the Langley 16-Foot Transonic Tunnel has been conducted in which a laser velocimeter was used to measure free-stream velocities from Mach 0.1 to 1.0 and the flow velocities along the stagnating streamline of a hemisphere-cylinder model at Mach 0.8 and 1.0. The flow velocity was also measured at Mach 1.0 along the line 0.533 model diameters below the model. These tests determined the performance characteristics of the dedicated two-component laser velocimeter at flow velocities up to Mach 1.0 and the effects of the wind tunnel environment on the particle-generating system and on the resulting size of the generated particles. To determine these characteristics, the measured particle velocities along the stagnating streamline at the two Mach numbers were compared with the theoretically predicted gas and particle velocities calculated using a transonic potential flow method. Through this comparison the mean detectable particle size (2.1 micron) along with the standard deviation of the detectable particles (0.76 micron) was determined; thus the performance characteristics of the laser velocimeter were established.
Effects of particle size and heating time on thiobarbituric acid (TBA) test of soybean powder.
Lee, Youn-Ju; Yoon, Won-Byong
2013-06-01
Effects of particle size and heating time during the TBA test on the thiobarbituric acid reactive substances (TBARS) of soybean (Glycine max) powder were studied. Effects of processing variables involved in the pulverization of soybean, such as the temperature of the soybean powder, the oxygen level in the vessel, and the pulverization time, were investigated. The temperature of the soybean powder and the oxygen level had no significant influence on the TBARS (p<0.05). The pulverization time and the heating time during the TBA test significantly affected the TBARS. The change of TBARS during heating was well described by a fractional-conversion first-order kinetics model. A diffusion model was introduced to quantify the effect of particle size on TBARS. The major finding of this study is that TBA tests used to estimate the level of lipid oxidation directly from powders should account for the heating time and the mean particle size of the sample.
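The fractional-conversion first-order kinetics model mentioned above relaxes the measured response from its initial value toward an equilibrium value exponentially; a minimal fitting sketch follows, where the data points are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def fractional_conversion_first_order(t, c0, c_inf, k):
    """Fractional-conversion first-order model: the response relaxes from its
    initial value c0 toward an equilibrium value c_inf with rate constant k."""
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

# illustrative fit to invented TBARS-versus-heating-time data
t_min = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0])
tbars = np.array([0.12, 0.35, 0.52, 0.71, 0.78, 0.82])
(c0, c_inf, k), _ = curve_fit(fractional_conversion_first_order, t_min, tbars,
                              p0=(tbars[0], tbars[-1], 0.1))
```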
An empirical model of human aspiration in low-velocity air using CFD investigations.
Anthony, T Renée; Anderson, Kimberly R
2015-01-01
Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head at low velocities, to examine whether the current inhalable particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model relating particle size to the aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model for human aspiration efficiency, with particle size as well as breathing and freestream velocities as predictor variables. A new set of simulated mouth- and nose-breathing aspiration efficiencies was generated and used to test the fit of the empirical models. Further, the empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low-velocity experiments. While a linear relationship between particle size and aspiration is reported in calm-air studies, the CFD simulations identified a more reasonable fit using the square of the particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration, to inform low-velocity modifications to the inhalable particle sampling criterion.
Wheeler, M J; Mason, R H; Steunenberg, K; Wagstaff, M; Chou, C; Bertram, A K
2015-05-14
Ice nucleation on mineral dust particles is known to be an important process in the atmosphere. To accurately implement ice nucleation on mineral dust particles in atmospheric simulations, a suitable theory or scheme is needed to translate laboratory freezing data into atmospheric models. Here, we investigated ice nucleation by supermicron mineral dust particles [kaolinite and Arizona Test Dust (ATD)] in the immersion mode. The median freezing temperature for ATD was measured to be approximately -30 °C, compared with approximately -36 °C for kaolinite. The freezing results were then used to test four different schemes previously used to describe ice nucleation in atmospheric models. In terms of ability to fit the data (quantified by the reduced chi-squared values), the following order was found for ATD (from best to worst): active site, pdf-α, deterministic, single-α. For kaolinite, the following order was found (from best to worst): active site, deterministic, pdf-α, single-α. The variation in the predicted median freezing temperature per decade change in the cooling rate for each of the schemes was also compared with experimental results from other studies. The deterministic model predicts the median freezing temperature to be independent of cooling rate, while experimental results show a weak dependence on cooling rate; the single-α, pdf-α, and active site schemes all agree with the experimental results within roughly a factor of 2. On the basis of our results and previous results where different schemes were tested, the active site scheme is recommended for describing the freezing of ATD and kaolinite particles. We also used our ice nucleation results to determine the ice nucleation active site (INAS) densities of the supermicron dust particles tested and show that they are smaller than the INAS densities of submicron kaolinite and ATD particles previously reported in the literature.
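Two of the four schemes compared above have particularly compact forms; the sketch below shows the deterministic (INAS-density) and single-α (stochastic) frozen-fraction expressions. The temperature-dependent parameterizations of n_s(T) and j_het(T), which the fits actually determine, are left as inputs.

```python
import numpy as np

def frozen_fraction_deterministic(n_s, area):
    """Deterministic (singular) scheme: the frozen fraction of a droplet
    population depends only on temperature through the ice nucleation active
    site density n_s(T) (#/m^2) and the dust surface area per droplet (m^2),
    with no explicit time dependence: f = 1 - exp(-n_s * A)."""
    return 1.0 - np.exp(-n_s * area)

def frozen_fraction_single_alpha(j_het, area, dt):
    """Single-alpha (stochastic) scheme: freezing is a Poisson process with a
    heterogeneous nucleation rate coefficient j_het(T) (#/(m^2 s)), so the
    frozen fraction after time dt is f = 1 - exp(-j_het * A * dt)."""
    return 1.0 - np.exp(-j_het * area * dt)
```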
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaglioni, S.; Beck, B. R.
The Monte Carlo All Particle Method generator and collision physics library features two models that allow a particle to either up- or down-scatter in collisions with material at finite temperature. The two models are presented and compared. Neutron interaction with matter through elastic collisions is used as a test case.
Modeling of Abrasion and Crushing of Unbound Granular Materials During Compaction
NASA Astrophysics Data System (ADS)
Ocampo, Manuel S.; Caicedo, Bernardo
2009-06-01
Unbound compacted granular materials are commonly used in engineering structures as layers in road pavements, railroad beds, highway embankments, and foundations. These structures are generally subjected to dynamic loading by construction operations, traffic, and wheel loads. These repeated or cyclic loads cause abrasion and crushing of the granular materials: abrasion changes a particle's shape, and crushing divides the particle into a mixture of many smaller particles of varying sizes. Particle breakage is important because the mechanical and hydraulic properties of these materials depend upon their grain size distribution, so it is important to evaluate the evolution of that distribution. In this paper an analytical model for unbound granular materials is proposed in order to evaluate particle crushing of gravels and soils subjected to cyclic loads. The model is based on a Markov chain which describes the development of grading changes in the material as a function of stress level. In the proposed model, each particle size is a state of the system, and the evolution of the material is the movement of particles from one state to another in n steps; each step is a load cycle, and movement between states occurs with a transition probability. The crushing of particles depends on the mechanical properties of each grain and the packing density of the granular material, so the transition probability was calculated using both the survival probability defined by Weibull and the compressible packing model developed by De Larrard. Material mechanical properties are considered using Weibull probability theory, while the size and shape of the grains, as well as the method of processing and the packing density, are considered using De Larrard's model. Results of the proposed analytical model show good agreement with experimental results from gyratory compaction tests.
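The Markov-chain description above amounts to repeatedly multiplying the grading state vector by a transition matrix, one load cycle per step; a minimal sketch follows with an illustrative, uncalibrated upper-triangular matrix (breakage only moves mass toward finer classes).

```python
import numpy as np

def evolve_grading(p0, transition, n_cycles):
    """Evolve a grain size distribution under repeated load cycles: p0 holds
    the mass fraction in each size class (coarse to fine) and `transition`
    is a row-stochastic matrix whose entry [i, j] is the probability that
    material in class i ends up in class j after one cycle."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_cycles):
        p = p @ transition
    return p

# three size classes; probabilities below are invented, not calibrated
P = np.array([[0.95, 0.03, 0.02],
              [0.00, 0.97, 0.03],
              [0.00, 0.00, 1.00]])
grading_after_1000_cycles = evolve_grading([0.5, 0.3, 0.2], P, 1000)
```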
Interaction of axions with relativistic spinning particles
NASA Astrophysics Data System (ADS)
Popov, V. A.; Balakin, A. B.
2016-05-01
We consider a covariant phenomenological model which describes an interaction between a pseudoscalar (axion) field and massive spinning particles. The model extends the Bargmann-Michel-Telegdi approach to axion electrodynamics. We present some exact solutions and discuss them in the context of experimental tests of the model and axion detection.
Characterization of fine abrasive particles for optical fabrication
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Zhou, Y. Y.; Takahashi, Toshio; Quesnel, David J.; Lambropoulos, John C.
1995-08-01
Material removal during fine grinding operations is accomplished primarily by the action of individual abrasive particles on the glass surface, so the mechanical properties of the abrasive are important. Unfortunately, it is difficult to directly measure the mechanical response of abrasives once they reach the scale of approximately 10 microns. As a result, the mechanical properties of fine abrasives are sometimes characterized in terms of an empirical 'friability', based on the response of the abrasive to crushing by a metal ball in a vial. In this paper we report on modeling and experiments designed to quantify the mechanical properties of fine abrasives more precisely and, ultimately, to relate them to the conditions experienced by bound particles during grinding. Experiments have been performed on various types and sizes of diamond abrasives. The response of the particles is a strong function of the loading conditions and can be tracked by changing the testing parameters. Diamond size is also found to play a critical role, with finer diamonds less susceptible to fracture. A micromechanical model from the literature is employed to estimate the forces likely to be seen during testing. We are also developing dynamic models to better predict the forces experienced during friability testing as a function of the testing parameters.
The Model Identification Test: A Limited Verbal Science Test
ERIC Educational Resources Information Center
McIntyre, P. J.
1972-01-01
Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)
Mabray, Marc C.; Lillaney, Prasheel; Sze, Chia-Hung; Losey, Aaron D.; Yang, Jeffrey; Kondapavulur, Sravani; Liu, Derek; Saeed, Maythem; Patel, Anand; Cooke, Daniel; Jun, Young-Wook; El-Sayed, Ivan; Wilson, Mark; Hetts, Steven W.
2015-01-01
Purpose: To establish that a magnetic device designed for intravascular use can bind small iron particles in physiologic flow models. Materials and Methods: Uncoated iron oxide particles 50–100 nm and 1–5 μm in size were tested in a water flow chamber over a period of 10 minutes without a magnet (ie, control) and with large and small prototype magnets. These same particles and 1-μm carboxylic acid–coated iron oxide beads were likewise tested in a serum flow chamber model without a magnet (ie, control) and with the small prototype magnet. Results: Particles were successfully captured from solution. Particle concentrations in solution decreased in all experiments (P < .05 vs matched control runs). At 10 minutes, concentrations were 98% (50–100-nm particles in water with a large magnet), 97% (50–100-nm particles in water with a small magnet), 99% (1–5-μm particles in water with a large magnet), 99% (1–5-μm particles in water with a small magnet), 95% (50–100-nm particles in serum with a small magnet), 92% (1–5-μm particles in serum with a small magnet), and 75% (1-μm coated beads in serum with a small magnet) lower compared with matched control runs. Conclusions: This study demonstrates the concept of magnetic capture of small iron oxide particles in physiologic flow models by using a small wire-mounted magnetic filter designed for intravascular use.
NASA Astrophysics Data System (ADS)
Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan
2015-02-01
Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The model that was developed was used to predict particle generation under various building and airliner cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Those reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
Particle acceleration in solar active regions being in the state of self-organized criticality.
NASA Astrophysics Data System (ADS)
Vlahos, Loukas
We review recent observational results on flare initiation and particle acceleration in solar active regions. Elaborating a statistical approach to describe the spatiotemporally intermittent electric field structures formed inside a flaring solar active region, we investigate the efficiency of such structures in accelerating charged particles (electrons and protons). The large-scale magnetic configuration in the solar atmosphere responds to the strong turbulent flows that convey perturbations across the active region by initiating avalanche-type processes. The resulting unstable structures correspond to small-scale dissipation regions hosting strong electric fields. Previous research on particle acceleration in strongly turbulent plasmas provides a general framework for addressing such a problem; it combines various electromagnetic field configurations obtained by magnetohydrodynamic (MHD) or cellular automata (CA) simulations, or by employing a statistical description of the field's strength and configuration, with test particle simulations. We work on data-driven 3D magnetic field extrapolations based on self-organized criticality (SOC) models. A relativistic test-particle simulation traces each particle's guiding center within these configurations. Using the simulated particle-energy distributions, we test our results against current observations in the framework of the collisional thick-target model (CTTM) of solar hard X-ray (HXR) emission.
Mathematical Models of Continuous Flow Electrophoresis
NASA Technical Reports Server (NTRS)
Saville, D. A.; Snyder, R. S.
1985-01-01
Development of high-resolution continuous flow electrophoresis devices ultimately requires a comprehensive understanding of the ways various phenomena and processes facilitate or hinder separation. A comprehensive model of the actual three-dimensional flow, temperature, and electric fields was developed to provide guidance in the design of electrophoresis chambers for specific tasks and a means of interpreting test data on a given chamber. Part of the model development includes experimental and theoretical studies of hydrodynamic stability, which are necessary to understand the origin of the mixing flows observed with wide-gap gravitational effects. Ensuring that the model accurately reflects the flow field and particle motion requires extensive experimental work. Another part of the investigation is concerned with the behavior of concentrated sample suspensions with regard to sample stream stability and particle-particle interactions which might affect separation in an electric field, especially at high field strengths. Mathematical models will be developed and tested to establish the roles of the various interactions.
A distribution model for the aerial application of granular agricultural particles
NASA Technical Reports Server (NTRS)
Fernandes, S. T.; Ormsbee, A. I.
1978-01-01
A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry, and the aircraft wake. General requirements for maintaining similarity of the distribution in scale-model tests are derived, with particular attention to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effects of a random initial lateral position, particle size, and drag coefficient. A listing of the computer code is included.
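The single-particle trajectory calculation underlying such a distribution model can be sketched as below for one release, integrating gravity plus aerodynamic drag. A constant drag coefficient is assumed for brevity, and the distributor geometry and aircraft-wake effects central to the model are omitted.

```python
import numpy as np

RHO_AIR = 1.2                       # kg/m^3
G = np.array([0.0, 0.0, -9.81])     # m/s^2

def landing_point(x0, v0, d, rho_p, cd=0.44, dt=1e-3):
    """Integrate the fall of one granular particle released from the aircraft
    with initial velocity v0, using quadratic drag with constant cd (an
    assumption; the paper stresses that granular particles do not obey a
    simple general drag law)."""
    m = rho_p * np.pi / 6.0 * d**3          # particle mass
    area = np.pi / 4.0 * d**2               # frontal area
    x, v = np.array(x0, float), np.array(v0, float)
    while x[2] > 0.0:                       # until the particle reaches the ground
        drag = -0.5 * RHO_AIR * cd * area * np.linalg.norm(v) * v
        v = v + dt * (G + drag / m)
        x = x + dt * v
    return x                                 # contributes one point to the swath

# release at 3 m height, 45 m/s forward speed, 3 mm granule (illustrative)
landing = landing_point([0.0, 0.0, 3.0], [45.0, 0.0, 0.0], 3e-3, 1300.0)
```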
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
Modeling compressible multiphase flows with dispersed particles in both dense and dilute regimes
NASA Astrophysics Data System (ADS)
McGrath, T.; St. Clair, J.; Balachandar, S.
2018-05-01
Many important explosives and energetics applications involve multiphase formulations employing dispersed particles. While considerable progress has been made toward developing mathematical models and computational methodologies for these flows, significant challenges remain. In this work, we apply a mathematical model for compressible multiphase flows with dispersed particles to existing shock and explosive dispersal problems from the literature. The model is cast in an Eulerian framework, treats all phases as compressible, is hyperbolic, and satisfies the second law of thermodynamics. It directly applies the continuous-phase pressure gradient as a forcing function for particle acceleration and thereby retains relaxed characteristics for the dispersed particle phase that remove the constituent material sound velocity from the eigenvalues. This is consistent with the expected characteristics of dispersed particle phases and can significantly improve the stable time-step size for explicit methods. The model is applied to test cases involving the shock and explosive dispersal of solid particles and compared to data from the literature. Computed results compare well with experimental measurements, providing confidence in the model and computational methods applied.
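To make the forcing idea concrete, a schematic dispersed-phase momentum balance with the continuous-phase pressure gradient as the forcing term can be written as follows (an illustrative form only, not the authors' exact equations; the symbols are defined here for the sketch):

    \frac{\partial}{\partial t}\left(\alpha_p \rho_p u_p\right) + \nabla\cdot\left(\alpha_p \rho_p u_p u_p\right) = -\alpha_p \nabla p_g + D\,(u_g - u_p)

where \alpha_p is the particle volume fraction, \rho_p and u_p the particle-phase density and velocity, p_g and u_g the gas pressure and velocity, and D a drag coefficient. Because no dispersed-phase pressure gradient appears, the particle-phase characteristics remain relaxed and do not contain the constituent material sound speed, consistent with the behavior described above.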
Particle-based membrane model for mesoscopic simulation of cellular dynamics
NASA Astrophysics Data System (ADS)
Sadeghi, Mohsen; Weikl, Thomas R.; Noé, Frank
2018-01-01
We present a simple and computationally efficient coarse-grained and solvent-free model for simulating lipid bilayer membranes. In order to be used in concert with particle-based reaction-diffusion simulations, the model is purely based on interacting and reacting particles, each representing a coarse patch of a lipid monolayer. Particle interactions include nearest-neighbor bond-stretching and angle-bending and are parameterized so as to reproduce the local membrane mechanics given by the Helfrich energy density over a range of relevant curvatures. In-plane fluidity is implemented with Monte Carlo bond-flipping moves. The physical accuracy of the model is verified by five tests: (i) Power spectrum analysis of equilibrium thermal undulations is used to verify that the particle-based representation correctly captures the dynamics predicted by the continuum model of fluid membranes. (ii) It is verified that the input bending stiffness, against which the potential parameters are optimized, is accurately recovered. (iii) Isothermal area compressibility modulus of the membrane is calculated and is shown to be tunable to reproduce available values for different lipid bilayers, independent of the bending rigidity. (iv) Simulation of two-dimensional shear flow under a gravity force is employed to measure the effective in-plane viscosity of the membrane model and show the possibility of modeling membranes with specified viscosities. (v) Interaction of the bilayer membrane with a spherical nanoparticle is modeled as a test case for large membrane deformations and budding involved in cellular processes such as endocytosis. The results are shown to coincide well with the predicted behavior of continuum models, and the membrane model successfully mimics the expected budding behavior. We expect our model to be of high practical usability for ultra coarse-grained molecular dynamics or particle-based reaction-diffusion simulations of biological systems.
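For reference, the Helfrich energy that the particle interactions are tuned to reproduce is commonly written as follows (the standard form is assumed here; the abstract does not state which variant or constants the authors use):

    E = \int \left[ \frac{\kappa}{2}\,\left(2H - c_0\right)^2 + \bar{\kappa}\,K \right] \mathrm{d}A

with bending rigidity \kappa, mean curvature H, spontaneous curvature c_0, Gaussian curvature K, and saddle-splay modulus \bar{\kappa}.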
UV-VIS depolarization from Arizona Test Dust particles at exact backscattering angle
NASA Astrophysics Data System (ADS)
Miffre, Alain; Mehri, Tahar; Francis, Mirvatte; Rairoux, Patrick
2016-01-01
In this paper, a controlled laboratory experiment is performed to accurately evaluate the depolarization from mineral dust particles in the exact backward scattering direction (ϴ=180.0±0.2°). The experiment is carried out at two wavelengths simultaneously (λ=355 nm, λ=532 nm), on a determined size and shape distribution of Arizona Test Dust (ATD) particles, used as a proxy for mineral dust particles. After validating the set-up on spherical water droplets, two determined ATD-particle size distributions, representative of mineral dust after long-range transport, are generated to accurately retrieve the UV-VIS depolarization from ATD particles at the exact backscattering angle, a measurement not previously reported. The measured depolarization reaches at most 37.5% at λ=355 nm (35.5% at λ=532 nm) and depends on the particle size distribution. Moreover, these laboratory findings agree with T-matrix numerical simulations, at least for a determined particle size distribution and at a determined wavelength, showing the ability of the spheroidal model to reproduce mineral dust particles in the exact backward scattering direction. However, the spectral dependence of the measured depolarization could not be reproduced with the spheroidal model, even for non-uniformly distributed aspect ratios. Hence, these laboratory findings can be used to evaluate the applicability of the spheroidal model in the backward scattering direction and, moreover, to invert UV-VIS polarization lidar returns, which is useful for radiative transfer and climatology, in which mineral dust particles are strongly involved.
Samson, Shazwani; Basri, Mahiran; Fard Masoumi, Hamid Reza; Abdul Malek, Emilia; Abedi Karjiban, Roghayeh
2016-01-01
A predictive model of a virgin coconut oil (VCO) nanoemulsion system for the topical delivery of copper peptide (an anti-aging compound) was developed using an artificial neural network (ANN) to investigate the factors that influence particle size. Four independent variables, namely the amounts of VCO, Tween 80:Pluronic F68 (T80:PF68), xanthan gum, and water, were the inputs, whereas particle size was taken as the response for the trained network. Genetic algorithms (GA) were used to model the data, which were divided into training, testing, and validation sets. The model obtained indicated the high quality performance of the neural network and its capability to identify the critical composition factors for the VCO nanoemulsion. The main factor controlling the particle size was found to be xanthan gum (28.56%), followed by T80:PF68 (26.9%), VCO (22.8%) and water (21.74%). The formulation containing copper peptide was then successfully prepared using the optimum conditions, and a particle size of 120.7 nm was obtained. The final formulation exhibited a zeta potential lower than -25 mV and showed good physical stability in centrifugation tests, freeze-thaw cycle tests, and storage at 25°C and 45°C. PMID:27383135
Investigation of Particle Deposition in Internal Cooling Cavities of a Nozzle Guide Vane
NASA Astrophysics Data System (ADS)
Casaday, Brian Patrick
Experimental and computational studies were conducted regarding particle deposition in the internal film cooling cavities of nozzle guide vanes. An experimental facility was fabricated to simulate particle deposition on an impingement liner and upstream surface of a nozzle guide vane wall. The facility supplied particle-laden flow at temperatures up to 1000°F (540°C) to a simplified impingement cooling test section. The heated flow passed through a perforated impingement plate and impacted on a heated flat wall. The particle-laden impingement jets resulted in the buildup of deposit cones associated with individual impingement jets. The deposit growth rate increased with increasing temperature and decreasing impinging velocities. For some low flow rates or high flow temperatures, the deposit cone heights spanned the entire gap between the impingement plate and wall, and grew through the impingement holes. For high flow rates, deposit structures were removed by shear forces from the flow. At low temperatures, deposits formed not only as individual cones but also as ridges located at the mid-planes between impinging jets. A computational model was developed to predict the deposit buildup seen in the experiments. The test section geometry and fluid flow from the experiment were replicated computationally and an Eulerian-Lagrangian particle tracking technique was employed. Several particle sticking models were employed and tested for adequacy. Sticking models that accurately predicted locations and rates in external deposition experiments failed to predict certain structures or rates seen in internal applications. A geometry adaptation technique was employed and the effect on deposition prediction was discussed. A new computational sticking model was developed that predicts deposition rates based on the local wall shear. The growth patterns were compared to experiments under different operating conditions. Of all the sticking models employed, the model based on wall shear, in conjunction with geometry adaptation, proved to be the most accurate in predicting the forms of deposit growth. It was the only model that predicted the changing deposition trends based on flow temperature or Reynolds number, and is recommended for further investigation and application in the modeling of deposition in internal cooling cavities.
Gravitationally influenced particle creation models and late-time cosmic acceleration
NASA Astrophysics Data System (ADS)
Pan, Supriya; Kumar Pal, Barun; Pramanik, Souvik
In this work, we focus on the gravitationally influenced adiabatic particle creation process, a mechanism that does not need any dark energy or modified gravity models to explain the current accelerating phase of the universe. Introducing some particle creation models that generalize previous models in the literature, we constrain the cosmological scenarios using the latest compilation of the Type Ia Supernovae data only, the first indicator of the accelerating universe. Aside from the observational constraints on the models, we examine the models using two model-independent diagnostics, namely cosmography and Om. Further, we establish general conditions to test the thermodynamic viability of any particle creation model. Our analysis shows that at late times the models closely resemble ΛCDM cosmology, and that the models always satisfy the generalized second law of thermodynamics under certain conditions.
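For context, the Om diagnostic referred to above is commonly defined in the literature as (this standard definition is assumed; the abstract does not spell it out):

    \mathrm{Om}(z) = \frac{H^2(z)/H_0^2 - 1}{(1+z)^3 - 1}

which reduces to the constant matter density parameter \Omega_{m0} in ΛCDM, so any departure of Om(z) from a constant signals a deviation from ΛCDM.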
NASA Technical Reports Server (NTRS)
Mitchell, David L.; Chai, Steven K.; Dong, Yayi; Arnott, W. Patrick; Hallett, John
1993-01-01
The 1 November 1986 FIRE I case study was used to test an ice particle growth model which predicts bimodal size spectra in cirrus clouds. The model was developed from an analytically based model which predicts the height evolution of monomodal ice particle size spectra from the measured ice water content (IWC). Size spectra from the monomodal model are represented by a gamma distribution, N(D) = N₀ D^ν exp(−λD), where D is the ice particle maximum dimension. The slope parameter λ and the parameter N₀ are predicted from the IWC through the growth processes of vapor diffusion and aggregation. The model formulation is analytical, computationally efficient, and well suited for incorporation into larger models. The monomodal model has been validated against two other cirrus cloud case studies. From the monomodal size spectra, the size distributions which determine concentrations of ice particles less than about 150 μm are predicted.
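A minimal sketch of evaluating the monomodal gamma spectrum quoted above (the parameter values are placeholders, not the values the model predicts from IWC):

    import numpy as np

    def gamma_spectrum(D, N0, nu, lam):
        """Gamma size distribution N(D) = N0 * D**nu * exp(-lam * D)."""
        D = np.asarray(D, dtype=float)
        return N0 * D**nu * np.exp(-lam * D)

    # placeholder parameters; D in micrometers on a uniform grid
    D = np.linspace(1.0, 1000.0, 1000)
    N = gamma_spectrum(D, N0=1.0e2, nu=0.0, lam=2.0e-2)
    small = D < 150.0
    print(N[small].sum() / N.sum())   # fraction of the number concentration below ~150 micrometers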
AGR-5/6/7 Irradiation Test Predictions using PARFUME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skerjanc, William F.
PARFUME (PARticle FUel ModEl), a fuel performance modeling code used for high temperature gas-cooled reactors (HTGRs), was used to model the Advanced Gas Reactor (AGR)-5/6/7 irradiation test using predicted physics and thermal hydraulics data. The AGR-5/6/7 test consists of the combined fifth, sixth, and seventh planned irradiations of the AGR Fuel Development and Qualification Program. The AGR-5/6/7 test train is a multi-capsule, instrumented experiment that is designed for irradiation in the 133.4-mm diameter north east flux trap (NEFT) position of the Advanced Test Reactor (ATR). Each capsule contains compacts filled with uranium oxycarbide (UCO) unaltered fuel particles. This report documents the calculations performed to predict the failure probability of tristructural isotropic (TRISO)-coated fuel particles during the AGR-5/6/7 experiment. In addition, this report documents the calculated source term from the driver fuel. The calculations include modeling of the AGR-5/6/7 irradiation that is scheduled to occur from October 2017 to April 2021 over a total of 13 ATR cycles, including nine normal cycles and four Power Axial Locator Mechanism (PALM) cycles, for a total of 500-550 effective full power days (EFPD). Based on the irradiation conditions and material properties of the AGR-5/6/7 test, zero fuel particle failures were predicted in Capsules 1, 2, and 4. Fuel particle failures were predicted in Capsule 3 due to internal particle pressure; these failures were predicted in the highest temperature compacts. Capsule 5 fuel particle failures were due to inner pyrolytic carbon (IPyC) cracking causing localized stress concentrations in the SiC layer. This capsule had the highest predicted particle failures due to the lower irradiation temperature. In addition, shrinkage of the buffer and IPyC layers during irradiation resulted in formation of a buffer-IPyC gap. The two capsules at the ends of the test train, Capsules 1 and 5, experienced the smallest buffer-IPyC gap formation due to the lower irradiation fluences and temperatures. Capsule 3 experienced the largest buffer-IPyC gap formation of just under 24 µm. The release fractions of the fission products silver (Ag), cesium (Cs), and strontium (Sr) vary depending on capsule location and irradiation temperature. The maximum release fraction of Ag occurs in Capsule 3, reaching up to 84.8% for the TRISO fuel particles. The release fractions of the other two fission products, Cs and Sr, are much smaller and, in most cases, less than 1%. The notable exception is again in Capsule 3, where the release fractions for Cs and Sr reach up to 9.7% and 19.1%, respectively.
Beamlets from stochastic acceleration
NASA Astrophysics Data System (ADS)
Perri, Silvia; Carbone, Vincenzo
2008-09-01
We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.
NASA Technical Reports Server (NTRS)
Toon, O. B.; Turco, R. P.; Hamill, P.; Kiang, C. S.; Whitten, R. C.
1979-01-01
Sensitivity tests were performed on a one-dimensional, physical-chemical model of the unperturbed stratospheric aerosols, and model calculations were compared with observations. The tests and comparisons suggest that coagulation controls the particle number mixing ratio, although the number of condensation nuclei at the tropopause and the diffusion coefficient at high altitudes are also important. The sulfur gas source strength and the aerosol residence time are much more important than the supply of condensation nuclei in establishing mass and large particle concentrations. The particle size is also controlled mainly by gas supply and residence time. Suggestions are made for in situ aerosol observations and laboratory measurements of aerosol parameters that can provide further information about the physics and chemistry of the stratosphere and the aerosols found there.
Creation and Evolution of Particle Number Asymmetry in an Expanding Universe
NASA Astrophysics Data System (ADS)
Morozumi, T.; Nagao, K. I.; Adam, A. S.; Takata, H.
2017-03-01
We introduce a model which may generate particle number asymmetry in an expanding Universe. The model includes charge-parity (CP) violating and particle number violating interactions, and consists of a real scalar field and a complex scalar field. Starting with an initial condition specified by a density matrix, we show how the asymmetry is created through the interaction and how it evolves at later times. We compute the asymmetry using non-equilibrium quantum field theory and, as a first test of the model, we study how the asymmetry evolves in the flat limit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhlig, W. Casey; Heine, Andreas, E-mail: andreas.heine@emi.fraunhofer.de
2015-11-14
A new measurement technique is suggested to augment the characterization and understanding of hypervelocity projectiles before impact. The electromagnetic technique utilizes magnetic diffusion principles to detect particles, measure velocity, and indicate relative particle dimensions. It is particularly suited for detection of small particles that may be difficult to track utilizing current characterization methods, such as high-speed video or flash radiography, but can be readily used for large particle detection, where particle spacing or location is not practical for other measurement systems. In this work, particles down to 2 mm in diameter have been characterized while focusing on confining the detection signal to enable multi-particle characterization with limited particle-to-particle spacing. The focus of the paper is on the theoretical concept and the analysis of its applicability based on analytical and numerical calculation. First proof-of-principle experimental tests serve to further validate the method. Some potential applications are the characterization of particles from a shaped-charge jet after its break-up and investigating debris in impact experiments to test theoretical models for the distribution of particle size, number, and velocity.
Respirator Performance against Nanoparticles under Simulated Workplace Activities
Vo, Evanly; Zhuang, Ziqing; Horvatin, Matthew; Liu, Yuewei; He, Xinjian; Rengasamy, Samy
2017-01-01
Filtering facepiece respirators (FFRs) and elastomeric half-mask respirators (EHRs) are commonly used by workers for protection against potentially hazardous particles, including engineered nanoparticles. The purpose of this study was to evaluate the performance of these types of respirators against 10–400 nm particles using human subjects exposed to NaCl aerosols under simulated workplace activities. Simulated workplace protection factors (SWPFs) were measured for eight combinations of respirator models (2 N95 FFRs, 2 P100 FFRs, 2 N95 EHRs, and 2 P100 EHRs) worn by 25 healthy test subjects (13 females and 12 males) with varying face sizes. Before beginning a SWPF test for a given respirator model, each subject had to pass a quantitative fit test. Each SWPF test was performed using a protocol of six exercises for 3 min each: (i) normal breathing, (ii) deep breathing, (iii) moving head side to side, (iv) moving head up and down, (v) bending at the waist, and (vi) a simulated laboratory-vessel cleaning motion. Two scanning mobility particle sizers were used simultaneously to measure the upstream (outside the respirator) and downstream (inside the respirator) test aerosol; SWPF was then calculated as a ratio of the upstream and downstream particle concentrations. In general, geometric mean SWPF (GM-SWPF) was highest for the P100 EHRs, followed by P100 FFRs, N95 EHRs, and N95 FFRs. This trend holds true for nanoparticles (10–100 nm), larger size particles (100–400 nm), and the ‘all size’ range (10–400 nm). All respirators provided better or similar performance levels for 10–100 nm particles as compared to larger 100–400 nm particles. This study found that class P100 respirators provided higher SWPFs compared to class N95 respirators (P<0.05) for both FFR and EHR types. All respirators provided expected performance (i.e. fifth percentile SWPF > 10) against all particle size ranges tested. PMID:26180261
Cloud-In-Cell modeling of shocked particle-laden flows at a ``SPARSE'' cost
NASA Astrophysics Data System (ADS)
Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.
2017-11-01
A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a ``Cloud-In-Cell'' (CIC) approach which amalgamates groups of physical particles into computational ``macro-particles''. CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.
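A rough sketch of the cloud-level bookkeeping described above: estimate the macro-particle cloud's mean and covariance, then spread the cloud's source contribution over grid points with a bivariate Gaussian built from that covariance (illustrative only; the SPARSE subgrid closure terms are not reproduced, and all names are hypothetical):

    import numpy as np

    def gaussian_source_weights(cloud_xy, grid_xy):
        """Normalized weights for depositing a particle cloud's source terms onto
        2-D grid points via a bivariate Gaussian with the cloud's own mean/covariance."""
        mu = cloud_xy.mean(axis=0)                    # cloud centroid
        cov = np.cov(cloud_xy.T) + 1e-12 * np.eye(2)  # covariance captures deformation to first order
        inv = np.linalg.inv(cov)
        d = grid_xy - mu
        w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))
        return w / w.sum()

    # usage with a synthetic cloud and a small grid
    rng = np.random.default_rng(1)
    cloud = rng.normal([0.0, 0.0], [0.3, 0.1], size=(200, 2))
    gx, gy = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(gaussian_source_weights(cloud, grid).sum())   # weights sum to one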
NASA Astrophysics Data System (ADS)
Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart
2017-07-01
We present a collision model for phase-resolved Direct Numerical Simulations of sediment transport that couple the fluid and particles by the Immersed Boundary Method. Typically, a contact model for these types of simulations comprises a lubrication force for particles in close proximity to another solid object, a normal contact force to prevent particles from overlapping, and a tangential contact force to account for friction. Our model extends the work of previous authors to improve upon the time integration scheme to obtain consistent results for particle-wall collisions. Furthermore, we account for polydisperse spherical particles and introduce new criteria to account for enduring contact, which occurs in many sediment transport situations. This is done without using arbitrary values for physically-defined parameters and by maintaining the full momentum balance of a particle in enduring contact. We validate our model against several test cases for binary particle-wall collisions as well as the collective motion of a sediment bed sheared by a viscous flow, yielding satisfactory agreement with experimental data by various authors.
NASA Astrophysics Data System (ADS)
Lerner, Paul; Marchal, Olivier; Lam, Phoebe J.; Anderson, Robert F.; Buesseler, Ken; Charette, Matthew A.; Edwards, R. Lawrence; Hayes, Christopher T.; Huang, Kuo-Fang; Lu, Yanbin; Robinson, Laura F.; Solow, Andrew
2016-07-01
Thorium is a highly particle-reactive element that possesses different measurable radio-isotopes in seawater, with well-constrained production rates and very distinct half-lives. As a result, Th has emerged as a key tracer for the cycling of marine particles and of their chemical constituents, including particulate organic carbon. Here two different versions of a model of Th and particle cycling in the ocean are tested using an unprecedented data set from station GT11-22 of the U.S. GEOTRACES North Atlantic Section: (i) 228,230,234Th activities of dissolved and particulate fractions, (ii) 228Ra activities, (iii) 234,238U activities estimated from salinity data and an assumed 234U/238U ratio, and (iv) particle concentrations, below a depth of 125 m. The two model versions assume a single class of particles but rely on different assumptions about the rate parameters for sorption reactions and particle processes: a first version (V1) assumes vertically uniform parameters (a popular description), whereas the second (V2) does not. Both versions are tested by fitting to the GT11-22 data using generalized nonlinear least squares and by analyzing residuals normalized to the data errors. We find that model V2 displays a significantly better fit to the data than model V1. Thus, the mere allowance of vertical variations in the rate parameters can lead to a significantly better fit to the data, without the need to modify the structure or add any new processes to the model. To understand how the better fit is achieved we consider two parameters, K = k₁/(k₋₁ + β₋₁) and K/P, where k₁ is the adsorption rate constant, k₋₁ the desorption rate constant, β₋₁ the remineralization rate constant, and P the particle concentration. We find that the rate constant ratio K is large (⩾ 0.2) in the upper 1000 m and decreases to a nearly uniform value of ca. 0.12 below 2000 m, implying that the specific rate at which Th attaches to particles relative to that at which it is released from particles is higher in the upper ocean than in the deep ocean. In contrast, K/P increases with depth below 500 m. The parameters K and K/P display significant positive and negative monotonic relationships with P, respectively, which is collectively consistent with a particle concentration effect.
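A small sketch of the two diagnostic ratios defined above, using hypothetical rate constants and particle concentrations (the numbers are placeholders, not the GT11-22 estimates):

    import numpy as np

    # hypothetical depth profiles (rate constants per year, particle concentration in arbitrary units)
    k1          = np.array([0.8, 0.6, 0.4, 0.3])    # adsorption rate constant k_1
    k_minus1    = np.array([2.0, 2.5, 3.0, 3.0])    # desorption rate constant k_-1
    beta_minus1 = np.array([1.0, 1.0, 0.8, 0.5])    # remineralization rate constant beta_-1
    P           = np.array([50.0, 20.0, 10.0, 5.0]) # particle concentration

    K = k1 / (k_minus1 + beta_minus1)   # dimensionless rate constant ratio
    print("K   =", K)
    print("K/P =", K / P)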
Montero-Chacón, Francisco; Cifuentes, Héctor; Medina, Fernando
2017-02-21
This work presents a lattice-particle model for the analysis of steel fiber-reinforced concrete (SFRC). In this approach, fibers are explicitly modeled and connected to the concrete matrix lattice via interface elements. The interface behavior was calibrated by means of pullout tests and a range for the bond properties is proposed. The model was validated with analytical and experimental results under uniaxial tension and compression, demonstrating the ability of the model to correctly describe the effect of fiber volume fraction and distribution on fracture properties of SFRC. The lattice-particle model was integrated into a hierarchical homogenization-based scheme in which macroscopic material parameters are obtained from mesoscale simulations. Moreover, a representative volume element (RVE) analysis was carried out and the results show that such an RVE does exist in the post-peak regime and until localization takes place. Finally, the multiscale upscaling strategy was successfully validated with three-point bending tests.
Research on mining truck vibration control based on particle damping
NASA Astrophysics Data System (ADS)
Liming, Song; Wangqiang, Xiao; Zeguang, Li; Haiquan, Guo; Zhe, Yang
2018-03-01
Research on mining truck driving comfort has received increasing attention. As the terminal of the vibration transfer path, the cab is an important focus of mining truck vibration control. In this paper, based on particle damping technology and its application characteristics, particle damping was successfully applied to the driver's seat base of a mining truck through discrete element modeling, coupled DEM-FEM simulation and analysis, laboratory test verification, and testing on the actual truck. Cab vibration was reduced markedly, and an applied research approach and method for particle damping technology in mining truck vibration control are provided.
DEM code-based modeling of energy accumulation and release in structurally heterogeneous rock masses
NASA Astrophysics Data System (ADS)
Lavrikov, S. V.; Revuzhenko, A. F.
2015-10-01
Based on the discrete element method, the authors model loading of a physical specimen to describe its capacity to accumulate and release elastic energy. The specimen is modeled as a packing of particles with viscoelastic coupling and friction. The external elastic boundary of the packing is represented by particles connected by elastic springs; this amounts to introducing an additional interaction potential between the boundary particles that acts even when the particles are not in direct contact. On the whole, the model specimen represents an element of a medium capable of accumulating deformation energy in the form of internal stresses. The numerical modeling of specimen compression and the laboratory testing results show good qualitative consistency.
Particle-to-PFU ratio of Ebola virus influences disease course and survival in cynomolgus macaques.
Alfson, Kendra J; Avena, Laura E; Beadles, Michael W; Staples, Hilary; Nunneley, Jerritt W; Ticer, Anysha; Dick, Edward J; Owston, Michael A; Reed, Christopher; Patterson, Jean L; Carrion, Ricardo; Griffiths, Anthony
2015-07-01
This study addresses the role of Ebola virus (EBOV) specific infectivity in virulence. Filoviruses are highly lethal, enveloped, single-stranded negative-sense RNA viruses that can cause hemorrhagic fever. No approved vaccines or therapies exist for filovirus infections, and infectious virus must be handled in maximum containment. Efficacy testing of countermeasures, in addition to investigations of pathogenicity and immune response, often requires a well-characterized animal model. For EBOV, an obstacle in performing accurate disease modeling is a poor understanding of what constitutes an infectious dose in animal models. One well-recognized consequence of viral passage in cell culture is a change in specific infectivity, often measured as a particle-to-PFU ratio. Here, we report that serial passages of EBOV in cell culture resulted in a decrease in particle-to-PFU ratio. Notably, this correlated with decreased potency in a lethal cynomolgus macaque (Macaca fascicularis) model of infection; animals were infected with the same viral dose as determined by plaque assay, but animals that received more virus particles exhibited increased disease. This suggests that some particles are unable to form a plaque in a cell culture assay but are able to result in lethal disease in vivo. These results have a significant impact on how future studies are designed to model EBOV disease and test countermeasures. Ebola virus (EBOV) can cause severe hemorrhagic disease with a high case-fatality rate, and there are no approved vaccines or therapies. Specific infectivity can be considered the total number of viral particles per PFU, and its impact on disease is poorly understood. In stocks of most mammalian viruses, there are particles that are unable to complete an infectious cycle or unable to cause cell pathology in cultured cells. We asked if these particles cause disease in nonhuman primates by infecting monkeys with equal infectious doses of genetically identical stocks possessing either high or low specific infectivities. Interestingly, some particles that did not yield plaques in cell culture assays were able to result in lethal disease in vivo. Furthermore, the number of PFU needed to induce lethal disease in animals was very low. Our results have a significant impact on how future studies are designed to model EBOV disease and test countermeasures.
An experimental and theoretical investigation of deposition patterns from an agricultural airplane
NASA Technical Reports Server (NTRS)
Morris, D. J.; Croom, C. C.; Vandam, C. P.; Holmes, B. J.
1984-01-01
A flight test program has been conducted with a representative agricultural airplane to provide data for validating a computer program model which predicts aerially applied particle deposition. Test procedures and the data from this test are presented and discussed. The computer program features are summarized, and comparisons of predicted and measured particle deposition are presented. Applications of the computer program for spray pattern improvement are illustrated.
Cederwall, R T; Peterson, K R
1990-11-01
A three-dimensional atmospheric transport and diffusion model is used to calculate the arrival and deposition of fallout from 13 selected nuclear tests at the Nevada Test Site (NTS) in the 1950s. Results are used to extend NTS fallout patterns to intermediate downwind distances (300 to 1200 km). The radioactive cloud is represented in the model by a population of Lagrangian marker particles, with concentrations calculated on an Eulerian grid. Use of marker particles, with fall velocities dependent on particle size, provides a realistic simulation of fallout as the debris cloud travels downwind. The three-dimensional wind field is derived from observed data, adjusted for mass consistency. Terrain is represented in the grid, which extends up to 1200 km downwind of NTS and has 32-km horizontal resolution and 1-km vertical resolution. Ground deposition is calculated by a deposition-velocity approach. Source terms and relationships between deposition and exposure rate are based on work by Hicks. Uncertainty in particle size and vertical distributions within the debris cloud (and stem) allows for some model "tuning" to better match measured ground-deposition values. Particle trajectories representing different sizes and starting heights above ground zero are used to guide source specification. An hourly time history of the modeled fallout pattern as the debris cloud moves downwind provides estimates of fallout arrival times. Results for event HARRY illustrate the methodology. The composite deposition pattern for all 13 tests is characterized by two lobes extending out to the north-northeast and east-northeast, respectively, at intermediate distances from NTS. Arrival estimates, along with modeled deposition values, augment measured deposition data in the development of data bases at the county level; these data bases are used for estimating radiation exposure at intermediate distances downwind of NTS. Results from a study of event TRINITY are also presented.
ERIC Educational Resources Information Center
Nair, Priya; Ankeny, Casey J.; Ryan, Justin; Okcay, Murat; Frakes, David H.
2016-01-01
We investigated the use of a new system, HemoFlow™, which utilizes state of the art technologies such as particle image velocimetry to test endovascular devices as part of an undergraduate biomedical engineering curriculum. Students deployed an endovascular stent into an anatomical model of a cerebral aneurysm and measured intra-aneurysmal flow…
NASA Astrophysics Data System (ADS)
Ozdemir, Ozan C.; Widener, Christian A.; Carter, Michael J.; Johnson, Kyle W.
2017-10-01
As the industrial application of the cold spray technology grows, the need to optimize both the cost and the quality of the process grows with it. Parameter selection techniques available today require a coupled system of equations to be solved in order to account for the losses due to particle loading in the gas stream. Such analyses cause a significant increase in the computational time in comparison with calculations with isentropic flow assumptions. In cold spray operations, engineers and operators may, therefore, neglect the effects of particle loading to simplify the multiparameter optimization process. In this study, two-way coupled (particle-fluid) quasi-one-dimensional fluid dynamics simulations are used to test the particle loading effects under many potential cold spray scenarios. Output of the simulations is statistically analyzed to build regression models that estimate the changes in particle impact velocity and temperature due to particle loading. This approach eases particle loading optimization for a more complete analysis of deposition cost and time. The model was validated both numerically and experimentally. Further numerical analyses were completed to test the particle loading capacity and limitations of a nozzle with a commonly used throat size. Additional experimentation helped document the physical limitations to high-rate deposition.
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation applications. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011, 2011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
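For reference, a minimal sketch of one common resampling step in an SIR particle filter (systematic resampling is just one standard choice; the abstract does not state which variant was used):

    import numpy as np

    def systematic_resample(weights, rng):
        """Return particle indices drawn by systematic resampling."""
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        cumulative = np.cumsum(weights / np.sum(weights))
        idx = np.searchsorted(cumulative, positions)
        return np.minimum(idx, n - 1)   # guard against round-off at the upper end

    # usage: resample a small ensemble according to its importance weights
    rng = np.random.default_rng(0)
    w = np.array([0.05, 0.05, 0.6, 0.2, 0.1])
    print(systematic_resample(w, rng))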
The structure of evaporating and combusting sprays: Measurements and predictions
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Faeth, F. M.
1983-01-01
The structure of particle-laden jets and nonevaporating and evaporating sprays was measured in order to evaluate models of these processes. Three models are being evaluated: (1) a locally homogeneous flow model, where slip between the phases is neglected and the flow is assumed to be in local thermodynamic equilibrium; (2) a deterministic separated flow model, where slip and finite interphase transport rates are considered but effects of particle/drop dispersion by turbulence and effects of turbulence on interphase transport rates are ignored; and (3) a stochastic separated flow model, where effects of interphase slip, turbulent dispersion and turbulent fluctuations are considered using random sampling for turbulence properties in conjunction with random-walk computations for particle motion. All three models use a k-ε-g turbulence model. All testing and data reduction have been completed for the particle-laden jets. Mean and fluctuating velocities of the continuous phase and mean mixture fraction were measured in the evaporating sprays.
Characterization of Cement Thickening Time Properties and Modeling of Thickening Time
NASA Astrophysics Data System (ADS)
Coryell, Tyler Neil
A comprehensive model of cement thickening time, as applied in the oil field, that incorporates all the properties internal to the cement design has never been created. To address this issue, different variables were tested, including barite particle size, hydroxyethylcellulose (HEC) concentration, age or exposure of the cement to humidity, downhole temperature, and the particle size of the cement. Barite particle size was shown to have no significant effect on thickening time. Age of the sample was also shown to have no significant effect on thickening time, at least for our storage conditions in the laboratory. The testing of nano-sized cement particles currently suggests that the smaller particles may increase thickening time. While such a result is not absent from other works, it is unusual. Due to the lack of conclusive evidence for nanoparticle cement, that work is included as it currently stands but is not taken into consideration in our models. The downhole temperature and the HEC concentration are used to create our models. With this research, it is shown that creating a numerical model is a practical investment in our future understanding of cement's field use. Three model systems are used. The first uses equations for predicting the time when thickening first begins and the thickness at that time. The second uses the expected rate of change to find the curvature that defines the acceleration. The third model improves on some scatter that could not be controlled in the second model by using the first derivative to find the point of maximum slope and the time at which it occurs. By using this maximum-slope point, the 'pumpable' time of the cement before it thickens can be estimated. All the models can be used in tandem to describe the cement thickening process. However, the most accurate approach is to use the first model with the third, i.e., the direct model for when acceleration begins and the first-derivative model to find the end of the thickening time. All the models can be extended in future work to include a broader test matrix and other chemical additives for the base cement.
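A minimal sketch of the key step of the third model as described above: numerically differentiate a consistency-versus-time curve and locate the time of maximum slope (the curve below is a synthetic placeholder, not test data):

    import numpy as np

    def time_of_max_slope(t, consistency):
        """Return the time at which the thickening curve is steepest and the slope there."""
        slope = np.gradient(consistency, t)   # numerical first derivative
        i = int(np.argmax(slope))
        return t[i], slope[i]

    # synthetic thickening curve: slow build-up followed by a rapid set
    t = np.linspace(0.0, 300.0, 301)                          # minutes
    bc = 10.0 + 90.0 / (1.0 + np.exp(-(t - 200.0) / 10.0))    # placeholder consistency values
    print(time_of_max_slope(t, bc))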
NASA Astrophysics Data System (ADS)
Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.
2017-07-01
We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.
Shear test on viscoelastic granular material using Contact Dynamics simulations
NASA Astrophysics Data System (ADS)
Quezada, Juan Carlos; Sagnol, Loba; Chazallon, Cyrille
2017-06-01
By means of 3D contact dynamics simulations, the behavior of a viscoelastic granular material under shear loading is investigated. A viscoelastic fluid phase surrounding the solid particles is simulated by a contact model acting between them. This contact law, based on the Burgers model, was implemented in the LMGC90 software and is also able to simulate the effect of creep relaxation. To validate the proposed contact model, several direct shear tests were performed, experimentally and numerically, using the Leutner device. The numerical samples were created using spheres with two particle size distributions, each identified for one of two layers of a road structure. Our results show reasonable agreement between experimental and numerical data regarding the stress-strain evolution curves and the stress levels measured at failure. The proposed model can be used to simulate the mechanical behavior of multi-layer road structures and to study the influence of traffic on road deformation, cracking, and particle pull-out induced by traffic loading.
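For reference, the creep compliance of the standard Burgers model (a Maxwell and a Kelvin-Voigt element in series), which underlies the contact law mentioned above, is (standard form; the mapping of these parameters onto the LMGC90 contact implementation is not given in the abstract):

    J(t) = \frac{1}{E_1} + \frac{t}{\eta_1} + \frac{1}{E_2}\left(1 - e^{-E_2 t/\eta_2}\right)

where E_1 and \eta_1 are the Maxwell spring stiffness and dashpot viscosity, and E_2 and \eta_2 those of the Kelvin-Voigt element; the exponential term reproduces the delayed (creep-relaxation) response noted above.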
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2003-04-01
A method for the calculation of source-receptor (s-r) relationships (sensitivity of a trace substance concentration at some place and time to emission at some place and time) with Lagrangian particle models has been derived and presented previously (Air Pollution Modeling and its Application XIV, Proc. of ITM Boulder 2000). Now, the generalisation to any linear s-r relationship, including dry and wet deposition, decay etc., is presented. It was implemented in the model FLEXPART and tested extensively in idealised set-ups. These tests turned out to be very useful for finding minor model bugs and inaccuracies, and can be recommended generally for model testing. Recently, a convection scheme has been integrated in FLEXPART which was also tested. Both source and receptor can be specified in mass mixing ratio or mass units. Properly taking care of this is quite relevant for sources and receptors at different levels in the atmosphere. Furthermore, we present a test with the transport of aerosol-bound Caesium-137 from the areas contaminated by the Chernobyl disaster to Stockholm during one month.
Ice Particle Analysis of the Honeywell ALF502 Engine Booster
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Rigby, David L.
2015-01-01
A flow and ice particle trajectory analysis was performed for the booster of the Honeywell ALF502 engine. The analysis focused on two closely related conditions, one of which produced an icing event and one which did not, during testing of the ALF502 engine in the Propulsion Systems Lab (PSL) at NASA Glenn Research Center. The flow analysis was generated using the NASA Glenn GlennHT flow solver, and the particle analysis was generated using the NASA Glenn LEWICE3D v3.63 ice accretion software. The inflow conditions for the two conditions were similar, with the main differences being that the condition that produced the icing event was 6.8 K colder than the non-icing event case and the inflow ice water content (IWC) for the non-icing event case was 50% less than for the icing event case. The particle analysis, which considered sublimation, evaporation and phase change, was generated for a 5 micron ice particle with a sticky impact model and for a 24 micron median volume diameter (MVD), 7 bin ice particle distribution with a supercooled large droplet (SLD) splash model used to simulate ice particle breakup. The particle analysis did not consider the effect of the runback and re-impingement of water resulting from the heated spinner and anti-icing system. The results from the analysis showed that the amount of impingement for the components was similar for the same particle size and impact model for the icing and non-icing event conditions. This was attributed to the similar aerodynamic conditions in the booster for the two cases. The particle temperature and melt fraction were higher at the same location and particle size for the non-icing event case than for the icing event case, due to the higher incoming inflow temperature for the non-icing event case. The 5 micron ice particle case produced higher impact temperatures and higher melt fractions on the components downstream of the fan than the 24 micron MVD case because the average particle size generated by the particle breakup was larger than 5 microns, which yielded less warming and melting. The analysis also showed that the melt fraction and wet bulb temperature icing criterion developed during tests in the Research Altitude Test Facility (RATFac) at the National Research Council (NRC) of Canada were useful in predicting icing events in the ALF502 engine. The development of an ice particle impact model which includes the effects of particle breakup, phase change, and surface state is necessary to further improve the prediction of ice particle transport with phase change through turbomachinery.
NASA Astrophysics Data System (ADS)
Liu, D.; Fu, X.; Liu, X.
2016-12-01
In nature, granular materials exist widely in water bodies. Understanding the fundamentals of solid-liquid two-phase flow, such as turbulent sediment-laden flow, is of importance for a wide range of applications. A coupling method combining computational fluid dynamics (CFD) and the discrete element method (DEM) is now widely used for modeling such flows. In this method, when particles are significantly larger than the CFD cells, the fluid field around each particle should be fully resolved. On the other hand, the "unresolved" model is designed for the situation where particles are significantly smaller than the mesh cells. Using the "unresolved" model, large numbers of particles can be simulated simultaneously. However, there is a gap between these two situations when the sizes of the DEM particles and the CFD cells are of the same order of magnitude. In this work, the most commonly used void fraction models are tested with numerical sedimentation experiments, and the range of applicability for each model is presented. Based on this, a new void fraction model, i.e., a modified version of the "tri-linear" model, is proposed. Particular attention is paid to smoothing the void fraction function in order to avoid numerical instability. The results show good agreement with the experimental data and analytical solutions for both single-particle and group-particle motion, indicating the great potential of the new void fraction model.
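As a one-dimensional illustration of spreading particle volume over neighboring cells with linear weights (a generic sketch only, not the authors' modified tri-linear model; all names are hypothetical):

    import numpy as np

    def void_fraction_1d(xp, vp, x_edges):
        """Distribute each particle's volume linearly between the two nearest cell
        centers and return the void fraction of each cell (unit cross-section)."""
        xc = 0.5 * (x_edges[:-1] + x_edges[1:])       # cell centers
        dx = np.diff(x_edges)                         # cell widths
        solid = np.zeros(len(xc))
        for x, v in zip(xp, vp):
            i = int(np.clip(np.searchsorted(xc, x) - 1, 0, len(xc) - 2))
            w = np.clip((x - xc[i]) / (xc[i + 1] - xc[i]), 0.0, 1.0)
            solid[i]     += (1.0 - w) * v             # share to the left cell
            solid[i + 1] += w * v                     # share to the right cell
        return 1.0 - solid / dx

    # usage: a few small spheres in a column of ten cells
    edges = np.linspace(0.0, 1.0, 11)
    xp = np.array([0.12, 0.15, 0.52, 0.55, 0.58])
    vp = np.full_like(xp, 4.0 / 3.0 * np.pi * 0.02**3)
    print(void_fraction_1d(xp, vp, edges))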
NASA Astrophysics Data System (ADS)
Alizadeh Behjani, Mohammadreza; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew
2017-06-01
Segregation of granules is an undesired phenomenon in which particles in a mixture separate from each other based on the differences in their physical and chemical properties. It is, therefore, crucial to control the homogeneity of the system by applying appropriate techniques. This requires a fundamental understanding of the underlying mechanisms. In this study, the effect of particle shape and cohesion has been analysed. As a model system prone to segregation, a ternary mixture of particles representing the common ingredients of home washing powders, namely, spray dried detergent powders, tetraacetylethylenediamine, and enzyme placebo (as the minor ingredient) during heap formation is modelled numerically by the Discrete Element Method (DEM) with an aim to investigate the effect of cohesion/adhesion of the minor components on segregation quality. Non-spherical particle shapes are created in DEM using the clumped-sphere method based on their X-ray tomograms. Experimentally, inter particle adhesion is generated by coating the minor ingredient (enzyme placebo) with Polyethylene Glycol 400 (PEG 400). The JKR theory is used to model the cohesion/adhesion of coated enzyme placebo particles in the simulation. Tests are carried out experimentally and simulated numerically by mixing the placebo particles (uncoated and coated) with the other ingredients and pouring them in a test box. The simulation and experimental results are compared qualitatively and quantitatively. It is found that coating the minor ingredient in the mixture reduces segregation significantly while the change in flowability of the system is negligible.
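For reference, in the JKR theory used above for the coated-particle cohesion/adhesion, the pull-off force between two adhesive elastic spheres is (standard JKR result; the calibration actually used in the DEM simulations is not given in the abstract):

    F_{\mathrm{po}} = \frac{3}{2}\,\pi\,\Gamma\,R^{*}, \qquad \frac{1}{R^{*}} = \frac{1}{R_1} + \frac{1}{R_2}

where \Gamma is the work of adhesion per unit contact area and R^{*} is the effective radius of the two contacting particles.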
Observational physics of mirror world
NASA Technical Reports Server (NTRS)
Khlopov, M. YA.; Beskin, G. M.; Bochkarev, N. E.; Pustilnik, L. A.; Pustilnik, S. A.
1989-01-01
The existence of a whole world of shadow particles, interacting with each other but having no interactions with ordinary particles except through gravity, is a specific feature of modern superstring models, which are considered candidates for a theory of everything. The presence of shadow particles is a necessary condition in superstring models, compensating the asymmetry between left- and right-chirality states of ordinary particles. If compactification of the additional dimensions retains the symmetry of left and right states, the shadow world turns out to be a mirror world, with particles and fields whose properties are strictly symmetric to those of the corresponding ordinary particles and fields. Owing to the strict symmetry of physical laws for ordinary and mirror particles, the analysis of the cosmological evolution of mirror matter provides rather definite conclusions on the possible effects of mirror particles in the universe. A general qualitative discussion of the possible astronomical impact of mirror matter is given, in order to enable astronomical observational searches for the effects of the mirror world that are as wide as possible, such searches being the only way to test the existence of mirror partners of ordinary particles in Nature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.
Safety tests were conducted on fuel compacts from AGR-1, the first irradiation experiment of the Advanced Gas Reactor (AGR) Fuel Development and Qualification program, at temperatures ranging from 1600 to 1800 °C to determine fission product release at temperatures that bound reactor accident conditions. The PARFUME (PARticle FUel ModEl) code was used to predict the release of fission products silver, cesium, strontium, and krypton from fuel compacts containing tristructural isotropic (TRISO) coated particles during 15 of these safety tests. Comparisons between PARFUME predictions and post-irradiation examination results of the safety tests were conducted on two types of AGR-1 compacts: compacts containing only intact particles and compacts containing one or more particles whose SiC layers failed during safety testing. In both cases, PARFUME globally over-predicted the experimental release fractions by several orders of magnitude: more than three (intact) and two (failed SiC) orders of magnitude for silver, more than three and up to two orders of magnitude for strontium, and up to two and more than one orders of magnitude for krypton. The release of cesium from intact particles was also largely over-predicted (by up to five orders of magnitude) but its release from particles with failed SiC was only over-predicted by a factor of about 3. These over-predictions can be largely attributed to an over-estimation of the diffusivities used in the modeling of fission product transport in TRISO-coated particles. The integral release nature of the data makes it difficult to estimate the individual over-estimations in the kernel or each coating layer. Nevertheless, a tentative assessment of correction factors to these diffusivities was performed to enable a better match between the modeling predictions and the safety testing results. The method could only be successfully applied to silver and cesium. In the case of strontium, correction factors could not be assessed because potential release during the safety tests could not be distinguished from matrix content released during irradiation. Furthermore, in the case of krypton, all the coating layers are partly retentive and the available data did not allow the level of retention in individual layers to be determined, hence preventing derivation of any correction factors.
An in vitro analysis of a carotid artery stent with a protective porous membrane.
Müller-Hülsbeck, Stefan; Hüsler, Erhard J; Schaffner, Silvio R; Jahnke, Thomas; Glass, Christoph; Wenke, Rüdiger; Heller, Martin
2004-11-01
To prove the effectiveness of a new stent concept with integrated protection (MembraX [MX]) by comparing it with five cerebral protection devices designed for carotid angioplasty in an in vitro model. Two simulation series of embolization from carotid angioplasty were performed. In the first series, polyvinyl alcohol particles (150-250 microm [small], 355-500 microm [medium], 710-1000 microm [large]; 5 mg each) were injected into a silicone flow model simulating the aortic arch with a carotid bifurcation. The particles were injected proximally to the partially deployed MX stent or one of the following protection devices: Angioguard (AG), FilterWire EX (EX), Trap, Neuroshield (NS), or GuardWire Plus (GW). Particles evading the protection device were caught in a filter at the end of the flow model and weighed. In the second series, human plaque material (8-12 particles; total weight 6.09 +/- 0.01 mg; 500-1500 microm) was injected into the model with the respective devices. MX was compared with the AG, EX, Trap, and NS devices. MX had the most effective overall filtration performance for polyvinyl alcohol particles in the effluent of the internal carotid artery (ICA; 0.43 mg, 2.9%), compared with NS (0.53 mg, 3.5%), GW (1.10 mg, 7.0%), EX and AG (1.18 and 1.21 mg, respectively; 7.8% and 8.0%), and Trap (1.24 mg, 8.2%). MX performed best for the small particles (2.0% of particles passed into the ICA; P < .05 compared with all). Human plaque material was retained best in the in vitro model by MX (0.0%), followed by NS (0.8%), EX (1.3%), Trap (2.6%), and AG (4.4%). In vitro, none of the tested devices was able to prevent embolization completely. Comparing current designs, the MX device captured the highest percentage of the three different particle groups. Tested with human plaque emboli, MX performed effectively in filtering the particles in the ICA.
NASA Astrophysics Data System (ADS)
Luo, D. M.; Xie, Y.; Su, X. R.; Zhou, Y. L.
2018-01-01
Based on the four classical hyperelastic models, the Mooney-Rivlin (M-R), Yeoh, Ogden and Neo-Hookean (N-H) models, a large-deformation strain energy constitutive equation for rubber composites reinforced with randomly distributed ceramic particles is proposed in this paper from the perspective of continuum mechanics. By decoupling the interaction between the matrix and the random particles, the strain energy of each phase is obtained to derive an explicit constitutive equation for the rubber composites. The uniaxial tensile, pure shear and equibiaxial tensile tests are simulated by the non-linear finite element method on the ANSYS platform. The finite element results are compared with the experimental results, the material parameters are determined by fitting the results from the different test conditions, and the influence of the radius of the random ceramic particles on the effective mechanical properties is analyzed.
Tracking and people counting using Particle Filter Method
NASA Astrophysics Data System (ADS)
Sulistyaningrum, D. R.; Setiyono, B.; Rizky, M. S.
2018-03-01
In recent years, technology has developed rapidly, especially in the field of object tracking, and tracking becomes more challenging when the objects are people and their number is large. The purpose of this research is to apply the Particle Filter method to tracking and counting people in a given area. Tracking people is difficult when obstacles are present, one of which is occlusion. The stages of the tracking and people-counting scheme in this study include pre-processing, segmentation using a Gaussian Mixture Model (GMM), tracking using a particle filter, and counting based on centroids. The Particle Filter method uses the motion estimate included in the model. The test results show that tracking and people counting can be done well, with average accuracies of 89.33% and 77.33%, respectively, over six test videos. In the tracking process, the results are good under both partial occlusion and no occlusion.
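A minimal bootstrap particle filter for tracking a single 2-D centroid is sketched below to illustrate the predict / weight / resample cycle the preceding abstract relies on. The measurement model, noise levels and resampling rule are assumptions for illustration, not the authors' implementation (which also uses GMM segmentation and centroid-based counting).

```python
# Sketch only: bootstrap particle filter for one moving centroid.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, motion_std=5.0, meas_std=10.0):
    # predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: Gaussian likelihood of the measured centroid z
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # resample (multinomial) when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# toy usage: 200 particles tracking a centroid drifting to the right
particles = rng.normal([100.0, 100.0], 20.0, (200, 2))
weights = np.full(200, 1.0 / 200)
for t in range(10):
    z = np.array([100.0 + 3.0 * t, 100.0]) + rng.normal(0, 2.0, 2)
    particles, weights = particle_filter_step(particles, weights, z)
print(np.average(particles, axis=0, weights=weights))   # state estimate
```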
A class of ejecta transport test problems
NASA Astrophysics Data System (ADS)
Oro, David M.; Hammerberg, J. E.; Buttler, William T.; Mariam, Fesseha G.; Morris, Christopher L.; Rousculp, Chris; Stone, Joseph B.
2012-03-01
Hydrocode implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f0(m,u;t). Some properties of this source distribution function have been determined from Taylor- and supported-shockwave experiments. Such experiments measure the mass moment of f0 under vacuum conditions assuming weak particle-particle interactions and, usually, fully inelastic scattering (capture) of ejecta particles at piezoelectric diagnostic probes. Recently, planar ejection of W particles into vacuum, Ar, and Xe gas atmospheres has been carried out to provide benchmark data for transport model development and validation. We present those experimental results and compare them with modeled transport of the W-ejecta particles in Ar and Xe.
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
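As a rough illustration of the filtering idea in the preceding abstract (the precise operator and the dynamic procedure for the filter width are given in the paper, not here), an elliptic differential filter and the resulting subgrid velocity estimate can be written as

```latex
\begin{align}
  \overline{u}_i - \Delta_f^{\,2}\,\nabla^{2}\overline{u}_i = \widetilde{u}_i,
  \qquad
  u_i' \approx \widetilde{u}_i - \overline{u}_i,
\end{align}
```

where $\widetilde{u}_i$ is the resolved LES velocity, $\overline{u}_i$ the field obtained by inverting the elliptic operator, $\Delta_f$ the nominal filter width (the single parameter determined dynamically), and $u_i'$ the estimated subgrid-scale velocity seen by the particles.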
Kinetic model for the mechanical response of suspensions of sponge-like particles.
Hütter, Markus; Faber, Timo J; Wyss, Hans M
2012-01-01
A dynamic two-scale model is developed that describes the stationary and transient mechanical behavior of concentrated suspensions made of highly porous particles. In particular, we are interested in particles that not only deform elastically, but can also swell or shrink by taking up or expelling the viscous solvent from their interior, leading to rate-dependent deformability of the particles. The fine level of the model describes the evolution of the particle centers and their current sizes, while the shapes are at present not taken into account. The versatility of the model permits the inclusion of density- and temperature-dependent particle interactions and hydrodynamic interactions, as well as the incorporation of insight into the mechanisms of swelling and shrinking. The coarse level of the model is given in terms of macroscopic hydrodynamics. The two levels are mutually coupled, since the flow changes the particle configuration, while in turn the configuration gives rise to stress contributions that eventually determine the macroscopic mechanical properties of the suspension. Using a thermodynamic procedure for the model development, it is demonstrated that the driving forces for position change and for size change derive from the same potential energy. The model is translated into a form suitable for particle-based Brownian dynamics simulations for performing rheological tests. Various possibilities for connection with experiments, e.g. rheological and structural, are discussed.
NASA Technical Reports Server (NTRS)
Goguen, Jay D.
1993-01-01
To test the hypothesis that the independent scattering calculation widely used to model radiative transfer in atmospheres and clouds will give a useful approximation to the intensity and linear polarization of visible light scattered from an optically thick surface of transparent particles, laboratory measurements are compared to the independent scattering calculation for a surface of spherical particles with known optical constants and size distribution. Because the shape, size distribution, and optical constants of the particles are known, the independent scattering calculation is completely determined and the only remaining unknown is the net effect of the close packing of the particles in the laboratory sample surface...
NASA Astrophysics Data System (ADS)
Lee, Eon S.; Polidori, Andrea; Koch, Michael; Fine, Philip M.; Mehadi, Ahmed; Hammond, Donald; Wright, Jeffery N.; Miguel, Antonio. H.; Ayala, Alberto; Zhu, Yifang
2013-04-01
This study compares the instrumental performance of three TSI water-based condensation particle counter (WCPC) models measuring particle number concentrations in close proximity (15 m) to a major freeway that has a significant level of heavy-duty diesel traffic. The study focuses on examining instrument biases and performance differences among the WCPC models under realistic field operational conditions. Three TSI models (3781, 3783, and 3785) were operated for one month in triplicate (nine units in total) in parallel with two sets of Scanning Mobility Particle Sizer (SMPS) spectrometers for the concurrent measurement of particle size distributions. Inter-model bias under different wind directions was first evaluated using 1-min raw data. Although all three WCPC models agreed well under upwind conditions (lower particle number concentrations, in the range of 10^3-10^4 particles cm^-3), the three models' responses were significantly different under downwind conditions (higher particle number concentrations, above 10^4 particles cm^-3). In an effort to increase inter-model linear correlations, we evaluated the results of using longer averaging time intervals. An averaging time of at least 15 min was found to achieve R^2 values of 0.96 or higher when comparing all three models. Similar results were observed for intra-model comparisons for each of the three models. This strong linear relationship helped identify instrument bias related to particle number concentrations and particle size distributions. The TSI 3783 produced the highest particle counts, followed by the TSI 3785, which reported concentrations 11% lower than the TSI 3783 during downwind conditions. The TSI 3781 recorded particle number concentrations that were 24% lower than those observed by the TSI 3783 during downwind conditions. We found that the TSI 3781 underestimated particles with a count median diameter of less than 45 nm. Although the particle size dependence of instrument performance was most significant for the TSI 3781, models 3783 and 3785 also showed some size dependence. In addition, within each tested WCPC model, one unit was found to count significantly differently and to be more sensitive to particle size than the other two. Finally, exponential regression analysis was used to quantify instrumental inter-model bias. Correction equations are proposed to adjust the TSI 3781 and 3785 data to the most recent model, the TSI 3783.
1996-06-10
The dart and associated launching system was developed by engineers at MSFC to collect a sample of the aluminum oxide particles during the static fire testing of the Shuttle's solid rocket motor. The dart is launched through the exhaust and recovered post-test. The particles are collected on sticky copper tapes affixed to a cylindrical shaft in the dart. A protective sleeve draws over the tape after the sample is collected to prevent contamination. The sample is analyzed under a scanning electron microscope at high magnification and a particle size distribution is determined. This size distribution is input into the analytical model to predict the radiative heating rates from the motor exhaust. Good prediction models are essential to optimizing the development of the thermal protection system for the Shuttle.
NASA Technical Reports Server (NTRS)
Sapyta, Joe; Reid, Hank; Walton, Lew
1993-01-01
The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.
Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca
2012-07-15
Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits is replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is the result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
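To make the construction concrete, the combined distribution under the stated assumption (both stages Poisson) can be sketched as follows; the symbols are illustrative, with ν the mean number of particles entering the target and μ the mean number of hits per entering particle:

```latex
\begin{align}
  N \sim \mathrm{Poisson}(\nu), \qquad
  M = \sum_{i=1}^{N} K_i, \quad K_i \sim \mathrm{Poisson}(\mu), \\
  G_M(s) = \mathbb{E}\!\left[s^{M}\right]
         = \exp\!\left\{\nu\left(e^{\mu(s-1)}-1\right)\right\},
\end{align}
```

which is a compound (Neyman Type A) distribution rather than a Poisson distribution; in the classical multi-hit single-target picture, an n-hit target then survives with probability $S = P(M \le n-1)$ evaluated from this distribution.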
Shears, Tara
2012-02-28
The Standard Model is the theory used to describe the interactions between fundamental particles and fundamental forces. It is remarkably successful at predicting the outcome of particle physics experiments. However, the theory has not yet been completely verified. In particular, one of the most vital constituents, the Higgs boson, has not yet been observed. This paper describes the Standard Model, the experimental tests of the theory that have led to its acceptance and its shortcomings.
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
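For orientation, and as a sketch of the general construction rather than the specific phase-space integrators developed in this work, discretizing the action with a discrete Lagrangian $L_d(q_k, q_{k+1}) \approx \int_{t_k}^{t_{k+1}} L\,\mathrm{d}t$ and requiring stationarity of the discrete action sum yields the discrete Euler-Lagrange update

```latex
\begin{equation}
  D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0,
\end{equation}
```

where $D_1$ and $D_2$ denote derivatives with respect to the first and second arguments. Solving for $q_{k+1}$ defines a map that automatically preserves a discrete symplectic form, which is the property exploited here for long-time fidelity of test particle orbits; phase-space action principles lead, in general, to the multistep variants discussed in the abstract.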
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During the last decades, a few space experiments have revealed anomalous bursts of charged particles, mainly electrons with energies larger than a few MeV. A possible source of these bursts is low-frequency seismo-electromagnetic emission, which can cause the precipitation of electrons from the lower boundary of the inner radiation belt. Studies of these bursts have also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high-energy physics, we developed a dedicated application, SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 toolkit and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between an earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of testing different models of the interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction as well as shaping the area in which it takes place, assessing the effects of magnetic field perturbations on particle paths, performing back-tracking analysis, and modelling the interaction with electric fields. SEPS is at an advanced development stage, so that it could already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing from the position of the earthquake and checking whether there were paths compatible with the detected burst) and with a back-tracking analysis (tracing from the burst detection point and checking the compatibility with the position of the associated earthquake).
Experimental determination of the oral bioavailability and bioaccessibility of lead particles
2012-01-01
In vivo estimations of Pb particle bioavailability are costly and variable, because of the nature of animal assays. The most feasible alternative for increasing the number of investigations carried out on Pb particle bioavailability is in vitro testing. This testing method requires calibration using in vivo data on an adapted animal model, so that the results will be valid for childhood exposure assessment. Also, the test results must be reproducible within and between laboratories. The Relative Bioaccessibility Leaching Procedure, which is calibrated with in vivo data on soils, presents the highest degree of validation and simplicity. This method could be applied to Pb particles, including those in paint and dust, and those in drinking water systems, which although relevant, have been poorly investigated up to now for childhood exposure assessment. PMID:23173867
NASA Astrophysics Data System (ADS)
McCullough, R. R.; Jordon, J. B.; Brammer, A. T.; Manigandan, K.; Srivatsan, T. S.; Allison, P. G.; Rushing, T. W.
2014-01-01
In this paper, the use of a microstructure-sensitive fatigue model is put forth for the analysis of a discontinuously reinforced aluminum alloy metal matrix composite. The fatigue model was used for a ceramic particle-reinforced aluminum alloy deformed under conditions of fully reversed strain control. Experimental results revealed the aluminum alloy to be strongly influenced by the volume fraction of the particulate reinforcement phase under conditions of strain-controlled fatigue. The model characterizes the evolution of fatigue damage in this aluminum alloy composite into the distinct stages of crack initiation and crack growth culminating in failure. The model is able to capture the specific influence of particle volume fraction, particle size, and nearest-neighbor distance in quantifying fatigue life. The model's predictions correlate well with the experimental test results on the fatigue behavior of the chosen aluminum alloy for two different percentages of ceramic particle reinforcement. Further, the model illustrates that both particle size and volume fraction are key factors that govern fatigue lifetime. This conclusion is well supported by fractographic observations of the cyclically deformed and failed specimens.
This paper addresses the need for detailed chemical information on the fine particulate matter (PM2.5) generated by commercial aviation engines. The exhaust plumes of nine engine models were sampled during the three test campaigns of the Aircraft Particle Emissions eXperiment (AP...
Modeling study of deposition locations in the 291-Z plenum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahoney, L.A.; Glissmeyer, J.A.
The TEMPEST (Trent and Eyler 1991) and PART5 computer codes were used to predict the probable locations of particle deposition in the suction-side plenum of the 291-Z building, the exhaust fan building for the 234-5Z, 236-Z, and 232-Z buildings in the 200 Area of the Hanford Site. The TEMPEST code provided velocity fields for the airflow through the plenum. These velocity fields were then used with TEMPEST to model near-floor particle concentrations without particle sticking (100% resuspension). The same velocity fields were also used with PART5 to model particle deposition with sticking (0% resuspension). Some of the parameters whose importance was tested were particle size, point of injection, and exhaust fan configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Wei; DeCroix, David; Sun, Xin
The attrition of particles is a major industrial concern in many fluidization systems, as it can have undesired effects on product quality and on the reliable operation of process equipment. Therefore, to accommodate the screening and selection of catalysts for a specific process in fluidized beds, risers, or cyclone applications, their attrition propensity is usually estimated through jet cup attrition testing, in which the test material is subjected to high gas velocities in a jet cup. However, this method is far from perfect despite its popularity, largely due to its inconsistency across different testing set-ups. In order to better understand jet cup testing results as well as their sensitivity to different operating conditions, a coupled computational fluid dynamics (CFD) - discrete element method (DEM) model has been developed in the current study to investigate particle attrition in a jet cup and its dependence on various factors, e.g., jet velocity, initial particle size, particle density, and apparatus geometry.
Direct Numerical Simulation of dense particle-laden turbulent flows using immersed boundaries
NASA Astrophysics Data System (ADS)
Wang, Fan; Desjardins, Olivier
2009-11-01
Dense particle-laden turbulent flows play an important role in many engineering applications, ranging from pharmaceutical coating and chemical synthesis to fluidized bed reactors. Because of the complexity of the physics involved in these flows, current computational models for gas-particle processes, such as drag and heat transfer, rely on empirical correlations and have been shown to lack accuracy. In this work, direct numerical simulations (DNS) of dense particle-laden flows are conducted, using immersed boundaries (IB) to resolve the flow around each particle. First, the accuracy of the proposed approach is tested on a range of 2D and 3D flows at various Reynolds numbers, and resolution requirements are discussed. Then, various particle arrangements and number densities are simulated, the impact on particle wake interaction is assessed, and existing drag models are evaluated in the case of fixed particles. In addition, the impact of the particles on turbulence dissipation is investigated. Finally, a strategy for handling moving and colliding particles is discussed.
Particle Size Reduction in Geophysical Granular Flows: The Role of Rock Fragmentation
NASA Astrophysics Data System (ADS)
Bianchi, G.; Sklar, L. S.
2016-12-01
Particle size reduction in geophysical granular flows is caused by abrasion and fragmentation, and can affect transport dynamics by altering the particle size distribution. While the Sternberg equation is commonly used to predict the mean abrasion rate in the fluvial environment, and can also be applied to geophysical granular flows, predicting the evolution of the particle size distribution requires a better understanding of the controls on the rate of fragmentation and the size distribution of the resulting particle fragments. To address this knowledge gap we are using single-particle free-fall experiments to test for the influence of particle size, impact velocity, and rock properties on fragmentation and abrasion rates. Rock types tested include granodiorite, basalt, and serpentinite. Initial particle masses and drop heights range from 20 to 1000 grams and 0.1 to 3.0 meters, respectively. Preliminary results of the free-fall experiments suggest that the probability of fragmentation varies as a power function of kinetic energy on impact. The resulting size distributions of rock fragments can be collapsed by normalizing by initial particle mass, and can be fit with a generalized Pareto distribution. We apply the free-fall results to understand the evolution of granodiorite particle-size distributions in granular flow experiments using rotating drums ranging in diameter from 0.2 to 4.0 meters. In the drums, we find that the rates of silt production by abrasion and gravel production by fragmentation scale with drum size. To compare these rates with the free-fall results we estimate the particle impact frequency and velocity. We then use population balance equations to model the evolution of particle size distributions due to the combined effects of abrasion and fragmentation. Finally, we use the free-fall and drum experimental results to model particle size evolution in Inyo Creek, a steep, debris-flow dominated catchment, and compare model results to field measurements.
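For context, the Sternberg relation mentioned in the abstract above is commonly written as an exponential decay of particle mass with transport distance,

```latex
\begin{equation}
  m(x) = m_0\, e^{-\alpha x},
\end{equation}
```

where $m_0$ is the initial particle mass, $x$ the transport distance, and $\alpha$ an abrasion (diminution) coefficient; an equivalent form holds for particle diameter with coefficient $\alpha/3$ when shape is preserved. The free-fall and drum experiments described above supply the fragmentation statistics that this relation alone does not capture.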
Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty
NASA Astrophysics Data System (ADS)
Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team
2017-11-01
A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represent the particle drag history, CD(t), using polynomials up to third order. An analytical model for the continuous particle position history was derived by integrating an equation relating CD(t) to the particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the least-squares sense. A preliminary test with experimental data showed that the new method yields physically more reasonable particle velocity and acceleration histories than the conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.
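As a rough sketch of the fitting idea only: the abstract's analytic position model is replaced below by numerical integration of a generic quasi-steady 1-D drag law, and the gas properties, particle size and pulse timings are placeholder values, not the experimental ones. The polynomial coefficients of CD(t) are then recovered by least squares from eight "measured" positions.

```python
# Sketch only: least-squares fit of polynomial CD(t) to eight particle positions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rho_g, u_g, d_p, rho_p = 5.0, 150.0, 4e-6, 2500.0        # placeholder post-shock values
A = np.pi * d_p**2 / 4.0                                  # particle cross-section
m = rho_p * np.pi * d_p**3 / 6.0                          # particle mass

def position_model(coeffs, t_obs, x0=0.0, v0=0.0):
    cd = np.polynomial.Polynomial(coeffs)                 # CD(t) as a polynomial
    def rhs(t, y):
        x, v = y
        rel = u_g - v                                     # gas-particle relative velocity
        return [v, 0.5 * rho_g * cd(t) * A * np.abs(rel) * rel / m]
    sol = solve_ivp(rhs, (0.0, t_obs[-1]), [x0, v0], t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

# eight synthetic "measured" positions generated from a known quadratic CD(t)
t_obs = np.linspace(0.0, 40e-6, 8)
x_meas = position_model([1.2, -2e4, 1e8], t_obs)

fit = least_squares(lambda c: position_model(c, t_obs) - x_meas,
                    x0=[1.0, 0.0, 0.0], x_scale=[1.0, 1e4, 1e8])
print(fit.x)   # recovered CD(t) coefficients
```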
Artificial neural network based particle size prediction of polymeric nanoparticles.
Youshia, John; Ali, Mohamed Ehab; Lamprecht, Alf
2017-10-01
Particle size of nanoparticles and the respective polydispersity are key factors influencing their biopharmaceutical behavior in a large variety of therapeutic applications. Predicting these attributes would skip many preliminary studies usually required to optimize formulations. The aim was to build a mathematical model capable of predicting the particle size of polymeric nanoparticles produced from a pharmaceutical polymer of choice. The polymer properties controlling the particle size were identified as molecular weight, hydrophobicity and surface activity, and were quantified by measuring polymer viscosity, contact angle and interfacial tension, respectively. A model was built using an artificial neural network with these properties as input and particle size and polydispersity index as output. The established model successfully predicted the particle size of nanoparticles covering a range of 70-400 nm prepared from other polymers. The percentage bias for particle size prediction was 2%, 4% and 6% for the training, validation and testing data, respectively. Polymer surface activity was found to have the highest impact on the particle size, followed by viscosity and finally hydrophobicity. The results of this study successfully highlighted the polymer properties affecting particle size and confirmed the usefulness of artificial neural networks in predicting the particle size and polydispersity of polymeric nanoparticles.
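A minimal sketch of the model structure described above follows; the network architecture, preprocessing and the synthetic data are assumptions for illustration, not the authors' trained network or measurements.

```python
# Sketch only: small feed-forward network mapping (viscosity, contact angle,
# interfacial tension) to (particle size, polydispersity index).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# synthetic inputs: viscosity (mPa s), contact angle (deg), interfacial tension (mN/m)
X = rng.uniform([1.0, 20.0, 1.0], [50.0, 90.0, 40.0], size=(60, 3))
# synthetic outputs: size (nm) and PDI, loosely trending with the inputs
y = np.column_stack([
    70 + 4.0 * X[:, 0] + 0.5 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 5, 60),
    0.05 + 0.002 * X[:, 0] + rng.normal(0, 0.01, 60),
])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X[:45], y[:45])                    # train on 45 formulations
print(model.score(X[45:], y[45:]))           # R^2 on the held-out formulations
```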
A Core-Particle Model for Periodically Focused Ion Beams with Intense Space-Charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lund, S M; Barnard, J J; Bukh, B
2006-08-02
A core-particle model is derived to analyze transverse orbits of test particles evolving in the presence of a core ion beam described by the KV distribution. The core beam has uniform density within an elliptical cross-section and can be applied to model both quadrupole- and solenoidally focused beams in periodic or aperiodic lattices. Efficient analytical descriptions of the electrostatic space-charge fields external to the beam core are derived to simplify the model equations. Image charge effects are analyzed for an elliptical beam centered in a round, conducting pipe to estimate model corrections resulting from image charge nonlinearities. Transformations are employed to remove the coherent flutter motion associated with oscillations of the ion beam core due to rapidly varying, linear applied focusing forces. Diagnostics for particle trajectories, Poincaré phase-space projections, and single-particle emittances based on these transformations better illustrate the effects of nonlinear forces acting on particles evolving outside the core. A numerical code has been written based on this model. Example applications illustrate the model's characteristics. The core-particle model described has recently been applied to identify physical processes leading to space-charge transport limits for an rms matched beam in a periodic quadrupole focusing channel [Lund and Chawla, Nucl. Instr. and Meth. A 561, 203 (2006)]. Further characteristics of these processes are presented here.
CFD Modelling of Particle Mixtures in a 2D CFB
NASA Astrophysics Data System (ADS)
Seppälä, M.; Kallio, S.
The capability of Fluent 6.2.16 to simulate particle mixtures in a laboratory-scale 2D circulating fluidized bed (CFB) unit has been tested. In the simulations, the solids were described as one or two particle phases. The loading ratio of small to large particles, the particle diameters and the gas inflow velocity were varied. The 40 cm wide and 3 m high 2D CFB was modeled using a grid with 31080 cells. The outflow of particles at the top of the CFB was monitored and the escaping particles were fed back to the riser through a return duct. The paper presents the segregation patterns of the particle phases obtained from the simulations. When the fraction of large particles was 50% or larger, the large particles segregated, as expected, to the wall regions and to the bottom part of the riser. However, when the fraction of large particles was 10%, an excess of large particles was found in the upper half of the riser. The explanation for this unexpected phenomenon was found in the distribution of the large particles between the slow clusters and the faster-moving lean suspension.
A Constitutive Relationship for Gravelly Soil Considering Fine Particle Suffusion
Zhang, Yuning; Chen, Yulong
2017-01-01
Suffusion erosion may occur in sandy gravel dam foundations that use suspended cutoff walls. This erosion causes a loss of fine particles, degrades the soil strength and deformation moduli, and adversely impacts the cutoff walls of the dam foundation, as well as the overlying dam body. A comprehensive evaluation of these effects requires models that quantitatively describe the effects of fine-particle losses on the stress-strain relationships of sandy gravels. In this work, we propose an experimental scheme for studying these types of models, and then perform triaxial and confined compression tests to determine the effects of particle losses on the stress-strain relationships. Based on the Duncan-Chang E-B model, quantitative expressions describing the relationship between the parameters of the model and the particle losses were derived. The results show that particle losses did not alter the qualitative stress-strain characteristics of the soils; however, the soil strength and deformation moduli were degraded. By establishing the relationship between the model parameters and the losses, the same model can then be used to describe the behavior of sandy gravels with erosion levels that vary in both time and space. PMID:29065532
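For reference, the Duncan-Chang E-B model referred to above is usually written with a stress-dependent tangent modulus and bulk modulus; the notation here is the standard one, while the particle-loss-dependent expressions for the parameters are developed in the paper itself:

```latex
\begin{align}
  E_t &= K\, p_a \left(\frac{\sigma_3}{p_a}\right)^{n}
        \left[1 - \frac{R_f\,(1-\sin\varphi)(\sigma_1-\sigma_3)}
                       {2c\cos\varphi + 2\sigma_3\sin\varphi}\right]^{2}, \\
  B   &= K_b\, p_a \left(\frac{\sigma_3}{p_a}\right)^{m},
\end{align}
```

where $p_a$ is atmospheric pressure, $c$ and $\varphi$ are the strength parameters, $R_f$ is the failure ratio, and $K$, $n$, $K_b$, $m$ are fitting constants; it is these parameters that the study expresses as functions of the fine-particle loss.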
AMS-02 Cryocooler Baseline Configuration and Engineering Model Qualification Test Results
NASA Technical Reports Server (NTRS)
Banks, Stuart; Breon, Susan; Shirey, Kimberly
2003-01-01
Four Sunpower M87N Stirling-cycle cryocoolers will be used to extend the lifetime of the Alpha Magnetic Spectrometer-02 (AMS-02) experiment. The cryocoolers will be mounted to the AMS-02 vacuum case using a structure that will thermally and mechanically decouple the cryocooler from the vacuum case while providing compliance to allow force attenuation using a passive balancer system. The cryocooler drive is implemented using a 60 Hz pulse-duration-modulated square wave. Details of the testing program, mounting assembly and drive scheme will be presented. AMS-02 is a state-of-the-art particle physics detector containing a large superfluid helium-cooled superconducting magnet. Highly sensitive detector plates inside the magnet measure a particle's speed, momentum, charge, and path. The AMS-02 experiment, which will be flown as an attached payload on the International Space Station, will study the properties and origin of cosmic particles and nuclei including antimatter and dark matter. Two engineering model cryocoolers have been under test at NASA Goddard since November 2001. Qualification testing of the engineering model cryocooler bracket assembly is near completion. Delivery of the flight cryocoolers to Goddard is scheduled for September 2003.
Orion Exploration Flight Test Post-Flight Inspection and Analysis
NASA Technical Reports Server (NTRS)
Miller, J. E.; Berger, E. L.; Bohl, W. E.; Christiansen, E. L.; Davis, B. A.; Deighton, K. D.; Enriquez, P. A.; Garcia, M. A.; Hyde, J. L.; Oliveras, O. M.
2017-01-01
The principal mechanism for developing orbital debris environment models is to make observations of larger pieces of debris, in the range of several centimeters and greater, using radar and optical techniques. For particles smaller than this threshold, models of particle breakup and migration to returned surfaces in lower orbit are relied upon to quantify the flux. This reliance on models to derive spatial densities of particles that are of critical importance to spacecraft makes the returned EFT-1 surface a uniquely valuable data point. To this end, detailed post-flight inspections have been performed of the returned EFT-1 backshell, and the inspections identified six candidate impact sites that were not present during the pre-flight inspections. This paper describes the post-flight analysis efforts to characterize the EFT-1 mission craters. This effort included ground-based testing to understand small-particle impact craters in the thermal protection material, the pre- and post-flight inspections, the crater analysis using optical, X-ray computed tomography (CT) and scanning electron microscope (SEM) techniques, and numerical simulations.
AGR-3/4 Irradiation Test Predictions using PARFUME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skerjanc, William Frances; Collin, Blaise Paul
2016-03-01
PARFUME, a fuel performance modeling code used for high-temperature gas reactors, was used to model the AGR-3/4 irradiation test using as-run physics and thermal hydraulics data. The AGR-3/4 test is the combined third and fourth planned irradiations of the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. The AGR-3/4 test train consists of twelve separate, independently controlled and monitored capsules. Each capsule contains four compacts filled with both uranium oxycarbide (UCO) unaltered "driver" fuel particles and UCO designed-to-fail (DTF) fuel particles. The DTF fraction was specified to be 1×10^-2. This report documents the calculations performed to predict the failure probability of TRISO-coated fuel particles during the AGR-3/4 experiment. In addition, this report documents the calculated source term from both the driver fuel and DTF particles. The calculations include the modeling of the AGR-3/4 irradiation that occurred from December 2011 to April 2014 in the Advanced Test Reactor (ATR) over a total of ten ATR cycles, including seven normal cycles, one low-power cycle, one unplanned outage cycle, and one Power Axial Locator Mechanism cycle. Results show that failure probabilities are predicted to be low, resulting in zero fuel particle failures per capsule. The primary fuel particle failure mechanism occurred as a result of localized stresses induced by the calculated IPyC cracking. Assuming 1,872 driver fuel particles per compact, the failure probability calculated by PARFUME leads to no predicted particle failure in the AGR-3/4 driver fuel. In addition, the release fractions of the fission products Ag, Cs, and Sr were calculated to vary depending on capsule location and irradiation temperature. The maximum release fraction of Ag occurs in Capsule 7, reaching up to 56% for the driver fuel and 100% for the DTF fuel. The release fractions of the other two fission products, Cs and Sr, are much smaller and in most cases less than 1% for the driver fuel. The notable exception occurs in Capsule 7, where the release fractions for Cs and Sr reach up to 0.73% and 2.4%, respectively, for the driver fuel. For the DTF fuel in Capsule 7, the release fractions for Cs and Sr are estimated to be 100% and 5%, respectively.
A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling
NASA Astrophysics Data System (ADS)
Moore, Chandler; Akiki, Georges; Balachandar, S.
2017-11-01
This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using additional DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
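The superposition idea in the preceding abstract can be sketched as follows; the feature set, the stand-in physics function and the choice of regressor are assumptions for illustration, not the authors' actual PIEP formulation or training data.

```python
# Sketch only: train a regressor on the residual between "DNS" forces and a
# physics-based prediction, then superimpose the two to form the hybrid model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def physics_force(features):
    # placeholder for the physical pairwise-interaction (PIEP) prediction
    return 0.8 * features[:, 0] - 0.1 * features[:, 1]

rng = np.random.default_rng(2)
features = rng.normal(size=(500, 3))                     # e.g. neighbour geometry, Re, volume fraction
f_dns = physics_force(features) + 0.3 * np.tanh(features[:, 2])   # stand-in for DNS force data

residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
residual_model.fit(features, f_dns - physics_force(features))     # learn what the physics misses

def hybrid_force(new_features):
    return physics_force(new_features) + residual_model.predict(new_features)

print(hybrid_force(rng.normal(size=(3, 3))))
```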
Design and Qualification of the AMS-02 Flight Cryocoolers
NASA Technical Reports Server (NTRS)
Shirey, Kimberly; Banks, Stuart; Boyle, Rob; Unger, Reuven
2005-01-01
Four commercial Sunpower M87N Stirling-cycle cryocoolers will be used to extend the lifetime of the Alpha Magnetic Spectrometer-02 (AMS-02) experiment. The cryocoolers will be mounted to the AMS-02 vacuum case using a structure that will thermally and mechanically decouple the cryocooler from the vacuum case. This paper discusses modifications of the Sunpower M87N cryocooler to make it acceptable for space flight applications and suitable for use on AMS-02. Details of the flight model qualification test program are presented. AMS-02 is a state-of-the-art particle physics detector containing a large superfluid helium-cooled superconducting magnet. Highly sensitive detector plates inside the magnet measure a particle's speed, mass, charge, and direction. The AMS-02 experiment, which will be flown as an attached payload on the International Space Station, will study the properties and origin of cosmic particles and nuclei including antimatter and dark matter. Two engineering model cryocoolers have been under test at NASA Goddard since November 2001. Qualification testing of the engineering model cryocooler bracket assembly, including random vibration and thermal vacuum testing, was completed at the end of April 2005. The flight cryocoolers were received in December 2003. Acceptance testing of the flight cryocooler bracket assemblies began in May 2005.
Moroz, Brian E; Beck, Harold L; Bouville, André; Simon, Steven L
2010-08-01
The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of simulations of the deposition of fallout from atmospheric nuclear tests conducted in the Marshall Islands (mid-Pacific), at the Nevada Test Site (U.S.), and at the Semipalatinsk Nuclear Test Site (Kazakhstan) were performed. The results of the Marshall Islands simulations were used in a limited fashion to support the dose reconstruction described in companion papers within this volume.
NASA Technical Reports Server (NTRS)
Trolinger, James D.; Rangel, Roger; Witherow, William; Rogers, Jan; Lal, Ravindra B.
1999-01-01
A need exists for understanding precisely how particles move and interact in a fluid in the absence of gravity. Such understanding is required, for example, for modeling and predicting crystal growth in space, where crystals grow from solution around nucleation sites, as well as for any study of particles or bubbles in liquids or for experiments where particles are used as tracers for mapping microconvection. We have produced an exact solution to the general equation of motion of particles at extremely low Reynolds number in microgravity that covers a wide range of interesting conditions. We have also developed diagnostic tools and experimental techniques to test the validity of the general equation. This program, which started in May 1998, will produce the flight definition for an experiment in the microgravity environment of space to validate the theoretical model. We will design an experiment, with the help of the theoretical model, that is optimized for testing the model and for measuring g, g-jitter, and other microgravity phenomena. This paper describes the goals, rationale, and approach for the flight definition program. The first objective of this research is to understand the physics of particle interactions with fluids and other particles in low Reynolds number flows in microgravity. Secondary objectives are to (1) observe and quantify g-jitter effects and microconvection on particles in fluids, (2) validate an exact solution to the general equation of motion of a particle in a fluid, and (3) characterize the ability of isolation tables to isolate experiments containing particles in liquids. The objectives will be achieved by recording a large number of holograms of particle fields in microgravity under controlled conditions, extracting the precise three-dimensional positions of all of the particles as a function of time, and examining the effects of all parameters on the motion of the particles. The feasibility of achieving these results has already been established in the ongoing ground-based NRA, which led to the "virtual spaceflight chamber" concept.
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles within a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value from the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) at the Japan Agency for Marine-Earth Science and Technology.
Modeling of brittle-viscous flow using discrete particles
NASA Astrophysics Data System (ADS)
Thordén Haug, Øystein; Barabasch, Jessica; Virgo, Simon; Souche, Alban; Galland, Olivier; Mair, Karen; Abe, Steffen; Urai, Janos L.
2017-04-01
Many geological processes involve both viscous flow and brittle fracturing, e.g. boudinage, folding and magmatic intrusions. Numerical modeling of such viscous-brittle materials poses challenges: one has to account for the discrete fracturing, the continuous viscous flow, the coupling between them, and the potential pressure dependence of the flow. The Discrete Element Method (DEM) is a numerical technique widely used for studying the fracture of geomaterials. However, the implementation of viscous fluid flow in discrete element models is not trivial. In this study, we model quasi-viscous fluid flow behavior using the Esys-Particle software (Abe et al., 2004). We build on the methodology of Abe and Urai (2012), in which a combination of elastic repulsion and dashpot interactions between the discrete particles is implemented. Several benchmarks are presented to illustrate the material properties. Here, we present extensive, systematic material tests to characterize the rheology of quasi-viscous DEM particle packings. We present two tests: a simple shear test and a channel flow test, both in 2D and 3D. In the simple shear tests, simulations were performed in a box where the upper wall is moved with a constant velocity in the x-direction, causing shear deformation of the particle assemblage. Here, the boundary conditions are periodic on the sides, with constant forces on the upper and lower walls. In the channel flow tests, a piston pushes a sample through a channel by Poiseuille flow. For both setups, we present the resulting stress-strain relationships over a range of material parameters, confining stresses and strain rates. Results show a power-law dependence between stress and strain rate, with a non-linear dependence on the confining force. The material is strain softening under some conditions. Additionally, the volumetric strain can be dilatant or compactant, depending on porosity, confining pressure and strain rate. The constitutive relations are implemented in a way that limits the range of viscosities: for identical pressure and strain rate, an order of magnitude range in viscosity can be investigated. The extensive material testing indicates that DEM particles interacting through a combination of elastic repulsion and dashpots can be used to model viscous flows. This allows us to exploit the fracturing capabilities of the discrete element method and study systems that involve both viscous flow and brittle fracturing. However, the small viscosity range achievable using this approach does constrain the applicability for systems where larger viscosity ranges are required, such as folding of viscous layers of contrasting viscosities. References: Abe, S., Place, D., & Mora, P. (2004). A parallel implementation of the lattice solid model for the simulation of rock mechanics and earthquake dynamics. Pure and Applied Geophysics, 161(11-12), 2265-2277. doi:10.1007/s00024-004-2562-x. Abe, S., & Urai, J. L. (2012). Discrete element modeling of boudinage: Insights on rock rheology, matrix flow, and evolution of geometry. Journal of Geophysical Research, 117, B01407. doi:10.1029/2011JB00855
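A minimal sketch of the pair interaction this approach builds on, a linear elastic repulsion on overlap combined with a dashpot on the normal relative velocity, is given below. The stiffness and damping values are placeholders, and the snippet is not taken from the Esys-Particle implementation.

```python
# Sketch only: elastic repulsion plus dashpot damping between two spheres.
import numpy as np

def pair_force(x_i, x_j, v_i, v_j, r_i, r_j, k=1.0e6, c=50.0):
    """Force on particle i from particle j (zero if not in contact)."""
    d_vec = x_i - x_j
    dist = np.linalg.norm(d_vec)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d_vec / dist                          # contact normal, pointing towards i
    v_n = np.dot(v_i - v_j, n)                # normal relative velocity
    return (k * overlap - c * v_n) * n        # elastic repulsion + dashpot

# toy usage: two slightly overlapping 1 mm particles approaching each other
print(pair_force(np.array([0.0, 0.0, 0.0]), np.array([0.0019, 0.0, 0.0]),
                 np.array([0.0, 0.0, 0.0]), np.array([-0.01, 0.0, 0.0]),
                 1e-3, 1e-3))
```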
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zevenhoven, C.A.P.; Yrjas, K.P.; Hupa, M.M.
1996-03-01
The physical structure of a limestone or dolomite to be used in in-bed sulfur capture in fluidized bed gasifiers has a great impact on the efficiency of sulfur capture and sorbent use. In this study an unreacted shrinking core model with variable effective diffusivity is applied to sulfidation test data from a pressurized thermogravimetric apparatus (P-TGA) for a set of physically and chemically different limestone and dolomite samples. The particle size was 250-300 µm for all sorbents, which were characterized by chemical composition analysis, particle density measurement, mercury porosimetry, and BET internal surface measurement. Tests were done under typical conditions for a pressurized fluidized-bed gasifier, i.e., 20% CO2, 950 °C, 20 bar. At these conditions the limestone remains uncalcined, while the dolomite is half-calcined. Additional tests were done at low CO2 partial pressures, yielding calcined limestone and fully calcined dolomite. The generalized model allows for determination of values for the initial reaction rate and product layer diffusivity.
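For reference, the classic unreacted shrinking-core relation for product-layer diffusion control in a spherical grain, which the variable-diffusivity model described above generalizes, is the standard textbook form (symbols chosen here for illustration, not taken from the paper):

```latex
% Shrinking-core model, product-layer diffusion control, constant D_e:
\frac{t}{\tau} = 1 - 3\,(1 - X)^{2/3} + 2\,(1 - X),
\qquad
\tau = \frac{\rho_B R^{2}}{6\, b\, D_e\, C_{A,b}}
```

Here X is the sorbent conversion, R the grain radius, rho_B the molar density of the solid reactant, b the stoichiometric coefficient, D_e the effective product-layer diffusivity and C_{A,b} the bulk concentration of the gaseous reactant; the generalized model replaces the constant D_e with a conversion-dependent diffusivity.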
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions and that the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes.
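The LS-SVM regression underlying such a model reduces to solving a single linear system in the dual variables. A minimal Python/NumPy sketch is shown below, assuming a Gaussian kernel; the ultrasonic feature extraction and the hyperparameter tuning described above are not reproduced, and the function names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the standard LS-SVM regression linear system for (alpha, b)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma    # regularized kernel block
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]               # dual weights alpha, bias b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Predict concentrations for new feature vectors."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

In this setting X would hold the selected ultrasonic signal features and y the known particle concentrations of the calibration samples.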
Single particle analysis based on Zernike phase contrast transmission electron microscopy.
Danev, Radostin; Nagayama, Kuniaki
2008-02-01
We present the first application of Zernike phase-contrast transmission electron microscopy to single-particle 3D reconstruction of a protein, using GroEL chaperonin as the test specimen. We evaluated the performance of the technique by comparing 3D models derived from Zernike phase contrast imaging with models from conventional underfocus phase contrast imaging. The same resolution, about 12 Å, was achieved by both imaging methods. The reconstruction based on Zernike phase contrast data required about 30% fewer particles. The advantages and prospects of each technique are discussed.
Van den Heuvel, Frank
2014-01-01
Purpose: To present a closed formalism for calculating charged-particle radiation damage induced in DNA. The formalism is valid for all types of charged particles and, due to its closed nature, is suited to provide fast conversion of dose to DNA damage. Methods: The induction of double strand breaks in DNA strings residing in irradiated cells is quantified using a single-particle model. This leads to a proposal to use the cumulative Cauchy distribution to express the mix of high- and low-LET type damage probability generated by a single particle. A microscopic phenomenological Monte Carlo code is used to fit the parameters of the model as a function of kinetic energy related to the damage to a DNA molecule embedded in a cell. The model is applied to four particles: electrons, protons, alpha particles, and carbon ions. A geometric interpretation of this observation, using the impact ionization mean free path as a quantifier, allows extension of the model to very low energies. Results: The mathematical expression describes the model adequately according to a chi-square test. This applies to all particle types, with an almost perfect fit for protons, while the other particles show some discrepancies at very low energies. The implementation calculating a strict version of the RBE based on complex damage alone is corroborated by experimental data from the measured RBE. The geometric interpretation generates a unique dimensionless parameter for each type of charged particle. In addition, it predicts a distribution of DNA damage which is different from that of current models. PMID:25340636
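The cumulative Cauchy (Lorentz) distribution invoked above has the standard form below; the location and scale parameters (written here as x_0 and gamma, notation assumed for illustration) are the quantities fitted per particle species from the Monte Carlo calculations as a function of kinetic energy.

```latex
% Cumulative Cauchy distribution used as the high/low-LET damage-mix function:
P(x) \;=\; \frac{1}{\pi}\,\arctan\!\left(\frac{x - x_0}{\gamma}\right) \;+\; \frac{1}{2}
```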
Use of mucolytics to enhance magnetic particle retention at a model airway surface
NASA Astrophysics Data System (ADS)
Ally, Javed; Roa, Wilson; Amirfazli, A.
A previous study has shown that retention of magnetic particles at a model airway surface requires prohibitively strong magnetic fields. As mucus viscoelasticity is the most significant factor contributing to clearance of magnetic particles from the airway surface, mucolytics are considered in this study to reduce mucus viscoelasticity and enable particle retention with moderate strength magnetic fields. The excised frog palate model was used to simulate the airway surface. Two mucolytics, N-acetylcysteine (NAC) and dextran sulfate (DS) were tested. NAC was found to enable retention at moderate field values (148 mT with a gradient of 10.2 T/m), whereas DS was found to be effective only for sufficiently large particle concentrations at the airway surface. The possible mechanisms for the observed behavior with different mucolytics are also discussed based on aggregate formation and the loading of cilia.
Sawakuchi, Gabriel O; Yukihara, Eduardo G
2012-01-21
The objective of this work is to test analytical models to calculate the luminescence efficiency of Al(2)O(3):C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental data on the relative luminescence efficiency of Al(2)O(3):C OSLDs for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV µm(-1). In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.
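In track structure models of this kind, the relative luminescence efficiency is commonly computed by folding the radial dose distribution with the detector's low-LET dose response; a generic form is given below (notation assumed for illustration, not necessarily the paper's exact expression).

```latex
% Relative luminescence efficiency of a heavy charged particle (HCP) with
% respect to the reference (gamma) radiation, in a generic track-structure form:
\eta_{\mathrm{HCP}/\gamma}
  \;=\;
  \frac{\displaystyle\int_{0}^{r_{\max}} f\!\big(D(r)\big)\, D(r)\, 2\pi r \, dr}
       {\displaystyle\int_{0}^{r_{\max}} D(r)\, 2\pi r \, dr}
```

Here D(r) is the radial dose distribution around the ion track and f(D) the dose-normalized luminescence response of the detector to the reference radiation; the seven models compared above differ in the functional form assumed for D(r).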
Estimating degradation-related settlement in two landfill-reclaimed soils by sand-salt analogues.
McDougall, J R; Fleming, I R; Thiel, R; Dewaele, P; Parker, D; Kelly, D
2018-04-25
Landfill reclaimed soil here refers to largely degraded materials excavated from old landfill sites, which after processing can be reinstated as more competent fill, thereby restoring the former landfill space. The success of the process depends on the presence of remaining degradable particles and their influence on settlement. Tests on salt-sand mixtures, from which the salt is removed, have been used to quantify the impact of particle loss on settlement. Where the amount of particle loss is small, say 10% by mass or less, settlements are small and apparently independent of lost particle size. A conceptual model is presented to explain this behaviour in terms of nestling particles and strong force chains. At higher percentages of lost particles, greater rates of settlement together with some sensitivity to particle size were observed. The conceptual model was then applied to two landfill reclaimed soils, the long-term settlements of which were found to be consistent with the conceptual model suggesting that knowledge of particle content and relative size are sufficient to estimate the influence of degradable particles in landfill reclaimed soils.
Variation that can be expected when using particle tracking models in connectivity studies
NASA Astrophysics Data System (ADS)
Hufnagl, Marc; Payne, Mark; Lacroix, Geneviève; Bolle, Loes J.; Daewel, Ute; Dickey-Collas, Mark; Gerkema, Theo; Huret, Martin; Janssen, Frank; Kreus, Markus; Pätsch, Johannes; Pohlmann, Thomas; Ruardij, Piet; Schrum, Corinna; Skogen, Morten D.; Tiessen, Meinard C. H.; Petitgas, Pierre; van Beek, Jan K. L.; van der Veer, Henk W.; Callies, Ulrich
2017-09-01
Hydrodynamic ocean circulation models and Lagrangian particle tracking models are valuable tools in coastal ecology, e.g. for identifying the connectivity between offshore spawning and coastal nursery areas of commercially important fish, for risk assessment, and for defining or evaluating marine protected areas. Most studies are based on only one model and do not provide levels of uncertainty. Here this uncertainty was addressed by applying a suite of 11 North Sea models to test what variability can be expected in connectivity estimates. Different notional test cases were calculated, related to three important and well-studied North Sea fish species: herring (Clupea harengus) and the flatfishes sole (Solea solea) and plaice (Pleuronectes platessa). For sole and plaice we determined which fraction of particles released in the respective spawning areas would reach a coastal marine protected area. For herring we determined the fraction located in a wind park after a predefined time span. As temperature is an increasing focus in biological and global change studies, inter-model variability in the temperatures experienced by the virtual particles was also determined. The main focus was on the transport variability originating from the physical models, and thus biological behavior was not included. Depending on the scenario, median experienced temperatures differed by 3 °C between years. The range between the different models in one year was comparable to this temperature range observed between modelled years. Connectivity between flatfish spawning areas and the coastal protected area was highly dependent on the release location and spawning time. No particles released in the English Channel in the sole scenario reached the protected area, while up to 20% of the particles released in the plaice scenario did. Interannual trends in transport directions and connectivity rates were comparable between models, but absolute values displayed high variation. Most models showed systematic biases during all years in comparison to the ensemble median, indicating that interannual variation was generally represented but absolute values varied. In conclusion, variability between models is generally high; management decisions or scientific analyses based on absolute values from a single model might therefore be biased, and results or conclusions drawn from such studies need to be treated with caution. We further conclude that more true validation data for particle modelling are required.
Implementation and Refinement of a Comprehensive Model for Dense Granular Flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundaresan, Sankaran
2015-09-30
Dense granular flows are ubiquitous in both natural and industrial processes. They manifest three different flow regimes, each exhibiting its own dependence on solids volume fraction, shear rate, and particle-level properties. This research project sought to develop continuum rheological models for dense granular flows that bridge multiple regimes of flow, implement them in open-source platforms for gas-particle flows and perform test simulations. The first phase of the research covered in this project involved implementation of a steady-shear rheological model that bridges quasi-static, intermediate and inertial regimes of flow into MFIX (Multiphase Flow with Interphase eXchanges - a general purpose computer code developed at the National Energy Technology Laboratory). MFIX simulations of dense granular flows in an hourglass-shaped hopper were then performed as test examples. The second phase focused on formulation of a modified kinetic theory for frictional particles that can be used over a wider range of particle volume fractions and also apply for dynamic, multi-dimensional flow conditions. To guide this work, simulations of simple shear flows of identical mono-disperse spheres were also performed using the discrete element method. The third phase of this project sought to develop and implement a more rigorous treatment of boundary effects. Towards this end, simulations of simple shear flows of identical mono-disperse spheres confined between parallel plates were performed and analyzed to formulate compact wall boundary conditions that can be used for dense frictional flows at flat frictional boundaries. The fourth phase explored the role of modest levels of cohesive interactions between particles on the dense phase rheology. The final phase of this project focused on implementation and testing of the modified kinetic theory in MFIX and running bin-discharge simulations as test examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Po-Yen; Chen, Liu; Institute for Fusion Theory and Simulation, Zhejiang University, 310027 Hangzhou
2015-09-15
The thermal relaxation time of a one-dimensional plasma has been demonstrated to scale with N_D^2 due to discrete particle effects by collisionless particle-in-cell (PIC) simulations, where N_D is the particle number in a Debye length. The N_D^2 scaling is consistent with the theoretical analysis based on the Balescu-Lenard-Landau kinetic equation. However, it was found that the thermal relaxation time is anomalously shortened to scale with N_D when the Krook type collision model is externally introduced in the one-dimensional electrostatic PIC simulation. In order to understand the discrete particle effects enhanced by the Krook type collision model, the superposition principle of dressed test particles was applied to derive the modified Balescu-Lenard-Landau kinetic equation. The theoretical results are shown to be in good agreement with the simulation results when the collisional effects dominate the plasma system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
REBOUND-ing Off Asteroids: An N-body Particle Model for Ejecta Dynamics on Small Bodies
NASA Astrophysics Data System (ADS)
Larson, Jennifer; Sarid, Gal
2017-10-01
Here we describe our numerical approach to model the evolution of ejecta clouds. Modeling with an N-body particle method enables us to study the micro-dynamics while varying the particle size distribution. A hydrodynamic approach loses many of the fine particle-particle interactions included in the N-body particle approach (Artemieva 2008). We use REBOUND, an N-body integration package (Rein et al. 2012) developed to model various dynamical systems (planetary orbits, ring systems, etc.) with high resolution calculations at a lower performance cost than other N-body integrators (Rein & Tamayo 2017). It offers both symplectic (WHFast) and non-symplectic (IAS15) methods (Rein & Spiegel 2014, Rein & Tamayo 2015). We primarily use the IAS15 integrator due to its robustness and accuracy with short interaction distances and non-conservative forces. We implemented a wrapper (developed in Python) to handle changes in time step and integrator at different stages of ejecta particle evolution. To set up the system, each particle is given a velocity away from the target body’s surface at a given angle within a defined ejecta cone. We study the ejecta cloud evolution beginning immediately after an impact rather than the actual impact itself. This model considers effects such as varying particle size distribution, radiation pressure, perturbations from a binary component, particle-particle collisions and non-axisymmetric gravity of the target body. Restrictions on the boundaries of the target body’s surface define the physical shape and help count the number of particles that land on the target body. Later, we will build the central body from individual particles to allow for a wider variety of target body shapes and topographies. With our particle modeling approach, individual particle trajectories are tracked and predicted on short, medium and long timescales. Our approach will be applied to modeling of the ejecta cloud produced during the Double Asteroid Redirection Test (DART) impact (Cheng et al. 2016, Schwartz et al. 2016). We will present some preliminary results of our applied model and possible applications to other asteroid impact events and Centaur ring formation mechanisms.
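A minimal REBOUND setup along these lines is sketched below in Python, using the IAS15 integrator and massless test particles launched within a cone from a single surface point. The target mass, radius, cone angle and ejection speeds are illustrative placeholders, and the additional physics listed above (radiation pressure, binary perturbations, collisions, non-axisymmetric gravity) is omitted.

```python
import numpy as np
import rebound

R_TARGET = 400.0                      # m, illustrative target radius
sim = rebound.Simulation()
sim.integrator = "ias15"              # robust for close encounters and short timescales
sim.G = 6.674e-11                     # work in SI units
sim.add(m=5.0e11, r=R_TARGET)         # central target body (illustrative mass)

rng = np.random.default_rng(0)
for _ in range(200):                  # massless ejecta launched from the impact site at (+R, 0, 0)
    half_angle = np.radians(45.0)     # ejecta cone half-angle about the local +x normal
    phi = rng.uniform(0.0, 2.0 * np.pi)
    psi = rng.uniform(0.0, half_angle)
    speed = rng.uniform(0.05, 0.5)    # m/s, mostly below the escape speed of this toy body
    v = speed * np.array([np.cos(psi),
                          np.sin(psi) * np.cos(phi),
                          np.sin(psi) * np.sin(phi)])
    sim.add(m=0.0, x=R_TARGET, y=0.0, z=0.0, vx=v[0], vy=v[1], vz=v[2])

sim.integrate(3600.0)                 # evolve for one hour, then inspect sim.particles
```

In practice a wrapper of this kind would monitor particle distances and switch time step or integrator between the early, collision-dominated phase and the later orbital phase, as described above.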
Particle shape effect on erosion of optical glass substrates due to microparticles
NASA Astrophysics Data System (ADS)
Waxman, Rachel; Gray, Perry; Guven, Ibrahim
2018-03-01
Impact experiments using sand particles and soda lime glass spheres were performed on four distinct glass substrates. Sand particles were characterized using optical and scanning electron microscopy. High-speed video footage from impact tests was used to calculate incoming and rebound velocities of the individual impact events, as well as the particle volume and two-dimensional sphericity. Furthermore, video analysis was used in conjunction with optical and scanning electron microscopy to relate the incoming velocity and particle shape to subsequent fractures, including both radial and lateral cracks. Indentation theory [Marshall et al., J. Am. Ceram. Soc. 65, 561-566 (1982)] was applied and correlated with lateral crack lengths. Multi-variable power-law regression was performed, incorporating particle shape into the model; the resulting model showed a better fit to the damage data than the previous indentation model.
ERIC Educational Resources Information Center
Ziegler, Robert Edward
This study is concerned with determining the relative effectiveness of a static and dynamic theoretical model in teaching elementary school students to use the particle idea of matter when explaining certain physical phenomena. A clinical method of personal individual interview-testing, teaching, and retesting of a random sample population from…
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
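As a minimal illustration of parametric, moment-matching resampling, the Python sketch below fits the maximum-entropy density consistent with the ensemble's first two moments (a single Gaussian) and redraws the particles from it. The method described above extends this to Gaussian mixtures and additional moment constraints, so this is only the simplest special case.

```python
import numpy as np

def max_entropy_gaussian_resample(particles, weights, n_out, rng=None):
    """Parametric resampling step for a particle filter.

    particles: (N, d) array of ensemble states; weights: (N,) importance
    weights.  The maximum-entropy density with the ensemble's weighted mean
    and covariance is a Gaussian, from which a fresh sample of size n_out
    is drawn.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = weights / weights.sum()
    mean = w @ particles                         # weighted ensemble mean
    diff = particles - mean
    cov = (w[:, None] * diff).T @ diff           # weighted ensemble covariance
    return rng.multivariate_normal(mean, cov, size=n_out)
```

Such a resampling step would be applied after the weighting (analysis) stage of the filter, replacing the usual multinomial resampling when the ensemble is too small to represent the tails.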
NASA Technical Reports Server (NTRS)
Goetz, Michael B.
2011-01-01
The Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) entered its third and final year of development with an overall goal of providing a unified tool to simulate active and passive spaceborne atmospheric remote sensing instruments. These simulations focus on the atmosphere ranging from UV to microwaves. ISSARS handles all assumptions and uses various models on scattering and microphysics to fill the gaps left unspecified by the atmospheric models to create each instrument's measurements. This will help benefit mission design and reduce mission cost, create efficient implementation of multi-instrument/platform Observing System Simulation Experiments (OSSE), and improve existing models as well as new advanced models in development. In this effort, various aerosol particles are incorporated into the system, and a simulation of input wavelength and spectral refractive indices related to each spherical test particle generates its scattering properties and phase functions. These atmospheric particles being integrated into the system comprise the ones observed by the Multi-angle Imaging SpectroRadiometer (MISR) and by the Multiangle SpectroPolarimetric Imager (MSPI). In addition, a complex scattering database generated by Prof. Ping Yang (Texas A&M) is also incorporated into this aerosol database. Future development with a radiative transfer code will generate a series of results that can be validated with results obtained by the MISR and MSPI instruments; nevertheless, test cases are simulated to determine the validity of various plugin libraries used to determine or gather the scattering properties of particles studied by MISR and MSPI, or within the Single-scattering properties of tri-axial ellipsoidal mineral dust particles database created by Prof. Ping Yang.
NASA Astrophysics Data System (ADS)
Barnsley, Lester C.; Carugo, Dario; Aron, Miles; Stride, Eleanor
2017-03-01
The aim of this study was to characterize the behaviour of superparamagnetic particles in magnetic drug targeting (MDT) schemes. A 3-dimensional mathematical model was developed, based on the analytical derivation of the trajectory of a magnetized particle suspended inside a fluid channel carrying laminar flow and in the vicinity of an external source of magnetic force. Semi-analytical expressions to quantify the proportion of captured particles, and their relative accumulation (concentration) as a function of distance along the wall of the channel were also derived. These were expressed in terms of a non-dimensional ratio of the relevant physical and physiological parameters corresponding to a given MDT protocol. The ability of the analytical model to assess magnetic targeting schemes was tested against numerical simulations of particle trajectories. The semi-analytical expressions were found to provide good first-order approximations for the performance of MDT systems in which the magnetic force is relatively constant over a large spatial range. The numerical model was then used to test the suitability of a range of different designs of permanent magnet assemblies for MDT. The results indicated that magnetic arrays that emit a strong magnetic force that varies rapidly over a confined spatial range are the most suitable for concentrating magnetic particles in a localized region. By comparison, commonly used magnet geometries such as button magnets and linear Halbach arrays result in distributions of accumulated particles that are less efficient for delivery. The trajectories predicted by the numerical model were verified experimentally by acoustically focusing magnetic microbeads flowing in a glass capillary channel, and optically tracking their path past a high field gradient Halbach array.
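The trajectory calculation at the core of such models balances Stokes drag against the magnetophoretic force, so the particle advects with the local flow while drifting toward the magnet at its terminal velocity. The Python sketch below integrates this for a 2D Poiseuille channel with a constant magnetic force; all parameter values are illustrative, and the spatially varying force fields of the magnet assemblies studied above are not reproduced.

```python
import numpy as np

def particle_path(y0, u_max=0.1, h=1e-3, mu=1e-3, a=1e-7,
                  f_mag=1e-12, dt=1e-3, n_steps=5000):
    """Trace a magnetized particle across a 2D channel of half-height h [m].

    The particle moves with the local parabolic (Poiseuille) flow in x and
    drifts toward the wall at y = -h at the Stokes terminal velocity set by
    a constant magnetic force f_mag [N] on a particle of radius a [m] in a
    fluid of viscosity mu [Pa s].  Values are placeholders for illustration.
    """
    v_mag = f_mag / (6.0 * np.pi * mu * a)       # terminal drift speed
    x, y = 0.0, y0
    path = [(x, y)]
    for _ in range(n_steps):
        u_x = u_max * (1.0 - (y / h) ** 2)       # local axial flow speed
        x += u_x * dt
        y -= v_mag * dt                          # drift toward the lower wall
        if y <= -h:                              # particle captured at the wall
            path.append((x, -h))
            break
        path.append((x, y))
    return np.array(path)
```

Running this for a range of release heights y0 gives the capture distance along the wall, which is the quantity the semi-analytical capture-fraction expressions summarize.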
ERIC Educational Resources Information Center
McIntyre, Patrick J.
1974-01-01
Reported is a study to verify the pattern of bias associated with the Model Identification Test and to determine its source. This instrument is a limited verbal science test designed to determine the knowledge possessed by elementary school children of selected concepts related to "the particle nature of matter." (PEB)
NASA Astrophysics Data System (ADS)
Zhu, Weiyao; Li, Jianhui; Lou, Yu
2018-02-01
Polymer flooding has become an effective way to improve sweep efficiency in many oil fields, and many scholars have carried out extensive research on its mechanisms. In this paper, the effect of polymer on seepage is analyzed. The blocking effect of polymer particles was studied experimentally, and the residual resistance factor (RRF) was used to represent the blocking effect. We also built a mathematical model for the heterogeneous concentration distribution of polymer particles. Furthermore, the effects of polymer particles on reservoir permeability, fluid viscosity and relative permeability are considered, and a two-phase flow model of oil and polymer particles is established. In addition, the model was tested in a heterogeneous stratum model, and three influencing factors, namely particle concentration, injection volume and PPD (polymer particle dispersion) injection time, were analyzed. Simulation results show that PPD can effectively improve sweep efficiency and, in particular, improve oil recovery from the low-permeability layer. Oil recovery increases with particle concentration, but the rate of increase gradually declines. The greater the injected amount of PPD, the greater the oil recovery and the smaller the rate of increase. There is also an optimal timing for PPD injection for a specific reservoir.
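For orientation, the residual resistance factor referred to above is commonly defined as the ratio of the aqueous-phase mobility before and after polymer (particle) treatment, equivalently the ratio of pressure drops measured at the same flow rate. This is a standard definition given here for context rather than quoted from the paper.

```latex
% Residual resistance factor (standard definition):
\mathrm{RRF} \;=\; \frac{(k_w/\mu_w)_{\text{before}}}{(k_w/\mu_w)_{\text{after}}}
           \;=\; \left.\frac{\Delta p_{\text{after}}}{\Delta p_{\text{before}}}\right|_{\text{same flow rate}}
```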
Particle Acceleration in a Statistically Modeled Solar Active-Region Corona
NASA Astrophysics Data System (ADS)
Toutounzi, A.; Vlahos, L.; Isliker, H.; Dimitropoulou, M.; Anastasiadis, A.; Georgoulis, M.
2013-09-01
Elaborating a statistical approach to describe the spatiotemporally intermittent electric field structures formed inside a flaring solar active region, we investigate the efficiency of such structures in accelerating charged particles (electrons). The large-scale magnetic configuration in the solar atmosphere responds to the strong turbulent flows that convey perturbations across the active region by initiating avalanche-type processes. The resulting unstable structures correspond to small-scale dissipation regions hosting strong electric fields. Previous research on particle acceleration in strongly turbulent plasmas provides a general framework for addressing such a problem. This framework combines various electromagnetic field configurations obtained by magnetohydrodynamical (MHD) or cellular automata (CA) simulations, or by employing a statistical description of the field's strength and configuration with test particle simulations. Our objective is to complement previous work done on the subject. As in previous efforts, a set of three probability distribution functions describes our ad-hoc electromagnetic field configurations. In addition, we work on data-driven 3D magnetic field extrapolations. A collisional relativistic test-particle simulation traces each particle's guiding center within these configurations. We also find that an interplay between different electron populations (thermal/non-thermal, ambient/injected) in our simulations may also address, via a re-acceleration mechanism, the so called `number problem'. Using the simulated particle-energy distributions at different heights of the cylinder we test our results against observations, in the framework of the collisional thick target model (CTTM) of solar hard X-ray (HXR) emission. The above work is supported by the Hellenic National Space Weather Research Network (HNSWRN) via the THALIS Programme.
Deducing Shape of Anisotropic Particles in Solution from Light Scattering: Spindles and Nanorods
NASA Astrophysics Data System (ADS)
Tsuper, Ilona; Terrano, Daniel; Streletzky, Kiril A.; Dement'eva, Olga V.; Semyonov, Sergey A.; Rudoy, Victor M.
Depolarized Dynamic Light Scattering (DDLS) enables measurement of the rotational and translational diffusion of nanoparticles suspended in solution. The particle size, shape, diffusion, and interactions can then be inferred from the DDLS data using various models of diffusion. Applying DDLS to the dimensions of easily imaged elongated particles, such as iron(III) oxyhydroxide (FeOOH) spindles and gold nanorods, allows testing of the models for rotational and translational diffusion of elongated particles in solution. This, in turn, can help to better interpret DDLS data on hard-to-image anisotropic wet systems such as micelles, microgels, and protein complexes. This study focused on FeOOH spindles and gold nanorod particles. The light scattering results on FeOOH, analyzed using the basic model of non-interacting prolate ellipsoids, yielded dimensions within 17% of the SEM-measured dimensions. The dimensions of gold nanorods obtained from the straight cylinder model of the DDLS data were within 25% of the sizes obtained from TEM. The nanorod DDLS data were also analyzed with a spherocylinder model.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
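Once the gamma-distribution parameters are known for a given flow, simulating a particle's travel reduces to summing random step lengths and rest periods. A minimal Python sketch is shown below, with placeholder shape and scale values and without the conditioning on bed elevation described above.

```python
import numpy as np

def simulate_particle_displacement(n_steps=100, shape_step=2.0, scale_step=0.3,
                                   shape_rest=1.5, scale_rest=50.0, rng=None):
    """Total downstream displacement [m] and elapsed time [s] for one bed particle.

    Step lengths and rest periods are drawn from two-parameter gamma densities;
    the shape/scale values here are illustrative, whereas in the study they are
    conditioned on the bed elevations of erosion and deposition.
    """
    rng = np.random.default_rng() if rng is None else rng
    steps = rng.gamma(shape_step, scale_step, size=n_steps)   # step lengths
    rests = rng.gamma(shape_rest, scale_rest, size=n_steps)   # rest durations
    return steps.sum(), rests.sum()
```

Averaging many such realizations gives the mean virtual velocity and dispersion of the bed material for the simulated flow condition.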
Optical modeling of volcanic ash particles using ellipsoids
NASA Astrophysics Data System (ADS)
Merikallio, Sini; Muñoz, Olga; Sundström, Anu-Maija; Virtanen, Timo H.; Horttanainen, Matti; de Leeuw, Gerrit; Nousiainen, Timo
2015-05-01
The single-scattering properties of volcanic ash particles are modeled here by using ellipsoidal shapes. Ellipsoids are expected to improve the accuracy of the retrieval of aerosol properties using remote sensing techniques, which are currently often based on oversimplified assumptions of spherical ash particles. Measurements of the single-scattering optical properties of ash particles from several volcanoes across the globe, including previously unpublished measurements from the Eyjafjallajökull and Puyehue volcanoes, are used to assess the performance of the ellipsoidal particle models. These comparisons between the measurements and the ellipsoidal particle model include consideration of the whole scattering matrix, as well as sensitivity studies on the point of view of the Advanced Along Track Scanning Radiometer (AATSR) instrument. AATSR, which flew on the ENVISAT satellite, offers two viewing directions but no information on polarization, so usually only the phase function is relevant for interpreting its measurements. As expected, ensembles of ellipsoids are able to reproduce the observed scattering matrix more faithfully than spheres. Performance of ellipsoid ensembles depends on the distribution of particle shapes, which we tried to optimize. No single specific shape distribution could be found that would perform superiorly in all situations, but all of the best-fit ellipsoidal distributions, as well as the additionally tested equiprobable distribution, improved greatly over the performance of spheres. We conclude that an equiprobable shape distribution of ellipsoidal model particles is a relatively good, yet enticingly simple, approach for modeling volcanic ash single-scattering optical properties.
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques, where self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with those obtained from standard molecular dynamics simulations based on pair potentials.
NASA Astrophysics Data System (ADS)
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Finding software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to provide a list of priorities indicating which modules require more intensive testing, so that test resources can be allocated efficiently. Learning to rank is one of the approaches that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results have demonstrated that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets and perform worse on 1 data set. Thus, we conclude that the application of chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of defect module ranking in data sets that have high-dimensional features.
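A generic sketch of the optimization idea, a particle swarm with logistic-map chaotic initialization and Gaussian perturbation of the global best, is given below in Python. It is not the authors' exact algorithm or update rules; in this context the objective function would wrap the learning-to-rank model's ranking error on a training set.

```python
import numpy as np

def chaotic_gaussian_pso(objective, dim, n_particles=30, n_iters=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over [0, 1]^dim with a chaotic/Gaussian PSO variant."""
    rng = np.random.default_rng(seed)
    # Chaotic initialization: iterate the logistic map x <- 4x(1-x).
    pos = rng.uniform(0.01, 0.99, (n_particles, dim))
    for _ in range(50):
        pos = 4.0 * pos * (1.0 - pos)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        if vals.min() < g_val:
            g, g_val = pos[vals.argmin()].copy(), vals.min()
        # Gaussian perturbation of the global best to escape local optima.
        cand = np.clip(g + rng.normal(0.0, 0.05, dim), 0.0, 1.0)
        cand_val = objective(cand)
        if cand_val < g_val:
            g, g_val = cand, cand_val
    return g, g_val
```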
NASA Astrophysics Data System (ADS)
Kulp-McDowall, Taylor; Ochs, Ian; Fisch, Nathaniel
2016-10-01
A particle pusher was constructed in MATLAB using a fourth-order Runge-Kutta algorithm to investigate the wave-particle interactions within theoretical models of the MCMF. The model was simplified to a radial electric field and a magnetic field directed along z. Studies on an average velocity calculation were conducted in order to test the program's behavior in the large-radius limit; the results verified that the particle pusher was behaving correctly. Waves were then simulated on the rotating particles with a periodic, divergence-free perturbation in the Bz component of the magnetic field. Preliminary runs indicate agreement of the particle's motion with analytical predictions, i.e., cyclic contractions of the doubly rotating particle's gyroradius. The next stage of the project involves the implementation of particle collisions and turbulence within the particle pusher in order to increase its accuracy and applicability. This will allow for a further investigation of the alpha channeling electrode replacement thesis first proposed by Abraham Fetterman in 2011. Made possible by Grants from the Princeton Environmental Institute (PEI) and the Program for Plasma Science and Technology (PPST).
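The original pusher was written in MATLAB; the Python sketch below shows the same ingredients, a fourth-order Runge-Kutta step for the Lorentz force with a radial electric field and uniform Bz, in normalized units. The field amplitudes and the charge-to-mass ratio are placeholders, not the values of the model above.

```python
import numpy as np

Q_M = 1.0                                   # charge-to-mass ratio (normalized units)

def lorentz_accel(x, v):
    """Acceleration from a radial electric field plus a uniform axial magnetic field."""
    r = np.array([x[0], x[1], 0.0])
    E = -1.0 * r                            # inward radial E with magnitude ~ r (placeholder)
    B = np.array([0.0, 0.0, 1.0])           # uniform Bz (placeholder amplitude)
    return Q_M * (E + np.cross(v, B))

def rk4_step(x, v, dt):
    """One fourth-order Runge-Kutta step for the coupled position/velocity system."""
    k1x, k1v = v, lorentz_accel(x, v)
    k2x, k2v = v + 0.5*dt*k1v, lorentz_accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, lorentz_accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, lorentz_accel(x + dt*k3x, v + dt*k3v)
    x_new = x + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    v_new = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    return x_new, v_new
```

A wave perturbation of the kind described above would be added by making the Bz component in lorentz_accel a periodic, divergence-free function of position and time.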
Jet penetration into a riser operated in dense suspension upflow: experimental and model comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadle, L.J.; Ludlow, C.J.; Spenik, J.L.
2008-05-13
Solids tracers were used to characterize the penetration of a gas-solids jet directed toward the center of the 0.3-m diameter, circulating fluidized bed (CFB) riser. The penetration was measured by tracking phosphorescent particles illuminated immediately prior to injection into the riser. Photosensors and piezoelectric detectors were traversed across the radius of the riser at various axial positions to detect the phosphorescent jet material and particles traveling in the radial direction. Local particle velocities were measured at various radial positions, riser heights, and azimuthal angles using an optical fiber probe. Four variables were tested, including the jet velocity, solids feed rate into the jet, the riser velocity, and overall CFB circulation rate, over 8 distinct test cases with the central, or base, case repeated each time the test series was conducted. In addition to the experimental measurements made, the entire riser with a side feed jet of solids was simulated using the Eulerian-Eulerian computer model MFIX.
Mobility of Nanoscale and Microscale iron for groundwater remediation: experiments and modelling
NASA Astrophysics Data System (ADS)
Tosco, T.; Gastone, F.; Sethi, R.
2012-12-01
Colloidal suspensions of zerovalent iron micro- and nanoparticles (MZVI and NZVI) have been studied in recent years for in-situ groundwater remediation. Thanks to their small size, MZVI and NZVI can be dispersed in aqueous suspensions and directly injected into the subsurface, for a targeted treatment of contamination plumes and even sources. However, colloidal dispersions of such particles are not stable in pure water, due to fast aggregation (for NZVI) and gravitational sedimentation (for MZVI). Viscous, environmentally friendly fluids (guar gum and xanthan gum solutions), which exhibit shear thinning rheological properties, were found to be effective in improving colloidal stability, thus greatly improving handling and injectability (1-3). The present work reports laboratory tests and numerical modelling concerning the mobility of MZVI and NZVI viscous suspensions in porous media. The efficacy of xanthan and guar gum was investigated in column transport tests, performed injecting highly concentrated iron suspensions (20 g/L), dispersed in xanthan gum (3 g/L) and guar gum (3-6 g/L) solutions. Particle breakthrough curves and concentration profiles were monitored by magnetic susceptibility measurements. Pressure drop at column ends was also continuously monitored. The tests proved that green polymers can greatly improve both colloidal stability and mobility of the particles. Their use is fundamental in particular for MZVI, which cannot be transported nor even dispersed in pure water. A numerical model for MZVI and NZVI transport in porous media was then developed (E-MNM1D, Enhanced Micro- and Nanoparticle transport Model in porous media in 1D geometry) (4). Due to the high concentration of the particles and to the non-Newtonian rheology of the carrier fluid, hydrodynamic parameters, fluid properties and concentrations of deposited and suspended particles are mutually influenced. The rheological properties of the suspensions are accounted for through a variable viscosity, a function of flow rate and of polymer and particle concentrations. The particle-porous medium interactions are modelled with a dual-site approach, accounting for straining and physico-chemical deposition/release phenomena. A general formulation for reversible deposition is also proposed that includes all commonly applied dynamics (linear attachment, blocking, ripening). The progressive clogging of the porous medium, due to deposition and filtration of particles and aggregates, is modelled by tying porosity and permeability to the deposited iron particles. E-MNM1D can be downloaded at www.polito.itgroundwatersoftware. The software is designed as a tool for inverse modelling of laboratory transport tests, and as a support in the design of field-scale applications of MZVI- and NZVI-based remediation, in particular for the estimate of the radius of influence of the slurry injection. The work was partly funded by the European Union project AQUAREHAB (FP7 - Grant Agreement Nr. 226565). References: 1. Tiraferri, A.; Sethi, R. Journal of Nanoparticle Research 2009, 11(3), 635-645. 2. Tiraferri, A.; Chen, K.L.; Sethi, R.; Elimelech, M. Journal of Colloid and Interface Science 2008, 324(1-2), 71-79. 3. Dalla Vecchia, E.; Luna, M.; Sethi, R. Environmental Science & Technology 2009, 43(23), 8942-8947. 4. Tosco, T.; Sethi, R. Environmental Science and Technology 2010, 44(23), 9062-9068.
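The dual-site transport formulation referred to above has the generic structure below: a 1D advection-dispersion equation for the liquid phase coupled to kinetic attachment/detachment at two deposition sites. This is the standard form of MNM1D-type models, written with illustrative notation rather than the exact closures of E-MNM1D, which additionally make the coefficients depend on deposited concentration, flow rate and polymer rheology.

```latex
% Liquid-phase balance coupled to two deposition sites (i = 1, 2):
\frac{\partial (n C)}{\partial t}
  + \rho_b \frac{\partial S_1}{\partial t}
  + \rho_b \frac{\partial S_2}{\partial t}
  = \frac{\partial}{\partial x}\!\left( n D \frac{\partial C}{\partial x} \right)
  - q \frac{\partial C}{\partial x},
\qquad
\rho_b \frac{\partial S_i}{\partial t} = n\, k_{a,i}\, C - \rho_b\, k_{d,i}\, S_i
```

Here C and S_i are the liquid- and solid-phase particle concentrations, n the porosity, rho_b the bulk density, q the Darcy flux, D the dispersion coefficient and k_{a,i}, k_{d,i} the attachment and detachment rate coefficients for each site.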
Acceleration of low-energy ions at parallel shocks with a focused transport model
Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.
2013-04-10
Here, we present a test particle simulation on the injection and acceleration of low-energy suprathermal particles by parallel shocks with a focused transport model. The focused transport equation contains all necessary physics of shock acceleration, but avoids the limitation of diffusive shock acceleration (DSA) that requires a small pitch angle anisotropy. This simulation verifies that the particles with speeds of a fraction of to a few times the shock speed can indeed be directly injected and accelerated into the DSA regime by parallel shocks. At higher energies starting from a few times the shock speed, the energy spectrum of accelerated particles is a power law with the same spectral index as the solution of standard DSA theory, although the particles are highly anisotropic in the upstream region. The intensity, however, is different from that predicted by DSA theory, indicating a different level of injection efficiency. It is found that the shock strength, the injection speed, and the intensity of an electric cross-shock potential (CSP) jump can affect the injection efficiency of the low-energy particles. A stronger shock has a higher injection efficiency. In addition, if the speed of injected particles is above a few times the shock speed, the produced power-law spectrum is consistent with the prediction of standard DSA theory in both its intensity and spectrum index with an injection efficiency of 1. CSP can increase the injection efficiency through direct particle reflection back upstream, but it has little effect on the energetic particle acceleration once the speed of injected particles is beyond a few times the shock speed. This test particle simulation proves that the focused transport theory is an extension of DSA theory with the capability of predicting the efficiency of particle injection.
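The standard steady-state DSA result against which the simulated spectra are compared is the power-law momentum distribution set by the shock compression ratio r:

```latex
% Steady-state diffusive shock acceleration spectrum:
f(p) \;\propto\; p^{-q},
\qquad
q \;=\; \frac{3r}{r-1}
```

A strong shock with r = 4 therefore gives q = 4; the focused transport simulation recovers this index at energies above a few times the shock speed while additionally predicting the injection efficiency, which standard DSA theory leaves unspecified.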
NASA Astrophysics Data System (ADS)
He, Shuangyan; Zhang, Xiaodong; Xiong, Yuanheng; Gray, Deric
2017-11-01
The subsurface remote sensing reflectance (rrs, sr-1), particularly its bidirectional reflectance distribution function (BRDF), depends fundamentally on the angular shape of the volume scattering functions (VSFs, m-1 sr-1). Recent technological advancement has greatly expanded the collection, and the knowledge of natural variability, of the VSFs of oceanic particles. This allows us to test Zaneveld's theoretical rrs model that explicitly accounts for particle VSF shapes. We parameterized the rrs model based on HydroLight simulations using 114 VSFs measured in three coastal waters around the United States and in oceanic waters of the North Atlantic Ocean. With the absorption coefficient (a), backscattering coefficient (bb), and VSF shape as inputs, the parameterized model is able to predict rrs with a root mean square relative error of ~4% for solar zenith angles from 0 to 75°, viewing zenith angles from 0 to 60°, and viewing azimuth angles from 0 to 180°. A test with the field data indicates that the performance of our model, when using only a and bb as inputs and selecting the VSF shape using bb, is comparable to or slightly better than the currently used models by Morel et al. and Lee et al. Explicitly expressing VSF shapes in rrs modeling has great potential to further constrain the uncertainty in ocean color studies as our knowledge of the VSFs of natural particles continues to improve. Our study represents a first effort in this direction.
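For orientation, widely used rrs parameterizations that a VSF-shape-dependent model generalizes take a low-order polynomial form in the ratio of backscattering to total absorption plus backscattering:

```latex
% Common baseline form of the subsurface remote sensing reflectance:
r_{rs} \;\approx\; g_1\, u + g_2\, u^{2},
\qquad
u = \frac{b_b}{a + b_b}
```

In such models g1 and g2 (sr-1) are fitted coefficients; in the approach described above the corresponding coefficients effectively become functions of the sun-view geometry and of the VSF shape rather than constants.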
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling. This results in a more accurate particle-size distribution and particle injection height estimation when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and largest size. Additionally we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
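The estimation step couples a dispersion-model run to a nonlinear least-squares update of the source parameters. A minimal Python sketch using SciPy is shown below; forward_model is a hypothetical placeholder for the dispersion/cloud-rise calculation, and the log-space residual weighting is an assumption made here for illustration, not necessarily the weighting used in the study.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_source_parameters(measured, forward_model, p0, bounds):
    """Estimate source-term parameters by matching measured air concentrations.

    measured:      array of sampler concentrations (all positive).
    forward_model: hypothetical callable mapping a parameter vector (e.g. mean
                   aerodynamic diameter, geometric standard deviation, cloud
                   coupling coefficient) to predicted concentrations at the
                   same samplers, via a dispersion-model run.
    p0, bounds:    initial guess and (lower, upper) parameter bounds.
    """
    def residuals(p):
        predicted = forward_model(p)
        # Log-space residuals so order-of-magnitude errors are weighted
        # evenly across samplers with very different concentrations.
        return np.log(predicted) - np.log(measured)

    return least_squares(residuals, p0, bounds=bounds)
```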
1974-04-01
described in Section 2.3. 2.1 MODEL FABRICATION AND MOUNTING Camphor and camphor with distributed glass particles were the materials for the low...temperature ablator shape-change models tested in Series I. The models were fabricated by molding the camphor at room temperature and high pressure (20,000 psi...distributed glass particles were produced by thoroughly mixing glass beads, having diameters of 7.5 ± 1.5 mils, with the camphor granules prior to
Source apportionment of airborne particulate matter using organic compounds as tracers
NASA Astrophysics Data System (ADS)
Schauer, James J.; Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.
A chemical mass balance receptor model based on organic compounds has been developed that relates source contributions to airborne fine particle mass concentrations. Source contributions to the concentrations of specific organic compounds are revealed as well. The model is applied to four air quality monitoring sites in southern California using atmospheric organic compound concentration data and source test data collected specifically for the purpose of testing this model. The contributions of up to nine primary particle source types can be separately identified in ambient samples based on this method, and approximately 85% of the organic fine aerosol is assigned to primary sources on an annual average basis. The model provides information on source contributions to fine mass concentrations, fine organic aerosol concentrations and individual organic compound concentrations. The largest primary source contributors to fine particle mass concentrations in Los Angeles are found to include diesel engine exhaust, paved road dust, gasoline-powered vehicle exhaust, plus emissions from food cooking and wood smoke, with smaller contribution from tire dust, plant fragments, natural gas combustion aerosol, and cigarette smoke. Once these primary aerosol source contributions are added to the secondary sulfates, nitrates and organics present, virtually all of the annual average fine particle mass at Los Angeles area monitoring sites can be assigned to its source.
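The chemical mass balance calculation itself is a constrained linear fit of source profiles to ambient concentrations. The Python sketch below shows a simplest-case nonnegative least-squares version; the operational CMB model additionally weights each compound by its measurement and source-profile uncertainties (effective-variance weighting), which is omitted here for brevity.

```python
import numpy as np
from scipy.optimize import nnls

def cmb_source_contributions(ambient, profiles):
    """Chemical mass balance in its simplest least-squares form.

    ambient:  (n_compounds,) measured ambient concentrations of the tracer
              organic compounds.
    profiles: (n_compounds, n_sources) abundances of each compound per unit
              fine-particle mass emitted by each source.
    Returns the nonnegative source contributions s solving
    ambient ~= profiles @ s in the least-squares sense.
    """
    s, _residual_norm = nnls(profiles, ambient)
    return s
```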
NASA Astrophysics Data System (ADS)
Geisler, Taylor; Padhy, Sourav; Shaqfeh, Eric; Iaccarino, Gianluca
2016-11-01
Both the human health benefit and risk from the inhalation of aerosolized medications is often predicted by extrapolating experimental data taken using nonhuman primates to human inhalation. In this study, we employ Large Eddy Simulation to simulate particle-fluid dynamics in realistic upper airway models of both humans and rhesus monkeys. We report laminar-to-turbulent flow transitions triggered by constrictions in the upper trachea and the persistence of unsteadiness into the low Reynolds number bifurcating lower airway. Micro-particle deposition fraction and locations are shown to depend significantly on particle size. In particular, particle filtration in the nasal airways is shown to approach unity for large aerosols (8 microns) or high-rate breathing. We validate the accuracy of LES mean flow predictions using MRV imaging results. Additionally, particle deposition fractions are validated against experiments in 3 model airways.
Modeling of ion acceleration through drift and diffusion at interplanetary shocks
NASA Technical Reports Server (NTRS)
Decker, R. B.; Vlahos, L.
1986-01-01
A test particle simulation designed to model ion acceleration through drift and diffusion at interplanetary shocks is described. The technique consists of integrating along exact particle orbits in a system where the angle between the shock normal and mean upstream magnetic field, the level of magnetic fluctuations, and the energy of injected particles can assume a range of values. The technique makes it possible to study time-dependent shock acceleration under conditions not amenable to analytical techniques. To illustrate the capability of the numerical model, proton acceleration was considered under conditions appropriate for interplanetary shocks at 1 AU, including large-amplitude transverse magnetic fluctuations derived from power spectra of both ambient and shock-associated MHD waves.
Onion-shell model for cosmic ray electrons and radio synchrotron emission in supernova remnants
NASA Technical Reports Server (NTRS)
Beck, R.; Drury, L. O.; Voelk, H. J.; Bogdan, T. J.
1985-01-01
The spectrum of cosmic ray electrons, accelerated in the shock front of a supernova remnant (SNR), is calculated in the test-particle approximation using an onion-shell model. Particle diffusion within the evolving remnant is explicity taken into account. The particle spectrum becomes steeper with increasing radius as well as SNR age. Simple models of the magnetic field distribution allow a prediction of the intensity and spectrum of radio synchrotron emission and their radial variation. The agreement with existing observations is satisfactory in several SNR's but fails in other cases. Radiative cooling may be an important effect, especially in SNR's exploding in a dense interstellar medium.
Pulsar Wind Model for the Spin-down Behavior of Intermittent Pulsars
NASA Astrophysics Data System (ADS)
Li, L.; Tong, H.; Yan, W. M.; Yuan, J. P.; Xu, R. X.; Wang, N.
2014-06-01
Intermittent pulsars are part-time radio pulsars. They have higher slow down rates in the on state (radio-loud) than in the off state (radio-quiet). This gives evidence that particle wind may play an important role in pulsar spindown. The effect of particle acceleration is included in modeling the rotational energy loss rate of the neutron star. Applying the pulsar wind model to the three intermittent pulsars (PSR B1931+24, PSR J1841-0500, and PSR J1832+0029) allows their magnetic fields and inclination angles to be calculated simultaneously. The theoretical braking indices of intermittent pulsars are also given. In the pulsar wind model, the density of the particle wind can always be the Goldreich-Julian density. This may ensure that different on states of intermittent pulsars are stable. The duty cycle of particle wind can be determined from timing observations. It is consistent with the duty cycle of the on state. Inclination angle and braking index observations of intermittent pulsars may help to test different models of particle acceleration. At present, the inverse Compton scattering induced space charge limited flow with field saturation model can be ruled out.
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between actual and estimated values that did not exceed 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
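A minimal sketch of the kind of multivariate mixing model exercised in these experiments: source proportions are estimated by minimizing the sum of squared relative errors across tracers, subject to non-negativity and a sum-to-one constraint, and a goodness-of-fit value is reported. The tracer concentrations, the number of tracers and the GOF definition below are synthetic placeholders, not the measured XRF data or the exact model used in the study.

import numpy as np
from scipy.optimize import minimize

# Tracer concentrations: rows = the 5 source soils, columns = tracers (e.g. Sr, Rb, Fe, ...).
# The values below are synthetic placeholders, not the measured XRF data.
rng = np.random.default_rng(1)
sources = rng.uniform(10.0, 100.0, size=(5, 9))
true_p = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
mixture = true_p @ sources                 # laboratory mixture, assumed conservative

def objective(p):
    # Sum of squared relative errors between measured and modelled mixture concentrations
    modelled = p @ sources
    return float(np.sum(((mixture - modelled) / mixture) ** 2))

n = sources.shape[0]
res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
gof = 1.0 - np.mean(np.abs(mixture - res.x @ sources) / mixture)   # one common GOF definition
print("estimated proportions:", np.round(res.x, 3))
print("goodness of fit      :", round(float(gof), 4))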
Sampling, testing and modeling particle size distribution in urban catch basins.
Garofalo, G; Carbone, M; Piro, P
2014-01-01
The study analyzed the particle size distribution of particulate matter (PM) retained in two catch basins located, respectively, near a parking lot and a traffic intersection with common high levels of traffic activity. Also, the treatment performance of a filter medium was evaluated by laboratory testing. The experimental treatment results and the field data were then used as inputs to a numerical model which described on a qualitative basis the hydrological response of the two catchments draining into each catch basin, respectively, and the quality of treatment provided by the filter during the measured rainfall. The results show that PM concentrations were on average around 300 mg/L (parking lot site) and 400 mg/L (road site) for the 10 rainfall-runoff events observed. PM with a particle diameter of <45 μm represented 40-50% of the total PM mass. The numerical model showed that a catch basin with a filter unit can remove 30 to 40% of the PM load depending on the storm characteristics.
Scattering Properties of Large Irregular Cosmic Dust Particles at Visible Wavelengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escobar-Cerezo, J.; Palmer, C.; Muñoz, O.
The effect of internal inhomogeneities and surface roughness on the scattering behavior of large cosmic dust particles is studied by comparing model simulations with laboratory measurements. The present work shows the results of an attempt to model a dust sample measured in the laboratory with simulations performed by a ray-optics model code. We consider this dust sample as a good analogue for interplanetary and interstellar dust as it shares its refractive index with known materials in these media. Several sensitivity tests have been performed for both structural cases (internal inclusions and surface roughness). Three different samples have been selected to mimic inclusion/coating inhomogeneities: two measured scattering matrices of hematite and white clay, and a simulated matrix for water ice. These three matrices are selected to cover a wide range of imaginary refractive indices. The selection of these materials also seeks to study astrophysical environments of interest such as Mars, where hematite and clays have been detected, and comets. Based on the results of the sensitivity tests shown in this work, we perform calculations for a size distribution of a silicate-type host particle model with inclusions and surface roughness to reproduce the experimental measurements of a dust sample. The model fits the measurements quite well, proving that surface roughness and internal structure play a role in the scattering pattern of irregular cosmic dust particles.
NASA Technical Reports Server (NTRS)
Bartkus, Tadas P.; Struk, Peter M.; Tsao, Jen-Ching
2017-01-01
This paper builds on previous work that compares numerical simulations of mixed-phase icing clouds with experimental data. The model couples the thermal interaction between ice particles and water droplets of the icing cloud with the flowing air of an icing wind tunnel for simulation of NASA Glenn Research Center's (GRC) Propulsion Systems Laboratory (PSL). Measurements were taken during the Fundamentals of Ice Crystal Icing Physics Tests at the PSL tunnel in March 2016. The tests simulated ice-crystal and mixed-phase icing that relate to ice accretions within turbofan engines. Experimentally measured air temperature, humidity, total water content, liquid and ice water content, as well as cloud particle size, are compared with model predictions. The model showed good trend agreement with experimentally measured values, but often over-predicted aero-thermodynamic changes. This discrepancy is likely attributed to radial variations that this one-dimensional model does not address. One of the key findings of this work is that greater aero-thermodynamic changes occur when humidity conditions are low. In addition, a range of mixed-phase clouds can be achieved by varying only the tunnel humidity conditions, but the range of humidities to generate a mixed-phase cloud becomes smaller when clouds are composed of smaller particles. In general, the model predicted melt fraction well, in particular with clouds composed of larger particle sizes.
Testing of models of VVH particle sources and propagation
NASA Technical Reports Server (NTRS)
Blanford, G. E., Jr.; Friedlander, M. W.; Hoppe, M.; Klarmann, J.; Walker, R. M.; Wefel, J. P.
1974-01-01
For comparisons between theoretical and observed charge spectra of VVH particles to be meaningful, at least two conditions must be met. First, charge resolution must be adequate to separate important groups of nuclei, and there should be no significant systematic errors in the charge scale developed. Second, there must be adequate rejection of slower particles of smaller Z, which have been observed in several flights. Within these conditions, it has been shown that observed features of the charge spectrum are not accidents of the analysis but reflect real variations in the relative abundances that must be explained by any successful model.
Supersymmetric model for dark matter and baryogenesis motivated by the recent CDMS result.
Allahverdi, Rouzbeh; Dutta, Bhaskar; Mohapatra, Rabindra N; Sinha, Kuver
2013-08-02
We discuss a supersymmetric model for cogenesis of dark and baryonic matter where the dark matter (DM) has mass in the 8-10 GeV range as indicated by several direct detection searches, including most recently the CDMS experiment with the desired cross section. The DM candidate is a real scalar field. Two key distinguishing features of the model are the following: (i) in contrast with the conventional weakly interacting massive particle dark matter scenarios where thermal freeze-out is responsible for the observed relic density, our model uses nonthermal production of dark matter after reheating of the Universe caused by moduli decay at temperatures below the QCD phase transition, a feature which alleviates the relic overabundance problem caused by small annihilation cross section of light DM particles and (ii) baryogenesis occurs also at similar low temperatures from the decay of TeV scale mediator particles arising from moduli decay. A possible test of this model is the existence of colored particles with TeV masses accessible at the LHC.
Simulation of 0.3 MWt AFBC test rig burning Turkish lignites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selcuk, N.; Degirmenci, E.; Oymak, O.
1997-12-31
A system model coupling bed and freeboard models for continuous combustion of lignite particles of wide size distribution burning in their own ash in a fluidized bed combustor was modified to incorporate: (1) a procedure for faster computation of particle size distributions (PSDs) without any sacrifice in accuracy; (2) an energy balance on char particles for the determination of the variation of temperature with particle size; and (3) a plug flow assumption for the interstitial gas. An efficient and accurate computer code developed for the solution of the conservation equations for energy and chemical species was applied to the prediction of the behavior of a 0.3 MWt AFBC test rig burning low quality Turkish lignites. The construction and operation of the test rig was carried out within the scope of a cooperation agreement between Middle East Technical University (METU) and Babcock and Wilcox GAMA (BWG) under the auspices of the Canadian International Development Agency (CIDA). Predicted concentration and temperature profiles and particle size distributions of solid streams were compared with measured data and found to be in reasonable agreement. The computer code replaces the conventional numerical integration of the analytical solution of the population balance with direct integration in ODE form by using the powerful integrator LSODE (Livermore Solver for Ordinary Differential Equations), resulting in a two orders of magnitude decrease in CPU (Central Processing Unit) time.
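The closing remark about integrating the population balance directly in ODE form is the part most easily illustrated in code. The sketch below discretizes a shrinking-char size distribution into bins and integrates it with SciPy's LSODA (a close relative of LSODE); the constant shrinkage rate and the feed distribution are illustrative placeholders for the lignite combustion kinetics of the actual model.

import numpy as np
from scipy.integrate import solve_ivp

# Discretized particle-size distribution: number of char particles in each size bin.
# A constant linear shrinkage rate k (m/s) is an illustrative stand-in for the
# combustion kinetics; the real model couples this to char temperature and oxygen.
d_edges = np.linspace(0.1e-3, 10e-3, 41)          # bin edges, 0.1-10 mm
d_mid = 0.5 * (d_edges[:-1] + d_edges[1:])
width = np.diff(d_edges)
k = 1.0e-6                                        # shrinkage rate, dd/dt = -k

def rhs(t, n):
    # dn/dt from particles shrinking out of each bin into the next smaller one
    flux = k * n / width                          # particles per second crossing a lower bin edge
    dn = np.empty_like(n)
    dn[:-1] = flux[1:] - flux[:-1]
    dn[-1] = -flux[-1]
    return dn

n0 = np.exp(-((d_mid - 3e-3) / 1e-3) ** 2)        # initial feed PSD (arbitrary shape)
sol = solve_ivp(rhs, (0.0, 2000.0), n0, method="LSODA", t_eval=[0, 500, 1000, 2000])
for t, n in zip(sol.t, sol.y.T):
    print(f"t = {t:6.0f} s   number-mean diameter = {np.sum(n * d_mid) / np.sum(n) * 1e3:5.2f} mm")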
A discrete mesoscopic particle model of the mechanics of a multi-constituent arterial wall.
Witthoft, Alexandra; Yazdani, Alireza; Peng, Zhangli; Bellini, Chiara; Humphrey, Jay D; Karniadakis, George Em
2016-01-01
Blood vessels have unique properties that allow them to function together within a complex, self-regulating network. The contractile capacity of the wall combined with complex mechanical properties of the extracellular matrix enables vessels to adapt to changes in haemodynamic loading. Homogenized phenomenological and multi-constituent, structurally motivated continuum models have successfully captured these mechanical properties, but truly describing intricate microstructural details of the arterial wall may require a discrete framework. Such an approach would facilitate modelling interactions between or the separation of layers of the wall and would offer the advantage of seamless integration with discrete models of complex blood flow. We present a discrete particle model of a multi-constituent, nonlinearly elastic, anisotropic arterial wall, which we develop using the dissipative particle dynamics method. Mimicking basic features of the microstructure of the arterial wall, the model comprises an elastin matrix having isotropic nonlinear elastic properties plus anisotropic fibre reinforcement that represents the stiffer collagen fibres of the wall. These collagen fibres are distributed evenly and are oriented in four directions, symmetric to the vessel axis. Experimental results from biaxial mechanical tests of an artery are used for model validation, and a delamination test is simulated to demonstrate the new capabilities of the model.
Magnetic Capture of a Molecular Biomarker from Synovial Fluid in a Rat Model of Knee Osteoarthritis
Yarmola, Elena G.; Shah, Yash; Arnold, David P.; Dobson, Jon; Allen, Kyle D.
2015-01-01
Biomarker development for osteoarthritis (OA) often begins in rodent models, but can be limited by an inability to aspirate synovial fluid from a rodent stifle (similar to the human knee). To address this limitation, we have developed a magnetic nanoparticle-based technology to collect biomarkers from a rodent stifle, termed magnetic capture. Using a common OA biomarker - the c-terminus telopeptide of type II collagen (CTXII) - magnetic capture was optimized in vitro using bovine synovial fluid and then tested in a rat model of knee OA. Anti-CTXII antibodies were conjugated to the surface of superparamagnetic iron oxide-containing polymeric particles. Using these anti-CTXII particles, magnetic capture was able to estimate the level of CTXII in 25 µL aliquots of bovine synovial fluid; and under controlled conditions, this estimate was unaffected by synovial fluid viscosity. Following in vitro testing, anti-CTXII particles were tested in a rat monoiodoacetate model of knee OA. CTXII could be magnetically captured from a rodent stifle without the need to aspirate fluid and showed 10-fold changes in CTXII levels from OA-affected joints relative to contralateral control joints. Combined, these data demonstrate the ability and sensitivity of magnetic capture for post-mortem analysis of OA biomarkers in the rat. PMID:26136062
Magnetic Capture of a Molecular Biomarker from Synovial Fluid in a Rat Model of Knee Osteoarthritis.
Yarmola, Elena G; Shah, Yash; Arnold, David P; Dobson, Jon; Allen, Kyle D
2016-04-01
Biomarker development for osteoarthritis (OA) often begins in rodent models, but can be limited by an inability to aspirate synovial fluid from a rodent stifle (similar to the human knee). To address this limitation, we have developed a magnetic nanoparticle-based technology to collect biomarkers from a rodent stifle, termed magnetic capture. Using a common OA biomarker--the c-terminus telopeptide of type II collagen (CTXII)--magnetic capture was optimized in vitro using bovine synovial fluid and then tested in a rat model of knee OA. Anti-CTXII antibodies were conjugated to the surface of superparamagnetic iron oxide-containing polymeric particles. Using these anti-CTXII particles, magnetic capture was able to estimate the level of CTXII in 25 μL aliquots of bovine synovial fluid; and under controlled conditions, this estimate was unaffected by synovial fluid viscosity. Following in vitro testing, anti-CTXII particles were tested in a rat monoiodoacetate model of knee OA. CTXII could be magnetically captured from a rodent stifle without the need to aspirate fluid and showed tenfold changes in CTXII levels from OA-affected joints relative to contralateral control joints. Combined, these data demonstrate the ability and sensitivity of magnetic capture for post-mortem analysis of OA biomarkers in the rat.
Testing the QGSJET01 and QGSJETII-04 models with the help of atmospheric muons
NASA Astrophysics Data System (ADS)
Dedenko, Leonid G.; Lukyashin, Anton V.; Roganova, Tatiana M.; Fedorova, Galina F.
2017-06-01
More accurate original calculations of the atmospheric vertical muon energy spectra at energies 10²-10⁵ GeV have been carried out in terms of the QGSJET01 and QGSJETII-04 models. The Gaisser-Honda approximations of the measured energy spectra of primary protons, helium and nitrogen nuclei have been used. The CORSIKA package has been used to simulate cascades in the standard atmosphere induced by different primary particles with various fixed energies E. Statistics of simulated cascades for secondary particles with energies (0.01-1)·E was increased up to 10⁶. It has been shown that predictions of the QGSJET01 and QGSJETII-04 models for these muon fluxes are below the data of the classical experiments L3 + Cosmic, MACRO and LVD by factors of ~1.7-2 at energies above 10² GeV. It has been concluded that these tested models underestimate the production of the most energetic secondary particles, namely, π-mesons and K-mesons, in interactions of primary protons and other primary nuclei with nuclei in the atmosphere by the same factors.
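For reference, the Gaisser-Honda approximation mentioned here has the form φ(E) = K·(E + b·exp(-c·√E))^(-α) per primary species. The sketch below evaluates it for primary protons and helium; the parameter values are quoted from memory as placeholders and should be checked against Gaisser & Honda (2002) before any quantitative use.

import numpy as np

def gaisser_honda_flux(E, alpha, K, b, c):
    # phi(E) = K * (E + b*exp(-c*sqrt(E)))**(-alpha), with E in GeV/nucleon and the
    # flux in m^-2 s^-1 sr^-1 (GeV/nucleon)^-1.
    return K * (E + b * np.exp(-c * np.sqrt(E))) ** (-alpha)

E = np.logspace(2, 5, 4)      # 10^2 - 10^5 GeV per nucleon
# Parameter values below are placeholders quoted from memory; check Gaisser & Honda (2002).
proton = gaisser_honda_flux(E, alpha=2.74, K=14900.0, b=2.15, c=0.21)
helium = gaisser_honda_flux(E, alpha=2.64, K=600.0, b=1.25, c=0.14)
for e, p, h in zip(E, proton, helium):
    print(f"E = {e:9.0f} GeV   proton flux = {p:10.3e}   helium flux = {h:10.3e}")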
Testing of the DPMJET and VENUS hadronic interaction models with help of the atmospheric muons
NASA Astrophysics Data System (ADS)
Dedenko, L. G.; Lukyashin, A. V.; Roganova, T. M.; Fedorova, G. F.
2017-01-01
More accurate original calculations of the atmospheric vertical muon energy spectra at energies 10²-10⁵ GeV have been carried out in terms of the DPMJET and VENUS models. The Gaisser-Honda approximations of the measured energy spectra of primary protons, helium and nitrogen nuclei have been used. The CORSIKA package has been used to simulate cascades in the standard atmosphere induced by different primary particles with various fixed energies E. Statistics of simulated cascades for secondary particles with energies (0.01-1)·E was increased up to 10⁶. It has been shown that predictions of the DPMJET and VENUS models for these muon fluxes are below the data of the classical experiments L3 + Cosmic, MACRO and LVD by factors of ~1.6-1.95 at energies above 10² GeV. It has been concluded that these tested models underestimate the production of the most energetic secondary particles, namely, π-mesons and K-mesons, in interactions of the primary protons and other primary nuclei with nuclei in the atmosphere by the same factors.
Investigations on the magnetization behavior of magnetic composite particles
NASA Astrophysics Data System (ADS)
Eichholz, Christian; Knoll, Johannes; Lerche, Dietmar; Nirschl, Hermann
2014-11-01
In the life sciences, the application of surface-functionalized magnetic composite particles is becoming established in diagnostics and in downstream processing in modern biotechnology. These magnetic composite particles consist of a non-magnetic material, e.g. polystyrene, which serves as a matrix for the second, magnetic component, usually colloidal magnetite. Because of the multitude of magnetic cores, these magnetic beads show a complex magnetization behavior which cannot be described with the available approaches for homogeneous magnetic material. Therefore, in this work a new model for the magnetization behavior of magnetic composite particles is developed. By introducing an effective magnetization and considering an overall demagnetization factor, the deviation from the demagnetization of homogeneously magnetized particles is taken into account. Calculated and experimental results show good agreement, which allows for verification of the adapted model of particle magnetization. In addition, a newly developed magnetic analyzing centrifuge is used for the characterization of magnetic composite particle systems. The experimental results, also used for the model verification, give information about both the magnetic properties and the interaction behavior of particle systems. By adding further components to the particle solution, such as salts or proteins, industrially relevant systems can be reconstructed. The analyzing tool can be used to adapt industrial processes without time-consuming preliminary tests with large samples in the process equipment.
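A minimal sketch of the demagnetization bookkeeping described here: a body with effective susceptibility χ in an applied field H carries M = χH/(1 + Nχ) once an overall demagnetization factor N is taken into account. The dilute-mixing estimate of the effective susceptibility, the volume fraction and the intrinsic susceptibility below are illustrative assumptions, not the paper's fitted values.

def magnetization(H_applied, chi_eff, N_demag):
    # Self-consistent magnetization of a uniformly magnetized body:
    # H_internal = H_applied - N*M and M = chi*H_internal  =>  M = chi*H_applied / (1 + N*chi)
    return chi_eff * H_applied / (1.0 + N_demag * chi_eff)

# Composite bead: magnetite cores dispersed in a polystyrene matrix. A simple
# dilute-mixing estimate of the effective susceptibility is used as an
# illustrative assumption; the volume fraction and chi are placeholders.
f_magnetite = 0.15          # volume fraction of magnetite cores
chi_magnetite = 3.0         # illustrative intrinsic susceptibility of the cores
chi_eff = f_magnetite * chi_magnetite
N_sphere = 1.0 / 3.0        # overall demagnetization factor of a spherical bead

for H in (1e3, 1e4, 1e5):   # applied field, A/m
    print(f"H = {H:8.0f} A/m   M = {magnetization(H, chi_eff, N_sphere):10.2f} A/m")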
Dominici, Luca; Guerrera, Elena; Villarini, Milena; Fatigoni, Cristina; Moretti, Massimo; Blasi, Paolo; Monarca, Silvano
2013-01-01
In tunnel construction, workers exposed to dust from blasting, gases, diesel exhausts, and oil mist have shown higher risk for pulmonary diseases. A clear mechanism to explain how these pollutants determine diseases is lacking, and alveolar epithelium's capacity to ingest inhaled fine particles is not well characterized. The objective of this study was to assess the genotoxic effect exerted by fine particles collected in seven tunnels using the cytokinesis-block micronuclei test in an in vitro model on type II lung epithelium A549 cells. For each tunnel, five fractions with different aerodynamic diameters of particulate matter were collected with a multistage cascade sampler. The human epithelial cell line A549 was exposed to 0.2 m3/mL equivalent of particulate for 24 h before testing. The cytotoxic effects of particulate matter on A549 cells were also evaluated in two different viability tests. In order to evaluate the cells' ability to take up fine particles, imaging with transmission electron microscopy of cells after exposure to particulate matter was performed. Particle endocytosis after 24 h exposure was observed as intracellular aggregates of membrane-bound particles. This morphologic evidence did not correspond to an increase in genotoxicity detected by the micronucleus test. PMID:24069598
Pairwise-interaction extended point-particle model for particle-laden flows
NASA Astrophysics Data System (ADS)
Akiki, G.; Moore, W. C.; Balachandar, S.
2017-12-01
In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors the PIEP model goes beyond local particle volume fraction, and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow that is approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from the undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient the PIEP model is shown to capture the drafting-kissing-tumbling process. In cases of 5 and 80 sedimenting spheres a good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.
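A minimal sketch of the PIEP bookkeeping summarized in this abstract: the undisturbed velocity seen by a particle is the macroscale flow plus a pairwise sum of neighbor-induced perturbations, after which a force law is evaluated. The analytic wake-deficit map and the Stokes drag closure below are crude placeholders for the stored perturbation maps and the Faxén-form force relation of Akiki et al.

import numpy as np

def neighbor_perturbation(r_rel, U_macro, radius):
    # Placeholder for a precomputed PIEP map: a decaying, wake-like velocity deficit
    # downstream of each neighbor. The real model interpolates maps built from
    # resolved simulations; this analytic form is only illustrative.
    r = max(float(np.linalg.norm(r_rel)), 2.0 * radius)
    cos_theta = float(np.dot(r_rel, U_macro)) / (np.linalg.norm(U_macro) * r)
    return -0.3 * U_macro * (radius / r) ** 2 * max(cos_theta, 0.0)

def piep_force(x_p, neighbors, U_macro, radius, mu, d):
    # Undisturbed fluid velocity at the particle = macroscale flow
    # + pairwise superposition of neighbor-induced perturbations.
    u_seen = U_macro.copy()
    for x_n in neighbors:
        u_seen += neighbor_perturbation(x_p - x_n, U_macro, radius)
    # Quasi-steady Stokes drag as a stand-in for the Faxen-form force relation.
    return 3.0 * np.pi * mu * d * u_seen

U = np.array([0.1, 0.0, 0.0])                  # macroscale flow, m/s
particle = np.array([0.0, 0.0, 0.0])
neighbors = [np.array([-0.004, 0.0005, 0.0]),  # upstream neighbor
             np.array([0.0, 0.003, 0.0])]      # lateral neighbor
print("force with neighbors   :", piep_force(particle, neighbors, U, 0.001, 1e-3, 0.002))
print("force without neighbors:", piep_force(particle, [], U, 0.001, 1e-3, 0.002))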
The structure of evaporating and combusting sprays: Measurements and predictions
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.
1982-01-01
An apparatus was constructed to provide measurements in open sprays with no zones of recirculation, in order to provide well-defined conditions for use in evaluating spray models. Measurements were completed in a gas jet, in order to test experimental methods, and are currently in progress for nonevaporating sprays. A locally homogeneous flow (LHF) model where interphase transport rates are assumed to be infinitely fast; a separated flow (SF) model which allows for finite interphase transport rates but neglects effects of turbulent fluctuations on drop motion; and a stochastic SF model which considers effects of turbulent fluctuations on drop motion were evaluated using existing data on particle-laden jets. The LHF model generally overestimates rates of particle dispersion while the SF model underestimates dispersion rates. The stochastic SF model yields satisfactory predictions except at high particle mass loadings, where effects of turbulence modulation may have caused the model to overestimate turbulence levels.
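A minimal sketch of the stochastic SF idea referred to here: each drop sees the mean gas velocity plus a random eddy fluctuation sampled from the local turbulence energy and held for an eddy interaction time. The turbulence values, eddy-lifetime constant and linear drag response below are illustrative assumptions, not the closures of the original model.

import numpy as np

rng = np.random.default_rng(0)

def eddy_velocity(k_turb):
    # Isotropic Gaussian fluctuation with variance 2k/3 per component
    return rng.normal(0.0, np.sqrt(2.0 * k_turb / 3.0), size=3)

def track_drop(u_mean, k_turb, eps_turb, tau_p, t_end, dt):
    # Stochastic separated-flow tracking: the drop relaxes toward the instantaneous
    # gas velocity (mean + eddy) with response time tau_p, and a new eddy is sampled
    # after each eddy lifetime (taken here as 0.3*k/eps, an illustrative choice).
    x, v = np.zeros(3), u_mean.copy()
    t, t_eddy_end, u_eddy = 0.0, 0.0, np.zeros(3)
    while t < t_end:
        if t >= t_eddy_end:
            u_eddy = eddy_velocity(k_turb)
            t_eddy_end = t + 0.3 * k_turb / eps_turb
        v += (u_mean + u_eddy - v) / tau_p * dt        # linear (Stokes) drag response
        x += v * dt
        t += dt
    return x

ends = np.array([track_drop(np.array([10.0, 0.0, 0.0]), k_turb=1.5, eps_turb=50.0,
                            tau_p=5e-3, t_end=0.05, dt=1e-4) for _ in range(200)])
print("mean axial travel (m) :", round(float(ends[:, 0].mean()), 4))
print("radial spread, std (m):", round(float(ends[:, 1].std()), 4))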
A new MHD/kinetic model for exploring energetic particle production in macro-scale systems
NASA Astrophysics Data System (ADS)
Drake, J. F.; Swisdak, M.; Dahlin, J. T.
2017-12-01
A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and Betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and therefore can be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.
The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here, we present the design, testing, and analysis of data collected through the first instrument capable of measuring ...
Autophagy mediated CoCrMo particle-induced peri-implant osteolysis by promoting osteoblast apoptosis
Wang, Zhenheng; Liu, Naicheng; Liu, Kang; Zhou, Gang; Gan, Jingjing; Wang, Zhenzhen; Shi, Tongguo; He, Wei; Wang, Lintao; Guo, Ting; Bao, Nirong; Wang, Rui; Huang, Zhen; Chen, Jiangning; Dong, Lei; Zhao, Jianning; Zhang, Junfeng
2015-01-01
Wear particle-induced osteolysis is the leading cause of aseptic loosening, which is the most common reason for THA (total hip arthroplasty) failure and revision surgery. Although existing studies suggest that osteoblast apoptosis induced by wear debris is involved in aseptic loosening, the underlying mechanism linking wear particles to osteoblast apoptosis remains almost totally unknown. In the present study, we investigated the effect of autophagy on osteoblast apoptosis induced by CoCrMo metal particles (CoPs) in vitro and in a calvarial resorption animal model. Our study demonstrated that CoPs stimulated autophagy in osteoblasts and PIO (particle-induced osteolysis) animal models. Both the autophagy inhibitor 3-MA (3-methyladenine) and siRNA against Atg5 could dramatically reduce CoPs-induced apoptosis in osteoblasts. Further, inhibition of autophagy with 3-MA ameliorated the severity of osteolysis in PIO animal models. Moreover, 3-MA also prevented osteoblast apoptosis in an antiautophagic way when tested in the PIO model. Collectively, these results suggest that autophagy plays a key role in CoPs-induced osteolysis and that targeting autophagy-related pathways may represent a potential therapeutic approach for treating particle-induced peri-implant osteolysis. PMID:26566231
Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.
2018-02-16
In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
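A minimal sketch of the weight bookkeeping described for source and sink cells: at a sink, every particle's weight (the water volume it represents) shrinks in proportion to the water removed, so solute leaves at each particle's own concentration; at a weak source, the injected water and solute mass are spread over the particles already in the cell. The uniform distribution rules and variable names below are simplified assumptions, not the MODFLOW-GWT implementation.

import numpy as np

def apply_sink(weights, concs, V_removed):
    # Reduce every particle's water volume (weight) by the same fraction of the cell
    # volume removed; solute leaves with the water at each particle's own concentration,
    # so concentrations are unchanged and total mass stays consistent with the outflow.
    factor = max(0.0, 1.0 - V_removed / weights.sum())
    return weights * factor, concs

def apply_weak_source(weights, concs, V_added, c_source):
    # Weak source: distribute the injected water and solute mass over the particles
    # already in the cell (no new particles created), proportional to their weights.
    share = weights / weights.sum()
    mass = weights * concs + share * V_added * c_source
    weights = weights + share * V_added
    return weights, mass / weights

w = np.array([1.0, 2.0, 1.5])       # particle weights = water volumes they represent
c = np.array([10.0, 5.0, 8.0])      # particle concentrations

w1, c1 = apply_weak_source(w, c, V_added=0.9, c_source=50.0)
w2, c2 = apply_sink(w1, c1, V_removed=0.6)
print("total solute mass after source:", round(float(np.sum(w1 * c1)), 3))
print("total solute mass after sink  :", round(float(np.sum(w2 * c2)), 3))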
Comparison of particle tracking algorithms in commercial CFD packages: sedimentation and diffusion.
Robinson, Risa J; Snyder, Pam; Oldham, Michael J
2007-05-01
Computational fluid dynamic modeling software has enabled microdosimetry patterns of inhaled toxins and toxicants to be predicted and visualized, and is being used in inhalation toxicology and risk assessment. These predicted microdosimetry patterns in airway structures are derived from predicted airflow patterns within these airways and particle tracking algorithms used in computational fluid dynamics (CFD) software packages. Although these commercial CFD codes have been tested for accuracy under various conditions, they have not been well tested for respiratory flows in general. Nor has their particle tracking algorithm accuracy been well studied. In this study, three software packages, Fluent Discrete Phase Model (DPM), Fluent Fine Particle Model (FPM), and ANSYS CFX, were evaluated. Sedimentation and diffusion were each isolated in a straight tube geometry and tested for accuracy. A range of flow rates corresponding to adult low activity (minute ventilation = 10 L/min) and to heavy exertion (minute ventilation = 60 L/min) were tested by varying the range of dimensionless diffusion and sedimentation parameters found using the Weibel symmetric 23 generation lung morphology. Numerical results for fully developed parabolic and uniform (slip) profiles were compared respectively, to Pich (1972) and Yu (1977) analytical sedimentation solutions. Schum and Yeh (1980) equations for sedimentation were also compared. Numerical results for diffusional deposition were compared to analytical solutions of Ingham (1975) for parabolic and uniform profiles. Significant differences were found among the various CFD software packages and between numerical and analytical solutions. Therefore, it is prudent to validate CFD predictions against analytical solutions in idealized geometry before tackling the complex geometries of the respiratory tract.
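A minimal sketch of the dimensionless sedimentation and diffusion parameters used to span the airway generations in studies of this kind, built from the Stokes settling velocity and the Stokes-Einstein diffusivity. The airway length, radius and velocity below are rough placeholders rather than the Weibel values used in the paper, and the prefactors in the dimensionless groups vary between authors.

import numpy as np

kB = 1.380649e-23      # J/K
T = 310.0              # body temperature, K
mu = 1.8e-5            # air viscosity, Pa s
rho_p = 1000.0         # unit-density particle, kg/m^3
g = 9.81

def settling_velocity(d):
    # Stokes settling velocity (slip correction neglected for simplicity)
    return rho_p * g * d**2 / (18.0 * mu)

def diffusivity(d):
    # Stokes-Einstein diffusion coefficient
    return kB * T / (3.0 * np.pi * mu * d)

def dimensionless_parameters(d, L, R, U):
    # Common forms: sedimentation parameter ~ vs*t_res/(2*R), diffusion parameter
    # ~ D*t_res/(4*R^2), with t_res = L/U; exact prefactors differ between authors.
    t_res = L / U
    return settling_velocity(d) * t_res / (2.0 * R), diffusivity(d) * t_res / (4.0 * R**2)

# Rough stand-in for one conducting airway: length 1 cm, radius 1 mm, velocity 0.5 m/s
for d in (0.01e-6, 0.1e-6, 1e-6, 5e-6):
    sed, dif = dimensionless_parameters(d, L=0.01, R=0.001, U=0.5)
    print(f"d = {d*1e6:5.2f} um   sedimentation = {sed:9.2e}   diffusion = {dif:9.2e}")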
Optimized Non-Obstructive Particle Damping (NOPD) Treatment for Composite Honeycomb Structures
NASA Technical Reports Server (NTRS)
Panossian, H.
2008-01-01
Non-Obstructive Particle Damping (NOPD) technology is a passive vibration damping approach whereby metallic or non-metallic particles in spherical or irregular shapes, of heavy or light consistency, and even liquid particles are placed inside cavities or attached to structures by an appropriate means at strategic locations, to absorb vibration energy. The objective of the work described herein is the development of a design optimization procedure and discussion of test results for such a NOPD treatment on honeycomb (HC) composite structures, based on finite element modeling (FEM) analyses, optimization and tests. Modeling and predictions were performed and tests were carried out to correlate the test data with the FEM. The optimization procedure consisted of defining a global objective function, using finite difference methods, to determine the optimal values of the design variables through quadratic linear programming. The optimization process was carried out by targeting the highest dynamic displacements of several vibration modes of the structure and finding an optimal treatment configuration that will minimize them. An optimal design was thus derived and laboratory tests were conducted to evaluate its performance under different vibration environments. Three honeycomb composite beams, with Nomex core and aluminum face sheets, empty (untreated), uniformly treated with NOPD, and optimally treated with NOPD, according to the analytically predicted optimal design configuration, were tested in the laboratory. It is shown that the beam with optimal treatment has the lowest response amplitude. Described below are results of modal vibration tests and FEM analyses from predictions of the modal characteristics of honeycomb beams under zero, 50% uniform treatment and an optimal NOPD treatment design configuration and verification with test data.
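A minimal sketch of the optimization loop described: a global objective built from targeted peak modal displacements is minimized over cavity fill fractions with bound and budget constraints, using SciPy's SLSQP, which estimates gradients by finite differences when none are supplied. The modal-response surrogate, sensitivities and mass-budget constraint below are invented placeholders for the FEM.

import numpy as np
from scipy.optimize import minimize

def modal_displacements(fill):
    # Invented surrogate for the FEM: the peak displacements of three target modes
    # decrease as the particle fill of three candidate cavities increases.
    base = np.array([1.0, 0.8, 0.6])
    sensitivity = np.array([[0.5, 0.2, 0.1],
                            [0.1, 0.4, 0.2],
                            [0.2, 0.1, 0.3]])
    return base / (1.0 + sensitivity @ fill)

def objective(fill):
    # Global objective: sum of the targeted peak modal displacements
    return float(np.sum(modal_displacements(fill)))

res = minimize(objective, x0=np.zeros(3), method="SLSQP",     # gradient by finite differences
               bounds=[(0.0, 1.0)] * 3,                       # fill fraction of each cavity
               constraints=[{"type": "ineq",                  # illustrative added-mass budget
                             "fun": lambda x: 1.5 - x.sum()}])
print("optimal fill fractions:", np.round(res.x, 3), " objective:", round(float(res.fun), 4))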
Multiscale modeling of particle in suspension with smoothed dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Bian, Xin; Litvinov, Sergey; Qian, Rui; Ellero, Marco; Adams, Nikolaus A.
2012-01-01
We apply smoothed dissipative particle dynamics (SDPD) [Español and Revenga, Phys. Rev. E 67, 026705 (2003)] to model solid particles in suspension. SDPD is a thermodynamically consistent version of smoothed particle hydrodynamics (SPH) and can be interpreted as a multiscale particle framework linking the macroscopic SPH to the mesoscopic dissipative particle dynamics (DPD) method. Rigid structures of arbitrary shape embedded in the fluid are modeled by frozen particles on which artificial velocities are assigned in order to satisfy exactly the no-slip boundary condition on the solid-liquid interface. The dynamics of the rigid structures is decoupled from the solvent by solving extra equations for the rigid body translational/angular velocities derived from the total drag/torque exerted by the surrounding liquid. The correct scaling of the SDPD thermal fluctuations with the fluid-particle size allows us to describe the behavior of the particle suspension on spatial scales ranging continuously from the diffusion-dominated regime typical of sub-micron-sized objects towards the non-Brownian regime characterizing macro-continuum flow conditions. Extensive tests of the method are performed for two- and three-dimensional bulk particle systems in both Brownian and non-Brownian environments, showing numerical convergence and excellent agreement with analytical theories. Finally, to illustrate the ability of the model to couple with external boundary geometries, the effect of confinement on the diffusional properties of a single sphere within a micro-channel is considered, and the dependence of the diffusion coefficient on the wall-separation distance is evaluated and compared with available analytical results.
Reactive multi-particle collision dynamics with reactive boundary conditions
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Rohlf, Katrin
2018-07-01
In the present study, an off-lattice particle-based method called the reactive multi-particle collision (RMPC) dynamics is extended to model reaction-diffusion systems with reactive boundary conditions in which the a priori diffusion coefficient of the particles needs to be maintained throughout the simulation. To this end, the authors have made use of the so-called bath particles whose purpose is only to ensure proper diffusion of the main particles in the system. In order to model partial adsorption by a reactive boundary in the RMPC, the probability of a particle being adsorbed, once it hits the boundary, is calculated by drawing an analogy between the RMPC and Brownian Dynamics. The main advantages of the RMPC compared to other molecular based methods are less computational cost as well as conservation of mass, energy and momentum in the collision and free streaming steps. The proposed approach is tested on three reaction-diffusion systems and very good agreement with the solutions to their corresponding partial differential equations is observed.
Reducing the anisotropy of a Brazilian disc generated in a bonded-particle model
NASA Astrophysics Data System (ADS)
Zhang, Q.; Zhang, X. P.; Ji, P. Q.
2018-03-01
The Brazilian test is a widely used method for determining the tensile strength of rocks and for calibrating parameters in bonded-particle models (BPMs). In previous studies, the Brazilian disc has typically been trimmed from a compacted rectangular specimen. The present study shows that different tensile strength values are obtained depending on the compressive loading direction. Several measures are proposed to reduce the anisotropy of the disc. The results reveal that the anisotropy of the disc is significantly influenced by the compactibility of the specimen from which it is trimmed. A new method is proposed in which the Brazilian disc is directly generated with a particle boundary, effectively reducing the anisotropy. The stiffness (particle and bond) and strength (bond) of the boundary are set at less than and greater than those of the disc assembly, respectively, which significantly decreases the stress concentration at the boundary contacts and prevents breakage of the boundary particle bonds. This leads to a significant reduction in the anisotropy of the disc and the discreteness of the tensile strength. This method is more suitable for carrying out a realistic Brazilian test for homogeneous rock-like material in the BPM.
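For reference, the quantity being calibrated here is the indirect tensile strength of the Brazilian disc, σ_t = 2P/(πDt). The sketch below evaluates it for a trimmed disc loaded in two directions and for a directly generated disc; the peak loads and dimensions are placeholder numbers used only to show how the anisotropy would be quantified.

import numpy as np

def brazilian_tensile_strength(P_peak, D, t):
    # Standard indirect tensile strength of a Brazilian disc: sigma_t = 2*P / (pi*D*t)
    return 2.0 * P_peak / (np.pi * D * t)

D, t = 0.05, 0.025                 # disc diameter and thickness, m (placeholders)
# Peak loads for discs trimmed from a compacted rectangular specimen and loaded
# parallel / perpendicular to the compaction direction, plus a disc generated
# directly with a particle boundary. All values are illustrative placeholders.
loads = {"trimmed, loaded parallel     ": 10.2e3,
         "trimmed, loaded perpendicular": 8.1e3,
         "directly generated disc      ": 9.3e3}

strengths = {name: brazilian_tensile_strength(P, D, t) for name, P in loads.items()}
for name, s in strengths.items():
    print(f"{name}  sigma_t = {s / 1e6:5.2f} MPa")
s_par = strengths["trimmed, loaded parallel     "]
s_per = strengths["trimmed, loaded perpendicular"]
print(f"anisotropy of trimmed disc: {200.0 * abs(s_par - s_per) / (s_par + s_per):.1f} %")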
Nonlinear ball chain waveguides for acoustic emission and ultrasound sensing of ablation
NASA Astrophysics Data System (ADS)
Pearson, Stephen H.
Harsh environment acoustic emission and ultrasonic wave sensing applications often benefit from placing the sensor in a remote and more benign physical location by using waveguides to transmit elastic waves between the structural location under test and the transducer. Waveguides are normally designed to have high fidelity over broad frequency ranges to minimize distortion -- often difficult to achieve in practice. This thesis reports on an examination of using nonlinear ball chain waveguides for the transmission of acoustic emission and ultrasonic waves for the monitoring of thermal protection systems undergoing severe heat loading, leading to ablation and similar processes. Experiments test the nonlinear propagation of solitary, harmonic and mixed harmonic elastic waves through a copper tube filled with steel and elastomer balls and various other waveguides. Triangulation of pencil lead breaks occurs on a steel plate. Data are collected concerning the usage of linear waveguides and a water-cooled linear waveguide. Data are collected from a second water-cooled waveguide monitoring Atmospheric Reentry Materials in UVM's Inductively-Coupled Plasma Torch Facility. The motion of the particles in the dimer waveguides is linearly modeled with a three ball and spring chain model and the results are compared per particle. A theoretical nonlinear model is presented which is capable of exactly modeling the motion of the dimer chains. The shape of the waveform propagating through the dimer chain is modeled in a sonic vacuum. Mechanical pulses of varying time widths and amplitudes are launched into one end of the ball chain waveguide and observed at the other end in both time and frequency domains. Similarly, harmonic and mixed harmonic mechanical loads are applied to one end of the waveguide. Balls of different materials are analyzed and discriminated into categories. A copper tube packed with six steel particles, nine steel or marble particles and a longer copper tube packed with 17 steel particles are studied with a frequency sweep. The deformation experienced by a single steel particle in the dimer chain is approximated. Steel ball waveguides and steel rods are fitted with piezoelectric sensors to monitor the force at different points inside the waveguide during testing. The corresponding frequency responses, including intermodulation products, are compared based on amplitude and preloads. A nonlinear mechanical model describes the motion of the dimer chains in a vacuum. Based on the results of these studies it is anticipated that a nonlinear waveguide will be designed, built, and tested as a possible replacement for the high-fidelity waveguides presently being used in an Inductively Coupled Plasma Torch facility for high heat flux thermal protection system testing. The design is intended to accentuate acoustic emission signals of interest, while suppressing other forms of elastic wave noise.
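A minimal sketch of the linear three-ball-and-spring chain mentioned in the thesis: the natural frequencies and mode shapes follow from the generalized eigenproblem K·x = ω²·M·x. The alternating steel/elastomer masses and spring stiffnesses below are illustrative placeholders, not the values used for the dimer waveguide.

import numpy as np

# Free-free three-mass chain as a stand-in for the thesis's three-ball-and-spring model.
# Masses alternate steel/elastomer/steel; the values are illustrative placeholders.
m = np.array([4.0e-3, 0.5e-3, 4.0e-3])          # kg
k = np.array([2.0e5, 2.0e5])                    # N/m, springs between neighbouring balls

K = np.array([[ k[0],        -k[0],       0.0],
              [-k[0],  k[0] + k[1],     -k[1]],
              [  0.0,        -k[1],      k[1]]])

# Solve K x = w^2 M x in mass-normalized coordinates so that eigh applies directly
Minv_half = np.diag(1.0 / np.sqrt(m))
w2, modes = np.linalg.eigh(Minv_half @ K @ Minv_half)
freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
print("natural frequencies (Hz):", np.round(freqs_hz, 1))   # first mode is the rigid-body mode
print("mode shapes (columns):")
print(np.round(Minv_half @ modes, 3))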
Volkmann, Niels
2004-01-01
Reduced representation templates are used in a real-space pattern matching framework to facilitate automatic particle picking from electron micrographs. The procedure consists of five parts. First, reduced templates are constructed either from models or directly from the data. Second, a real-space pattern matching algorithm is applied using the reduced representations as templates. Third, peaks are selected from the resulting score map using peak-shape characteristics. Fourth, the surviving peaks are tested for distance constraints. Fifth, a correlation-based outlier screening is applied. Test applications to a data set of keyhole limpet hemocyanin particles indicate that the method is robust and reliable.
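A minimal sketch of steps two and three of this procedure: cross-correlate a reduced template with the micrograph and pick peaks from the resulting score map. An FFT-based correlation and a simple local-maximum picker stand in for the real-space matcher and the peak-shape, distance and outlier tests; the synthetic image and template are placeholders.

import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def score_map(image, template):
    # Cross-correlation of a zero-mean template with the micrograph
    # (an FFT-based stand-in for the paper's real-space matcher).
    t = template - template.mean()
    return fftconvolve(image, t[::-1, ::-1], mode="same")

def pick_peaks(score, n_peaks, min_dist):
    # Keep local maxima within a neighbourhood of size min_dist, highest score first
    local_max = score == maximum_filter(score, size=min_dist)
    ys, xs = np.nonzero(local_max)
    order = np.argsort(score[ys, xs])[::-1][:n_peaks]
    return list(zip(ys[order], xs[order]))

rng = np.random.default_rng(2)
template = np.zeros((15, 15)); template[4:11, 4:11] = 1.0      # reduced "blob" template
image = rng.normal(0.0, 0.5, size=(200, 200))
for cy, cx in [(50, 60), (120, 40), (160, 150)]:               # planted particle positions
    image[cy - 7:cy + 8, cx - 7:cx + 8] += template

print(pick_peaks(score_map(image, template), n_peaks=3, min_dist=15))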
NASA Astrophysics Data System (ADS)
Shaposhnikov, Dmitry S.; Rodin, Alexander V.; Medvedev, Alexander S.; Fedorova, Anna A.; Kuroda, Takeshi; Hartogh, Paul
2018-02-01
We present a new implementation of the hydrological cycle scheme into a general circulation model of the Martian atmosphere. The model includes a semi-Lagrangian transport scheme for water vapor and ice and accounts for microphysics of phase transitions between them. The hydrological scheme includes processes of saturation, nucleation, particle growth, sublimation, and sedimentation under the assumption of a variable size distribution. The scheme has been implemented into the Max Planck Institute Martian general circulation model and tested assuming monomodal and bimodal lognormal distributions of ice condensation nuclei. We present a comparison of the simulated annual variations, horizontal and vertical distributions of water vapor, and ice clouds with the available observations from instruments on board Mars orbiters. The accounting for bimodality of aerosol particle distribution improves the simulations of the annual hydrological cycle, including predicted ice clouds mass, opacity, number density, and particle radii. The increased number density and lower nucleation rates bring the simulated cloud opacities closer to observations. Simulations show a weak effect of the excess of small aerosol particles on the simulated water vapor distributions.
Fluctuations, Stratification and Stability in a Liquid Fluidized Bed at Low Reynolds Number
NASA Technical Reports Server (NTRS)
Segre, P. N.; McClymer, J. P.
2004-01-01
The sedimentation dynamics of extremely low polydispersity, non-colloidal, particles are studied in a liquid fluidized bed at low Reynolds number, Re much less than 1. When fluidized, the system reaches a steady state, defined where the local average volume fraction does not vary in time. In steady state, the velocity fluctuations and the particle concentrations are found to strongly depend on height. Using our results, we test a recently developed stability model for steady state sedimentation. The model describes the data well, and shows that in steady state there is a balancing of particle fluxes due to the fluctuations and the concentration gradient. Some results are also presented for the dependence of the concentration gradient in fluidized beds on particle size; the gradients become smaller as the particles become larger and fewer in number.
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10⁻⁶ m to 70 × 10⁻⁶ m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions through buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model
NASA Astrophysics Data System (ADS)
Dordevich, Milorad C. W.
This thesis documents the development, implementation and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT simulating the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also serves to document how well the Discrete Ordinates Radiation Model results compared with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. The secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace Method software package known as VEGAS, which was specifically developed to model lab scale solar simulators and provide directional, flux and beam spread information for the aperture boundary condition, was also a goal of this study. Upon establishment of the model, test cases were run to understand the predictive capabilities of the model. It was shown that agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory with the metrics of comparison being the thermal efficiency and outlet, wall and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was very dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver was important in attaining optimal efficiency due to the fact that buoyancy induced effects could not be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver. The lab-scale small particle heat exchange receiver is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth and a nominal mass flow rate of 2 g/s with a suspended particle mass loading of 2 g/m3. Finally, based on acumen gained during the implementation and development of the model, a new and improved design was simulated to predict how the efficiency within the small particle heat exchange receiver could be improved through a few simple internal geometry design modifications. It was shown that the theoretical calculated efficiency of the small particle heat exchange receiver could be improved from 64% to 87% with adjustments to the internal geometry, mass flow rate, and mass loading.
Stable thermophoretic trapping of generic particles at low pressures
NASA Astrophysics Data System (ADS)
Fung, Long Fung Frankie
2017-04-01
We demonstrate levitation and three-dimensionally stable trapping of a wide variety of particles in medium vacuum through thermophoresis. Typical sizes of the trapped particles are between 10 μm and 1 mm; air pressure is between 1 and 10 Torr. We describe the experimental setup used to produce the temperature gradient, as well as our procedure for introducing particles into the experimental setup. To determine the levitation force and test various theoretical models, we examine the levitation heights of spherical polyethylene spheres under various conditions. A good agreement with two theoretical models is concluded. Our system offers a platform to discover various thermophoretic phenomena and to simulate dynamics of interacting many-body systems in a microgravity environment. NSF MRSEC Grant No. DMR-1420709.
Kinetics of the chiral phase transition in a linear σ model
NASA Astrophysics Data System (ADS)
Wesp, Christian; van Hees, Hendrik; Meistrenko, Alex; Greiner, Carsten
2018-02-01
We study the dynamics of the chiral phase transition in a linear quark-meson σ model using a novel approach based on semiclassical wave-particle duality. The quarks are treated as test particles in a Monte Carlo simulation of elastic collisions and the coupling to the σ meson, which is treated as a classical field, via a kinetic approach motivated by wave-particle duality. The exchange of energy and momentum between particles and fields is described in terms of appropriate Gaussian wave packets. It has been demonstrated that energy-momentum conservation and the principle of detailed balance are fulfilled, and that the dynamics leads to the correct equilibrium limit. First schematic studies of the dynamics of matter produced in heavy-ion collisions are presented.
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
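A minimal sketch of how running diffusion coefficients and distribution functions are typically extracted from test-particle data: d_xx(t) = <(Δx)²>/(2t), with the late-time histogram compared against a Gaussian of the same variance. An uncorrelated random walk stands in here for the full trajectory integration in turbulent fields.

import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the test-particle trajectories: an uncorrelated random walk with a
# prescribed step variance replaces the integration in the turbulent magnetic field.
n_particles, n_steps, dt, step_sigma = 1000, 2000, 1.0, 0.7
x = np.cumsum(rng.normal(0.0, step_sigma, size=(n_particles, n_steps)), axis=1)
t = dt * np.arange(1, n_steps + 1)

# Running diffusion coefficient d_xx(t) = <(Delta x)^2> / (2 t)
d_xx = np.mean(x**2, axis=0) / (2.0 * t)
print("late-time d_xx:", round(float(d_xx[-1]), 3),
      " (expected", step_sigma**2 / (2.0 * dt), "for this walk)")

# Compare the late-time particle distribution with a Gaussian of the same variance
var = float(np.var(x[:, -1]))
hist, edges = np.histogram(x[:, -1], bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-centers**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
print("max |f_sim - f_gauss|:", round(float(np.abs(hist - gauss).max()), 4))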
A method for grindability testing using the Scirocco disperser.
Bonakdar, Tina; Ali, Muzammil; Dogbe, Selasi; Ghadiri, Mojtaba; Tinke, Arjen
2016-03-30
In the early stages of development of a new Active Pharmaceutical Ingredient (API), insufficient material quantity is available for addressing processing issues, and it is highly desirable to be able to assess processability issues using the smallest possible powder sample quantity. A good example is milling of new active pharmaceutical ingredients. For particle breakage that is sensitive to strain rate, impact testing is the most appropriate method. However, there is no commercially available single particle impact tester for fine particulate solids. In contrast, dry powder dispersers, such as the Scirocco disperser of the Malvern Mastersizer 2000, are widely available, and can be used for this purpose, provided particle impact velocity is known. However, the distance within which the particles can accelerate before impacting on the bend is very short and different particle sizes accelerate to different velocities before impact. As the breakage is proportional to the square of impact velocity, the interpretation of breakage data is not straightforward and requires an analysis of particle velocity as a function of size, density and shape. We report our work using an integrated experimental and CFD modelling approach to evaluate the suitability of this device as a grindability testing device, with the particle sizing being done immediately following dispersion by laser diffraction. Aspirin, sucrose and α-lactose monohydrate are tested using narrow sieve cuts in order to minimise variations in impact velocity. The tests are carried out at eight different air nozzle pressures. As intuitively expected, smaller particles accelerate faster and impact the wall at a higher velocity compared to the larger particles. However, for a given velocity the extent of breakage of larger particles is larger. Using a numerical simulation based on CFD, the relationship between impact velocity and particle size and density has been established assuming a spherical shape, and using one-way coupling, as the particle concentration is very low. Taking account of these dependencies, a clear unification of the change in the specific surface area as a function of particle size, density and impact velocity is observed, and the slope of the fitted line gives a measure of grindability for each material. The trend of data obtained here matches the one obtained by single particle impact testing. Hence aerodynamic dispersion of solids by the Scirocco disperser can be used to evaluate the ease of grindability of different materials.
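A minimal sketch of the unification described in the final sentences: if breakage scales with ρ·d·v², plotting the change in specific surface area against that group collapses the sieve cuts and nozzle pressures onto one line whose slope is the grindability. The density, sizes, velocities and the synthetic breakage data below are placeholders, not the measured aspirin, sucrose or lactose results.

import numpy as np

rng = np.random.default_rng(4)

# Synthetic "measurements": change in specific surface area after dispersion at several
# nozzle pressures, for three sieve cuts. The values only illustrate the collapse
# Delta(SSA) ~ alpha * rho * d * v**2 and are not real data.
rho = 1580.0                                   # particle density, kg/m^3 (placeholder)
alpha_true = 2.0e-6                            # "true" grindability used to generate the data
sizes = np.array([63e-6, 90e-6, 125e-6])       # sieve-cut mean sizes, m
velocities = np.linspace(30.0, 150.0, 8)       # impact velocities from CFD, m/s

x, y = [], []
for d in sizes:
    for v in velocities:
        group = rho * d * v**2
        x.append(group)
        y.append(alpha_true * group * (1.0 + rng.normal(0.0, 0.05)))
x, y = np.array(x), np.array(y)

# Grindability = slope of the least-squares line through the origin
grindability = float(np.sum(x * y) / np.sum(x * x))
print("fitted grindability:", grindability, " (data generated with", alpha_true, ")")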
Diffusion rate limitations in actin-based propulsion of hard and deformable particles.
Dickinson, Richard B; Purich, Daniel L
2006-08-15
The mechanism by which actin polymerization propels intracellular vesicles and invasive microorganisms remains an open question. Several recent quantitative studies have examined propulsion of biomimetic particles such as polystyrene microspheres, phospholipid vesicles, and oil droplets. In addition to allowing quantitative measurement of parameters such as the dependence of particle speed on its size, these systems have also revealed characteristic behaviors such as saltatory motion of hard particles and oscillatory deformation of soft particles. Such measurements and observations provide tests for proposed mechanisms of actin-based motility. In the actoclampin filament end-tracking motor model, particle-surface-bound filament end-tracking proteins are involved in load-insensitive processive insertion of actin subunits onto elongating filament plus-ends that are persistently tethered to the surface. In contrast, the tethered-ratchet model assumes working filaments are untethered and the free-ended filaments grow as thermal ratchets in a load-sensitive manner. This article presents a model for the diffusion and consumption of actin monomers during actin-based particle propulsion to predict the monomer concentration field around motile particles. The results suggest that the various behaviors of biomimetic particles, including dynamic saltatory motion of hard particles and oscillatory vesicle deformations, can be quantitatively and self-consistently explained by load-insensitive, diffusion-limited elongation of (+)-end-tethered actin filaments, consistent with predictions of the actoclampin filament end-tracking mechanism.
Explosive particle soil surface dispersion model for detonated military munitions.
Hathaway, John E; Rishel, Jeremy P; Walsh, Marianne E; Walsh, Michael R; Taylor, Susan
2015-07-01
The accumulation of high explosive mass residue from the detonation of military munitions on training ranges is of environmental concern because of its potential to contaminate the soil, surface water, and groundwater. The US Department of Defense wants to quantify, understand, and remediate high explosive mass residue loadings that might be observed on active firing ranges. Previously, efforts using various sampling methods and techniques have resulted in limited success, due in part to the complicated dispersion pattern of the explosive particle residues upon detonation. In our efforts to simulate particle dispersal for high- and low-order explosions on hypothetical firing ranges, we use experimental particle data from detonations of munitions from a 155-mm howitzer, which are common military munitions. The mass loadings resulting from these simulations provide a previously unattained level of detail to quantify the explosive residue source-term for use in soil and water transport models. In addition, the resulting particle placements can be used to test, validate, and optimize particle sampling methods and statistical models as applied to firing ranges. Although the presented results are for a hypothetical 155-mm howitzer firing range, the method can be used for other munition types once the explosive particle characteristics are known.
A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina
2010-08-26
In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.
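A minimal one-dimensional sketch of the general idea, not the paper's formulation: SPH diffusion with a Robin-type surface reaction imposed as a volumetric sink weighted by a smoothed surface delta function (the continuum-surface-force spirit). The Gaussian kernel, the simple near-wall delta function, and all parameter values are assumptions, and kernel truncation at the walls is ignored.

```python
import numpy as np

# 1D sketch: SPH diffusion dC/dt = D * d2C/dx2 with a Robin condition
# D*dC/dn = -k_s*C at the right wall, applied as a volumetric sink weighted by a
# smoothed surface "delta" function (here simply the particles within one spacing
# of the wall; the paper derives it from a color-function gradient instead).
N, L, D, k_s = 200, 1.0, 1e-3, 5e-2
dx = L / N
x = (np.arange(N) + 0.5) * dx
h = 1.5 * dx                                   # smoothing length
C = np.ones(N)                                 # initial concentration

def gaussian_w(r, h):
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))   # 1D Gaussian kernel

delta_s = np.where(x > L - dx, 1.0 / dx, 0.0)  # crude surface delta function (1/m)

dt = 0.2 * dx**2 / D
for step in range(2000):
    lap = np.zeros(N)
    for i in range(N):
        r = np.abs(x - x[i])
        w = gaussian_w(r, h)
        # Brookshaw-type SPH Laplacian; with a 1D Gaussian kernel it reduces to
        # (4/h^2) * sum_j V_j (C_j - C_i) W_ij (interior-accurate only)
        lap[i] = (4.0 / h**2) * np.sum((C - C[i]) * w) * dx
    C += dt * (D * lap - k_s * C * delta_s)    # diffusion + Robin surface sink

print("near-wall concentration after surface reaction:", C[-1])
```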
Modelling compressible dense and dilute two-phase flows
NASA Astrophysics Data System (ADS)
Saurel, Richard; Chinnayya, Ashwin; Carmouze, Quentin
2017-06-01
Many two-phase flow situations, from engineering science to astrophysics, deal with transition from dense (high concentration of the condensed phase) to dilute concentration (low concentration of the same phase), covering the entire range of volume fractions. Some models are now well accepted at the two limits, but none are able to cover the entire range accurately, in particular regarding wave propagation. In the present work, an alternative to the Baer and Nunziato (BN) model [Baer, M. R. and Nunziato, J. W., "A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861 (1986)], initially designed for dense flows, is built. The corresponding model is hyperbolic and thermodynamically consistent. In contrast to the BN model, which involves 6 wave speeds, the new formulation involves only 4 waves, in agreement with the Marble model [Marble, F. E., "Dynamics of a gas containing small solid particles," Combustion and Propulsion (5th AGARD Colloquium) (Pergamon Press, 1963), Vol. 175] based on pressureless Euler equations for the dispersed phase, a well-accepted model for low particle volume concentrations. In the new model, the presence of pressure in the momentum equation of the particles and consideration of volume fractions in the two phases render the model valid for large particle concentrations. A symmetric version of the new model is derived as well for liquids containing gas bubbles. This model version also involves 4 characteristic wave speeds, but with different velocities. Lastly, the two sub-models with 4 waves are combined in a unique formulation, valid for the full range of volume fractions. It involves the same 6 wave speeds as the BN model, but at a given point of space only 4 waves emerge, depending on the local volume fractions. The non-linear pressure waves propagate only in the phase with dominant volume fraction. The new model is tested numerically on various test problems ranging from separated phases in a shock tube to shock-particle cloud interaction. Its predictions are compared to the BN and Marble models as well as against experimental data, showing clear improvements.
Holmén, Britt A; Qu, Yingge
2004-04-15
The relationships between transient vehicle operation and ultrafine particle emissions are not well-known, especially for low-emission alternative bus technologies such as compressed natural gas (CNG) and diesel buses equipped with particulate filters/traps (TRAP). In this study, real-time particle number concentrations measured on a nominal 5 s average basis using an electrical low pressure impactor (ELPI) for these two bus technologies are compared to that of a baseline catalyst-equipped diesel bus operated on ultralow sulfur fuel (BASE) using dynamometer testing. Particle emissions were consistently 2 orders of magnitude lower for the CNG and TRAP compared to BASE on all driving cycles. Time-resolved total particle numbers were examined in terms of sampling factors identified as affecting the ability of ELPI to quantify the particulate matter number emissions for low-emitting vehicles such as CNG and TRAP as a function of vehicle driving mode. Key factors were instrument sensitivity and dilution ratio, alignment of particle and vehicle operating data, sampling train background particles, and cycle-to-cycle variability due to vehicle, engine, after-treatment, or driver behavior. In-cycle variability on the central business district (CBD) cycle was highest for the TRAP configuration, but this could not be attributed to the ELPI sensitivity issues observed for TRAP-IDLE measurements. Elevated TRAP emissions coincided with low exhaust temperature, suggesting on-road real-world particulate filter performance can be evaluated by monitoring exhaust temperature. Nonunique particle emission maps indicate that measures other than vehicle speed and acceleration are necessary to model disaggregated real-time particle emissions. Further testing on a wide variety of test cycles is needed to evaluate the relative importance of the time history of vehicle operation and the hysteresis of the sampling train/dilution tunnel on ultrafine particle emissions. Future studies should monitor particle emissions with high-resolution real-time instruments and account for the operating regime of the vehicle using time-series analysis to develop predictive number emissions models.
NASA Astrophysics Data System (ADS)
Lou, Wentao; Zhu, Miaoyong
2017-12-01
A coupled computational fluid dynamics-population balance model-simultaneous reaction model (CFD-PBM-SRM) has been proposed to study the multiphase flow behavior and refining reaction kinetics in a ladle with bottom powder injection, and some new and important phenomena and mechanisms are presented. For the multiphase flow behavior, the effects of bubbly plume flow, powder particle motion, particle-particle collision and growth, particle-bubble collision and adhesion, and powder particle removal into the top slag are considered. For the reaction kinetics, the mechanisms of multicomponent simultaneous reactions, including Al, S, Si, Mn, Fe, and O, at the multiple interfaces, including the top slag-liquid steel interface, air-liquid steel interface, powder droplet-liquid steel interface, and bubble-liquid steel interface, are presented, and the effect of sulfur solubility in the powder droplet on desulfurization is also taken into account. Model validation is carried out using hot tests in a 2-t induction furnace with bottom powder injection. The results show that the powder particles gradually disperse throughout the entire furnace; in the vicinity of the bottom slot plugs, the desulfurization product CaS is in the liquid phase, while in the upper region of the furnace, the desulfurization product CaS is in the solid phase. The sulfur contents predicted by the present model agree well with the measured data in the 2-t furnace with bottom powder injection.
NASA Astrophysics Data System (ADS)
Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.
2010-11-01
Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two aerosol components: one composed only of spherical particles and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust particles. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode, measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. It was shown that the developed algorithm provided reasonable results, consistent with the available independent information about the observed aerosol event.
Comparison of different methods used in integral codes to model coagulation of aerosols
NASA Astrophysics Data System (ADS)
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
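For context, a minimal reference solution of the discrete Smoluchowski coagulation equations with a constant kernel on a fine monomer-multiple grid is sketched below; it is the kind of benchmark against which coarser volume-doubling sectional schemes can be judged. The kernel value, concentrations, and grid are illustrative assumptions and do not correspond to any of the codes named above.

```python
import numpy as np

# Discrete Smoluchowski coagulation with a constant kernel on a fine grid.
# n[s-1] holds the number concentration of clusters containing s monomers.
K, N0 = 1e-15, 1e12          # kernel (m^3/s) and initial monomer conc. (1/m^3), assumed
nbins, dt, nsteps = 400, 50.0, 2000
n = np.zeros(nbins)
n[0] = N0

for step in range(nsteps):
    conv = np.convolve(n, n)             # sum_{i+j=s} n_i n_j sits at index s-2
    gain = np.zeros(nbins)
    gain[1:] = 0.5 * K * conv[:nbins - 1]
    loss = K * n * n.sum()               # collisions with all other clusters
    n += dt * (gain - loss)              # mass beyond the last bin is discarded

t = dt * nsteps
print("numerical total number:", n.sum())
print("analytic  total number:", N0 / (1.0 + 0.5 * K * N0 * t))
```

For a constant kernel the total number concentration has the analytic solution N(t) = N0/(1 + K N0 t/2), which the printout compares against (small differences come from truncating the size grid).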
The comet Halley meteoroid stream: just one more model
NASA Astrophysics Data System (ADS)
Ryabova, G. O.
2003-05-01
The present attempt to simulate the formation and evolution of the comet Halley meteoroid stream is based on a tentative physical model of dust ejection of large particles from comet Halley. Model streams consisting of 500-5000 test particles have been constructed according to the following ejection scheme. The particles are ejected from the nucleus along the cometary orbit (r < 9 au) within the sunward 70° cone, and the rate of ejection has been taken as proportional to r^-4. Two kinds of spherical particles have been considered: 1 and 0.001 g with density equal to 0.25 g cm^-3. Ejections have been simulated for 1404 BC, 141 AD and 837 AD. The equations of motion have been numerically integrated using the Everhart procedure. As a result, a complicated fine structure of the comet Halley meteoroid stream, consisting not of filaments but of layers, has been revealed.
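A sketch of the ejection scheme as described above: particles are released only where r < 9 au, with a rate taken proportional to r^-4 and directions drawn inside a sunward cone. The cone is treated here as a 70° half-angle, the orbital elements are approximate, and time-weighting along the orbit is ignored; all of these are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

# Monte Carlo sampling of ejection points and directions for a Halley-like orbit.
# Orbital elements are approximate; the r^-4 weighting is applied by rejection.
rng = np.random.default_rng(1)
a, e = 17.83, 0.967                       # semimajor axis (au), eccentricity (approx.)
r_peri = a * (1.0 - e)

def sample_ejections(n):
    out = []
    while len(out) < n:
        nu = rng.uniform(-np.pi, np.pi)               # true anomaly
        r = a * (1 - e**2) / (1 + e * np.cos(nu))
        if r >= 9.0:
            continue                                   # no ejection beyond 9 au
        if rng.random() > (r_peri / r) ** 4:           # rate ~ r^-4 (rejection step)
            continue
        cos_t = rng.uniform(np.cos(np.radians(70.0)), 1.0)   # uniform in solid angle
        phi = rng.uniform(0.0, 2 * np.pi)
        sin_t = np.sqrt(1 - cos_t**2)
        # ejection unit vector in a frame whose +z axis points sunward
        out.append((nu, r, np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])))
    return out

ejections = sample_ejections(5000)
print("example ejection (true anomaly, r [au], direction):", ejections[0])
```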
High Temperature Falling Particle Receiver (2012 - 2016) - Final DOE Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.
The objective of this work was to advance falling particle receiver designs for concentrating solar power applications that will enable higher temperatures (>700 °C) and greater power-cycle efficiencies (≥50% thermal-to-electric). Modeling, design, and testing of components in Phases 1 and 2 led to the successful on-sun demonstration in Phase 3 of the world’s first continuously recirculating high-temperature 1 MW_t falling particle receiver that achieved >700 °C particle outlet temperatures at mass flow rates ranging from 1-7 kg/s.
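A back-of-the-envelope consistency check of the quoted operating range: the particle temperature rise for ~1 MW_t of absorbed power at the stated mass flow rates, assuming a particle specific heat typical of sintered bauxite (an assumed value) and neglecting inlet temperature and losses.

```python
# Particle temperature rise dT = Q / (mdot * cp) for a ~1 MW_t receiver.
Q = 1.0e6          # absorbed thermal power, W
cp = 1200.0        # particle specific heat, J/(kg K)  (assumed value)
for mdot in (1.0, 3.0, 7.0):              # kg/s, the range quoted above
    dT = Q / (mdot * cp)
    print(f"mdot = {mdot:.0f} kg/s  ->  single-pass temperature rise ~ {dT:.0f} K")
```

The wide spread in single-pass temperature rise across 1-7 kg/s is consistent with the need for continuous recirculation to reach the >700 °C outlet temperatures mentioned above.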
Space radiation test model study. Report for 20 May 1985-20 February 1986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nightingale, R.W.; Chiu, Y.T.; Davidson, G.T.
1986-03-14
Dynamic models of the energetic populations in the outer radiation belts are being developed to better understand the extreme variations of particle flux in response to magnetospheric and solar activity. The study utilizes the SCATHA SC3 high-energy electron data, covering energies from 47 keV to 5 MeV with fine pitch-angle measurements (3 deg field of view) over the L-shell range of 5.3 to 8.7. Butterfly distributions in the dusk sector signify particle losses due to L shell splitting of the particle-drift orbits and the subsequent scattering of the particles from the orbits by the magnetopause. To model the temporal variations and diffusion processes of the particle populations, the data were organized into phase-space distributions, binned according to altitude (L shell), energy, pitch angle, and time. These distributions can then be mapped to the equator and plotted for fixed first and second adiabatic invariants of the inherent particle motion. A new and efficient method for calculating the third adiabatic invariant using a line integral of the relevant magnetic potential at the particle mirror points has been developed and is undergoing testing. This method will provide a useful means of displaying the radial diffusion signatures of the outer radiation belts during the more-active periods when the L shell parameter is not a good concept due to severe drift-shell splitting. The first phase of fitting the energetic-electron phase-space distributions with a combined radial and pitch-angle diffusion formulation is well underway. Bessel functions are being fit to the data in an eigenmode expansion method to determine the diffusion coefficients.
NASA Astrophysics Data System (ADS)
Meskhidze, N.; Royalty, T. M.; Phillips, B.; Dawson, K. W.; Petters, M. D.; Reed, R.; Weinstein, J.; Hook, D.; Wiener, R.
2017-12-01
The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here we present the design, testing, and analysis of data collected through the first instrument capable of measuring hygroscopicity-based, size-resolved particle fluxes using a continuous-flow Hygroscopicity-Resolved Relaxed Eddy Accumulation (Hy-Res REA) technique. The different components of the instrument were extensively tested inside the US Environmental Protection Agency's Aerosol Test Facility for sea-salt and ammonium sulfate particle fluxes. The new REA system design does not require particle accumulation and therefore avoids the diffusional wall losses associated with long residence times of particles inside the air collectors of the traditional REA devices. The Hy-Res REA system used in this study includes a 3-D sonic anemometer, two fast-response solenoid valves, two Condensation Particle Counters (CPCs), a Scanning Mobility Particle Sizer (SMPS), and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA). A linear relationship was found between the sea-salt particle fluxes measured by eddy covariance and REA techniques, with comparable theoretical (0.34) and measured (0.39) proportionality constants. The sea-salt particle detection limit of the Hy-Res REA flux system is estimated to be 6×10^5 m^-2 s^-1. For the conditions of ammonium sulfate and sea-salt particles of comparable source strength and location, the continuous-flow Hy-Res REA instrument was able to achieve better than 90% accuracy in measuring the sea-salt particle fluxes. In principle, the instrument can be applied to measure fluxes of particles of variable size and distinct hygroscopic properties (i.e., mineral dust, black carbon, etc.).
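The REA flux estimate referred to above follows F = b·sigma_w·(C_up − C_down); the sketch below applies it to synthetic sonic-anemometer and particle-counter series and compares it with an eddy-covariance reference. The synthetic data and noise levels are placeholders; only the form of the estimator and a b value near the measured 0.39 come from the description above.

```python
import numpy as np

# Relaxed eddy accumulation (REA) flux estimate F = b * sigma_w * (C_up - C_down),
# applied to synthetic vertical-wind and concentration series (toy data).
rng = np.random.default_rng(2)
n = 20 * 60 * 10                                   # 20 min at 10 Hz
w = rng.normal(0.0, 0.3, n)                        # vertical wind fluctuations, m/s
C = 5e6 + 2e5 * w / 0.3 + rng.normal(0, 5e4, n)    # particle conc., 1/m^3, correlated with w

b = 0.39                                           # empirical REA coefficient (from above)
sigma_w = w.std()
C_up = C[w > 0].mean()                             # updraft reservoir concentration
C_down = C[w < 0].mean()                           # downdraft reservoir concentration
F_rea = b * sigma_w * (C_up - C_down)
F_ec = np.mean(w * (C - C.mean()))                 # eddy-covariance reference
print(f"REA flux ~ {F_rea:.3e}  vs  EC flux ~ {F_ec:.3e}  (particles m^-2 s^-1)")
```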
NASA Astrophysics Data System (ADS)
Torbahn, Lutz; Weuster, Alexander; Handl, Lisa; Schmidt, Volker; Kwade, Arno; Wolf, Dietrich E.
2017-06-01
The interdependency of the structure and mechanical features of a cohesive powder packing is a current scientific focus and far from being well understood. Although the Discrete Element Method provides a well-applicable and widely used tool to model powder behavior, the non-trivial contact mechanics of micron-sized particles demand a sophisticated contact model. Here, a direct comparison between experiment and simulation on the particle level offers a proper approach for model validation. However, simulating a full-scale shear-tester experiment with micron-sized particles, and hence validating such a simulation, remains a challenge. We address this task by downscaling the experimental setup: a fully functional micro shear-tester was developed and implemented in an X-ray tomography device in order to visualize the sample on the bulk and particle level within small bulk volumes of the order of a few microliters under well-defined consolidation. Using spherical micron-sized particles (30 μm), shear tests with a particle number accessible to simulations can be performed. Moreover, particle-level analysis allows for a direct comparison of experimental and numerical results, e.g., regarding structural evolution. In this talk, we focus on density inhomogeneity and shear-induced heterogeneity during compaction and shear deformation.
NASA Technical Reports Server (NTRS)
Mason, G. M.; Ng, C. K.; Klecker, B.; Green, G.
1989-01-01
Impulsive solar energetic particle (SEP) events are studied to: (1) describe a distinct class of SEP ion events observed in interplanetary space, and (2) test models of focused transport through detailed comparisons of numerical model prediction with the data. An attempt will also be made to describe the transport and scattering properties of the interplanetary medium during the times these events are observed and to derive source injection profiles in these events. ISEE 3 and Helios 1 magnetic field and plasma data are used to locate the approximate coronal connection points of the spacecraft to organize the particle anisotropy data and to constrain some free parameters in the modeling of flare events.
Numerical simulation of failure behavior of granular debris flows based on flume model tests.
Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na
2013-01-01
In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and numerical simulations (PFC3D). Model tests revealed that grain size distribution had profound effects on the failure mode: the failure of the medium-sand slope started with cracks at the crest and took the form of retrogressive toe sliding. With the increase of fine particles in the soil, the failure mode of the slopes changed to fluidized flow. The discrete element method PFC3D can overcome the assumptions of traditional continuum mechanics and capture the discrete character of the particles. Thus, a numerical model using a coupled liquid-solid method has been developed to simulate the debris flow. Compared with the experimental results, the numerical simulations indicated that the failure mode of the medium-sand slope was retrogressive toe sliding, and the failure of the fine-sand slope was fluidized sliding. The simulation results are consistent with the model tests and theoretical analysis, and grain size distribution caused the different failure behaviors of granular debris flows. This research should help guide the development of debris flow theory and improve debris flow prevention and mitigation.
NASA Astrophysics Data System (ADS)
Guzmán, R. E.; Hernández Arroyo, E.
2016-02-01
The properties of particle-reinforced metal matrix composite materials (MMCs) can be affected by different events occurring within the material during the manufacturing process. The existence of residual stresses resulting from the manufacturing of these materials can markedly differentiate the curves obtained in tensile tests from those obtained in compression tests. One of the themes developed in this work is the influence of residual stresses on the mechanical behaviour of these materials. The objective of the research presented here is to numerically estimate the thermal residual stresses using a unit cell model for the Mg ZC71 alloy reinforced with SiC particles at a volume fraction of 12% (hot-forging technology). The MMC microstructure is represented as a three-dimensional cube-shaped prism with a cylindrical reinforcing particle located in the centre of the prism. Such cell models are widely used for predicting the stress/strain behaviour of MMC materials; in this analysis, the uniaxial stress/strain response of the composite is obtained through calculation with a commercial finite-element code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willitsford, Adam H.; Brown, David M.; Brown, Andrea M.
2014-08-28
Multi-wavelength laser transmittance was measured during a series of open-air propellant burn tests at Alliant Techsystems, Inc., in Elkton, MD, in May 2012. A Mie scattering model was combined with an alumina optical properties model in a simple single-scatter approach to fitting plume transmittance. Wavelength-dependent plume transmission curves were fit to the measured multi-wavelength transmittance data to infer plume particle size distributions at several heights in the plume. Tri-modal lognormal distributions described transmittance data well at all heights. Overall distributions included a mode with nanometer-scale diameter, a second mode at a diameter of ~0.5 µm, and a third, larger particle mode. Larger particles measured 2.5 µm in diameter at 34 cm (14 in.) above the burning propellant surface, but grew to 4 µm in diameter at a height of 57 cm (22 in.), indicative of particle agglomeration in progress as the plume rises. This report presents data, analysis, and results from the study.
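A sketch of the tri-modal lognormal description used above: three lognormal number modes with geometric mean diameters near the values quoted are combined, and the volume fraction carried by the large-particle mode is computed. The number fractions and geometric standard deviations are illustrative assumptions, not the retrieved distributions.

```python
import numpy as np

# Tri-modal lognormal number distribution dN/dlnD and the volume it carries.
D = np.logspace(-3, 1.5, 2000)                 # diameter grid, micrometers
modes = [                                       # (number fraction, Dg [um], sigma_g)
    (0.90, 0.05, 1.6),                          # nanometer-scale mode (assumed)
    (0.09, 0.5, 1.5),                           # ~0.5 um mode (assumed)
    (0.01, 2.5, 1.4),                           # large-particle mode, 2.5-4 um (assumed)
]

lnD = np.log(D)
dNdlnD = sum(
    f * np.exp(-((lnD - np.log(Dg)) ** 2) / (2 * np.log(sg) ** 2))
    / (np.sqrt(2 * np.pi) * np.log(sg))
    for f, Dg, sg in modes
)
dVdlnD = (np.pi / 6.0) * D**3 * dNdlnD          # volume distribution
V_total = np.trapz(dVdlnD, lnD)
frac_large = np.trapz(np.where(D > 1.0, dVdlnD, 0.0), lnD) / V_total
print("fraction of plume particle volume above 1 um:", frac_large)
```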
Transport and mixing in strongly coupled dusty plasma medium
NASA Astrophysics Data System (ADS)
Dharodi, Vikram; Das, Amita; Patel, Bhavesh
2016-10-01
The generalized hydrodynamic (GHD) fluid model has been employed to study the transport and mixing properties of a dusty plasma medium in the strong coupling limit. The response of the lighter electron and ion species to the dust motion is taken to be instantaneous, i.e., inertia-less. Thus the electron and ion densities are presumed to follow the Boltzmann relation. In the incompressible limit (i-GHD) the model supports transverse shear waves, in contrast to ordinary hydrodynamic fluids. It has been shown that the presence of these waves leads to better mixing of the fluid in this case. Several flow configurations have been considered for the study. The transport and mixing attributes have been quantified by studying the dynamical evolution of tracer particles in the system. The diffusion and clustering of these test particles are directly linked to the mixing characteristics of the medium. The displacement of these particles provides a quantitative estimate of the diffusion coefficient of the medium. It is shown that these test particles often organize themselves into spatially inhomogeneous patterns, leading to the phenomenon of clustering.
Acid-degradable polyurethane particles for protein-based vaccines
Bachelder, Eric M.; Beaudette, Tristan T.; Broaders, Kyle E.; Paramonov, Sergey E.; Dashe, Jesse; Fréchet, Jean M. J.
2009-01-01
Acid-degradable particles containing a model protein antigen, ovalbumin, were prepared from a polyurethane with acetal moieties embedded throughout the polymer, and characterized by dynamic light scattering and transmission electron microscopy. The small molecule degradation by-product of the particles was synthesized and tested in vitro for toxicity indicating an LC50 of 12,500 μg/ml. A new liquid chromatography-mass spectrometry technique was developed to monitor the in vitro degradation of these particles. The degradation by-product inside RAW macrophages was at its highest level after 24 hours of culture and was efficiently exocytosed until it was no longer detectable after four days. When tested in vitro, these particles induced a substantial increase in the presentation of the immunodominant ovalbumin-derived peptide SIINFEKL in both macrophages and dendritic cells. In addition, vaccination with these particles generated a cytotoxic T-lymphocyte response that was superior to both free ovalbumin and particles made from an analogous but slower-degrading acid-labile polyurethane polymer. Overall, we present a fully degradable polymer system with non-toxic by-products, which may find use in various biomedical applications including protein-based vaccines. PMID:18710254
Zhao, Yu; Wang, Fang; Zhao, Jianing
2015-10-20
Size-resolved deposition rates and Brownian coagulation of particles between 20 and 900 nm (mobility diameter) were estimated in a well-mixed environmental chamber from a gasoline vehicle exhaust with a total peak particle concentration of 10^5-10^6 particles/cm^3 at 12.24-25.22 °C. A deposition theory with modified friction velocity and a coagulation model was also employed to predict particle concentration decay. Initially during particle decay, approximately 85% or more of the particles had diameters of <100 nm. Particle deposition rates with standard deviations were highly dependent on particle size ranges, and varied from 0.012 ± 0.003 to 0.48 ± 0.02 h^-1. In the experiment, the friction velocity obtained was in the range 1.5-2.5 cm/s. The best-fitting fractal dimension and Hamaker constant in the coagulation model were 2.5-3 and 20 kT, respectively, and the contribution from coagulation dominated the total particle decay during the first 1 h of decay. It is considered that the modified friction velocity and best-fitted fractal dimension and Hamaker constants could be further used to analyze gasoline vehicle exhaust particle dynamics and assess human exposure to vehicle particle pollutants in urban areas, tunnels, and underground parking lots.
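The competition described above between wall deposition and Brownian coagulation can be written for the total number concentration as dN/dt = −beta·N − K·N²; the sketch below integrates this with a deposition rate inside the reported range and an assumed effective coagulation coefficient, and confirms that the coagulation term dominates at the initial concentration.

```python
# Total-number decay under deposition (beta*N) plus coagulation (K*N^2).
beta = 0.3 / 3600.0        # deposition rate, 1/s (within the reported 0.012-0.48 1/h)
K = 5e-10                  # effective coagulation coefficient, cm^3/s (assumed)
N = 1.0e6                  # initial concentration, particles/cm^3
dt, t_end, t = 1.0, 3600.0, 0.0   # 1 s steps over the first hour

ratio_start = K * N / beta         # coagulation/deposition rate ratio at t = 0
while t < t_end:
    N += -dt * (beta * N + K * N * N)
    t += dt
print(f"after 1 h: N ~ {N:.2e} cm^-3; initial coagulation/deposition ratio ~ {ratio_start:.1f}")
```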
Test of a Nb thin film superconducting detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacquaniti, V.; Maggi, S.; Menichetti, E.
1993-08-01
Results from tests of several Nb thin film microstrip superconducting detectors are reported. A preliminary measurement of critical radius of the hot spot generated by 5 MeV α-particles is compared with simple model predictions.
Spacecraft Fire Detection: Smoke Properties and Transport in Low-Gravity
NASA Technical Reports Server (NTRS)
Urban, David L.; Ruff, Gary A.; Brooker, John E.; Cleary, Thomas; Yang, Jiann; Mulholland, George; Yuan, Zeng-guang
2007-01-01
Results from a recent smoke particle size measurement experiment conducted on the International Space Station (ISS) are presented along with the results from a model of the transport of smoke in the ISS. The experimental results show that, for the materials tested, a substantial portion of the smoke particles are below 500 nm in diameter. The smoke transport model demonstrated that mixing dominates the smoke transport and that consequently detection times are longer than in normal gravity.
EFFECTS OF DYNAMICAL EVOLUTION OF GIANT PLANETS ON SURVIVAL OF TERRESTRIAL PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumura, Soko; Ida, Shigeru; Nagasawa, Makiko
2013-04-20
The orbital distributions of currently observed extrasolar giant planets allow marginally stable orbits for hypothetical, terrestrial planets. In this paper, we propose that many of these systems may not have additional planets on these "stable" orbits, since past dynamical instability among giant planets could have removed them. We numerically investigate the effects of early evolution of multiple giant planets on the orbital stability of the inner, sub-Neptune-like planets which are modeled as test particles, and determine their dynamically unstable region. Previous studies have shown that the majority of such test particles are ejected out of the system as a result of close encounters with giant planets. Here, we show that secular perturbations from giant planets can remove test particles at least down to 10 times smaller than their minimum pericenter distance. Our results indicate that, unless the dynamical instability among giant planets is either absent or quiet like planet-planet collisions, most test particles down to ~0.1 AU within the orbits of giant planets at a few AU may be gone. In fact, out of ~30% of survived test particles, about three quarters belong to the planet-planet collision cases. We find a good agreement between our numerical results and the secular theory, and present a semi-analytical formula which estimates the dynamically unstable region of the test particles just from the evolution of giant planets. Finally, our numerical results agree well with the observations, and also predict the existence of hot rocky planets in eccentric giant planet systems.
NASA Astrophysics Data System (ADS)
Liu, Zhongqiu; Li, Linmin; Li, Baokuan; Jiang, Maofa
2014-07-01
The current study developed a coupled computational model to simulate the transient fluid flow, solidification, and particle transport processes in a slab continuous-casting mold. Transient flow of molten steel in the mold is calculated using the large eddy simulation. An enthalpy-porosity approach is used for the analysis of solidification processes. The transport of bubbles and non-metallic inclusions inside the liquid pool is calculated using the Lagrangian approach based on the transient flow field. A criterion of particle entrapment in the solidified shell is developed using the user-defined functions of FLUENT software (ANSYS, Inc., Canonsburg, PA). The predicted results of this model are compared with the measurements of the ultrasonic testing of the rolled steel plates and the water model experiments. The transient asymmetrical flow pattern inside the liquid pool exhibits quite satisfactory agreement with the corresponding measurements. The predicted complex instantaneous velocity field is composed of various small recirculation zones and multiple vortices. The transport of particles inside the liquid pool and the entrapment of particles in the solidified shell are not symmetric. The Magnus force can reduce the entrapment ratio of particles in the solidified shell, especially for smaller particles, but the effect is not pronounced. The Marangoni force can play an important role in controlling the motion of particles and markedly increases the entrapment ratio of particles in the solidified shell.
Comparison of Machine Learning methods for incipient motion in gravel bed rivers
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos
2013-04-01
Soil erosion and sediment transport of natural gravel bed streams are important processes which affect both the morphology and the ecology of earth's surface. For gravel bed rivers at near incipient flow conditions, particle entrainment dynamics are highly intermittent. This contribution reviews the use of modern Machine Learning (ML) methods implemented for short-term prediction of entrainment instances of individual grains exposed in fully developed near boundary turbulent flows. Results obtained by network architectures of variable complexity based on two different ML methods, namely the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are compared in terms of different error and performance indices, computational efficiency and complexity as well as predictive accuracy and forecast ability. Different model architectures are trained and tested with experimental time series obtained from mobile particle flume experiments. The experimental setup consists of a Laser Doppler Velocimeter (LDV) and a laser optics system, which acquire data for the instantaneous flow and particle response respectively, synchronously. The first is used to record the flow velocity components directly upstream of the test particle, while the latter tracks the particle's displacements. The lengthy experimental data sets (millions of data points) are split into the training and validation subsets used to perform the corresponding learning and testing of the models. It is demonstrated that the ANFIS hybrid model, which is based on neural learning and fuzzy inference principles, better predicts the critical flow conditions above which sediment transport is initiated. In addition, it is illustrated that empirical knowledge can be extracted, validating the theoretical assumption that particle ejections occur due to energetic turbulent flow events. Such a tool may find application in management and regulation of stream flows downstream of dams for stream restoration, implementation of sustainable practices in river and estuarine ecosystems and design of stable river bed and banks.
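A minimal sketch of the prediction task, using a small feed-forward neural network from scikit-learn on synthetic data (no ANFIS implementation is included here). The velocity series, the toy entrainment criterion, and the feature choices are assumptions standing in for the LDV and particle-tracking measurements, so the example only illustrates the workflow, not the reported performance.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Toy data: upstream velocity samples and a synthetic "entrainment" label driven by
# an energetic-event criterion, standing in for the flume measurements.
rng = np.random.default_rng(3)
n = 20000
u = 0.3 + 0.05 * rng.standard_normal(n)        # streamwise velocity, m/s (toy)
energy = u**2                                  # proxy for flow power acting on the grain
label = (energy > np.quantile(energy, 0.97)).astype(int)   # rare entrainment events

X = np.column_stack([u, np.gradient(u), energy])
X_train, X_test, y_train, y_test = train_test_split(X, label, test_size=0.3,
                                                    random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=500, random_state=0)
ann.fit(X_train, y_train)
print("test accuracy on synthetic data:", ann.score(X_test, y_test))
```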
Filtering of windborne particles by a natural windbreak
NASA Astrophysics Data System (ADS)
Bouvet, Thomas; Loubet, Benjamin; Wilson, John D.; Tuzet, Andree
2007-06-01
New measurements of the transport and deposition of artificial heavy particles (glass beads) to a thick ‘shelterbelt’ of maize (width/height ratio W/H ≈ 1.6) are used to test numerical simulations with a Lagrangian stochastic trajectory model driven by the flow field from a RANS (Reynolds-averaged, Navier-Stokes) wind and turbulence model. We illustrate the ambiguity inherent in applying to such a thick windbreak the pre-existing (Raupach et al. 2001; Atmos. Environ. 35, 3373-3383) ‘thin windbreak’ theory of particle filtering by vegetation, and show that the present description, while much more laborious, provides a reasonably satisfactory account of what was measured. A sizeable fraction of the particle flux entering the shelterbelt across its upstream face is lifted out of its volume by the mean updraft induced by the deceleration of the flow in the near-upstream and entry region, and these particles thereby escape deposition in the windbreak.
Current Fragmentation and Particle Acceleration in Solar Flares
NASA Astrophysics Data System (ADS)
Cargill, P. J.; Vlahos, L.; Baumann, G.; Drake, J. F.; Nordlund, Å.
2012-11-01
Particle acceleration in solar flares remains an outstanding problem in plasma physics and space science. While the observed particle energies and timescales can perhaps be understood in terms of acceleration at a simple current sheet or turbulence site, the vast number of accelerated particles, and the fraction of flare energy in them, defies any simple explanation. The nature of energy storage and dissipation in the global coronal magnetic field is essential for understanding flare acceleration. Scenarios where the coronal field is stressed by complex photospheric motions lead to the formation of multiple current sheets, rather than the single monolithic current sheet proposed by some. The current sheets in turn can fragment into multiple, smaller dissipation sites. MHD, kinetic and cellular automata models are used to demonstrate this feature. Particle acceleration in this environment thus involves interaction with many distributed accelerators. A series of examples demonstrate how acceleration works in such an environment. As required, acceleration is fast, and relativistic energies are readily attained. It is also shown that accelerated particles do indeed interact with multiple acceleration sites. Test particle models also demonstrate that a large number of particles can be accelerated, with a significant fraction of the flare energy associated with them. However, in the absence of feedback, and with limited numerical resolution, these results need to be viewed with caution. Particle in cell models can incorporate feedback and in one scenario suggest that acceleration can be limited by the energetic particles reaching the condition for firehose marginal stability. Contemporary issues such as footpoint particle acceleration are also discussed. It is also noted that the idea of a "standard flare model" is ill-conceived when the entire distribution of flare energies is considered.
Validation Testing of a Peridynamic Impact Damage Model Using NASA's Micro-Particle Gun
NASA Technical Reports Server (NTRS)
Baber, Forrest E.; Zelinski, Brian J.; Guven, Ibrahim; Gray, Perry
2017-01-01
Through a collaborative effort between the Virginia Commonwealth University and Raytheon, a peridynamic model for sand impact damage has been developed [1-3]. Model development has focused on simulating impacts of sand particles on ZnS traveling at velocities consistent with aircraft take-off and landing speeds. The model reproduces common features of impact damage including pit and radial cracks, and, under some conditions, lateral cracks. This study focuses on a preliminary validation exercise in which simulation results from the peridynamic model are compared to a limited experimental data set generated by NASA's recently developed micro-particle gun (MPG). The MPG facility measures the dimensions and incoming and rebound velocities of the impact particles. It also links each particle to a specific impact site and its associated damage. In this validation exercise parameters of the peridynamic model are adjusted to fit the experimentally observed pit diameter, average length of radial cracks and rebound velocities for 4 impacts of 300 µm glass beads on ZnS. Results indicate that a reasonable fit of these impact characteristics can be obtained by suitable adjustment of the peridynamic input parameters, demonstrating that the MPG can be used effectively as a validation tool for impact modeling and that the peridynamic sand impact model described herein possesses not only a qualitative but also a quantitative ability to simulate sand impact events.
Absorption of charged particulate surfactants in microfluidics
NASA Astrophysics Data System (ADS)
Kong, Tiantian; Liu, Zhou; Yao, Xiaoxue; Liu, Yaming
2017-11-01
We use microfluidics to uncouple the generation of Pickering emulsion droplets and stability analysis against coalescence. By designing the microchannels, we control the packing time for charged particles arriving at the droplet interfaces, and subsequently test the droplet stability in a coalescence chamber. The critical particle coverage on interfaces that prevents coalescence is estimated by an adsorption model. We further investigate the dependence of the critical particle coverage on particle properties such as size, surface charge density, and bulk concentration. Our studies are potentially beneficial to applications involving particle-stabilized droplets, including cosmetics, food products, and oil recovery. NSFC 11504238, JCYJ20160308092144035, 2016A050503048.
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in an anisotropic earth formation. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as the horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5 % Gaussian noise to the synthetic data increases the ambiguity of the value of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within the 95 % confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The result is promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
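A minimal particle swarm optimization sketch of the kind used above for parameter estimation. The objective function here is a two-parameter placeholder; in the study it would be the misfit between observed and modeled apparent resistivities over the horizontal and vertical resistivities and thicknesses. Swarm size, coefficients, and bounds are assumptions.

```python
import numpy as np

# Minimal particle swarm optimizer over a placeholder misfit function.
rng = np.random.default_rng(4)

def misfit(m):                       # placeholder objective with a minimum at (10, 2)
    rho_h, thickness = m
    return (rho_h - 10.0) ** 2 + 5.0 * (thickness - 2.0) ** 2

n_part, n_iter, ndim = 30, 200, 2
lo, hi = np.array([1.0, 0.1]), np.array([100.0, 10.0])   # search bounds (assumed)
pos = rng.uniform(lo, hi, size=(n_part, ndim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients (assumed)
for _ in range(n_iter):
    r1, r2 = rng.random((n_part, ndim)), rng.random((n_part, ndim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([misfit(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("recovered parameters:", gbest)
```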
A Thermal Precipitator for Fire Characterization Research
NASA Technical Reports Server (NTRS)
Meyer, Marit; Bryg, Vicky
2008-01-01
Characterization of the smoke from pyrolysis of common spacecraft materials provides insight for the design of future smoke detectors and post-fire clean-up equipment on the International Space Station. A thermal precipitator was designed to collect smoke aerosol particles for microscopic analysis in fire characterization research. Information on particle morphology, size and agglomerate structure obtained from these tests supplements additional aerosol data collected. Initial modeling for the thermal precipitator design was performed with the finite element software COMSOL Multiphysics, and includes the flow field and heat transfer in the device. The COMSOL Particle Tracing Module was used to determine particle deposition on SEM stubs which include TEM grids. Modeling provided optimized design parameters such as geometry, flow rate and temperatures. Microscopy results from fire characterization research using the thermal precipitator are presented.
Stable thermophoretic trapping of generic particles at low pressures
NASA Astrophysics Data System (ADS)
Fung, Frankie; Usatyuk, Mykhaylo; DeSalvo, B. J.; Chin, Cheng
2017-01-01
We demonstrate levitation and three-dimensionally stable trapping of a wide variety of particles in a vacuum through thermophoretic force in the presence of a strong temperature gradient. Typical sizes of the trapped particles are between 10 μm and 1 mm at a pressure between 1 and 10 Torr. The trapping stability is provided radially by the increasing temperature field and vertically by the transition from the free molecule to hydrodynamic behavior of thermophoresis as the particles ascend. To determine the levitation force and test various theoretical models, we examine the levitation heights of polyethylene spheres under various conditions. Good agreement with two theoretical models is found. Our system offers a platform to discover various thermophoretic phenomena and to simulate dynamics of interacting many-body systems in a microgravity environment.
Reflected Charged Particle Populations around Dipolar Lunar Magnetic Anomalies
NASA Astrophysics Data System (ADS)
Deca, Jan; Divin, Andrey
2016-10-01
In this work we analyze and compare the reflected particle populations for both a horizontal and a vertical dipole model embedded in the lunar surface, representing the solar wind interaction with two different lunar magnetic anomaly (LMA) structures. Using the 3D full-kinetic electromagnetic code iPic3D, in combination with a test-particle approach to generate particle trajectories, we focus on the ion and electron dynamics. Whereas the vertical model electrostatically reflects ions upward under both near-parallel and near-perpendicular angles with respect to the lunar surface, the horizontal model only has a significant shallow component. Characterizing the electron dynamics, we find that the interplay of the mini-magnetosphere electric and magnetic fields is capable of temporarily trapping low-energy electrons and possibly ejecting them upstream. Our results are in agreement with recent high-resolution observations. Low- to medium-altitude ion and electron observations might be excellent indicators to complement orbital magnetic field measurements and better uncover the underlying magnetic field structure. The latter is of particular importance in defining the correlation between LMAs and lunar swirls, and further testing the solar wind shielding hypothesis for albedo markings due to space weathering. Observing more reflected ions does not necessarily point to the existence of a mini-magnetosphere.
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Shivarama, Ravishankar
2004-01-01
The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.
NASA Technical Reports Server (NTRS)
Olson, William S.; Tian, Lin; Grecu, Mircea; Kuo, Kwo-Sen; Johnson, Benjamin; Heymsfield, Andrew J.; Bansemer, Aaron; Heymsfield, Gerald M.; Wang, James R.; Meneghini, Robert
2016-01-01
In this study, two different particle models describing the structure and electromagnetic properties of snow are developed and evaluated for potential use in satellite combined radar-radiometer precipitation estimation algorithms. In the first model, snow particles are assumed to be homogeneous ice-air spheres with single-scattering properties derived from Mie theory. In the second model, snow particles are created by simulating the self-collection of pristine ice crystals into aggregate particles of different sizes, using different numbers and habits of the collected component crystals. Single-scattering properties of the resulting nonspherical snow particles are determined using the discrete dipole approximation. The size-distribution-integrated scattering properties of the spherical and nonspherical snow particles are incorporated into a dual-wavelength radar profiling algorithm that is applied to 14- and 34-GHz observations of stratiform precipitation from the ER-2 aircraft-borne High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP) radar. The retrieved ice precipitation profiles are then input to a forward radiative transfer calculation in an attempt to simulate coincident radiance observations from the Conical Scanning Millimeter-Wave Imaging Radiometer (CoSMIR). Much greater consistency between the simulated and observed CoSMIR radiances is obtained using estimated profiles that are based upon the nonspherical crystal/aggregate snow particle model. Despite this greater consistency, there remain some discrepancies between the higher moments of the HIWRAP-retrieved precipitation size distributions and in situ distributions derived from microphysics probe observations obtained from Citation aircraft underflights of the ER-2. These discrepancies can only be eliminated if a subset of lower-density crystal/aggregate snow particles is assumed in the radar algorithm and in the interpretation of the in situ data.
Liquid-Gas-Like Phase Transition in Sand Flow Under Microgravity
NASA Astrophysics Data System (ADS)
Huang, Yu; Zhu, Chongqiang; Xiang, Xiang; Mao, Wuwei
2015-06-01
In previous studies of granular flow, it has been found that gravity plays a compacting role, causing convection and stratification by density. However, there is a lack of research on how particle motion under microgravity differs from that under normal gravity. In this paper, we conduct model experiments on sand flow under microgravity using a drop-tower-based model test system, within which the characteristics and development processes of granular flow are captured by high-speed cameras. The configurations of granular flow are simulated using a modified MPS (moving particle simulation), which is a mesh-free, pure Lagrangian method. Moreover, liquid-gas-like phase transitions in the sand flow under microgravity, including the transitions to "escaped", "jumping", and "scattered" particles, are highlighted, and their effects on the weakening of shear resistance, enhancement of fluidization, and changes in particle-wall and particle-particle contact mode are analyzed. This study could help explain the surface geology evolution of small solar bodies and elucidate the nature of granular interaction.
New particle formation and growth from methanesulfonic acid, trimethylamine and water.
Chen, Haihan; Ezell, Michael J; Arquero, Kristine D; Varner, Mychel E; Dawson, Matthew L; Gerber, R Benny; Finlayson-Pitts, Barbara J
2015-05-28
New particle formation from gas-to-particle conversion represents a dominant source of atmospheric particles and affects radiative forcing, climate and human health. The species involved in new particle formation and the underlying mechanisms remain uncertain. Although sulfuric acid is commonly recognized as driving new particle formation, increasing evidence suggests the involvement of other species. Here we study particle formation and growth from methanesulfonic acid, trimethylamine and water at reaction times from 2.3 to 32 s where particles are 2-10 nm in diameter using a newly designed and tested flow system. The flow system has multiple inlets to facilitate changing the mixing sequence of gaseous precursors. The relative humidity and precursor concentrations, as well as the mixing sequence, are varied to explore their effects on particle formation and growth in order to provide insight into the important mechanistic steps. We show that water is involved in the formation of initial clusters, greatly enhancing their formation as well as growth into detectable size ranges. A kinetics box model is developed that quantitatively reproduces the experimental data under various conditions. Although the proposed scheme is not definitive, it suggests that incorporating such mechanisms into atmospheric models may be feasible in the near future.
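A generic acid-base clustering box model is sketched below to illustrate the kind of kinetics scheme described above; it is not the authors' mechanism. An acid (methanesulfonic acid) and a base (trimethylamine) form initial clusters at a rate enhanced by relative humidity, and clusters grow by further acid uptake; all rate constants, the humidity enhancement, and concentrations are assumptions.

```python
import numpy as np

# Toy acid-base cluster ladder: nucleation of the smallest cluster, then stepwise
# growth by acid uptake. All rate constants and concentrations are illustrative.
k_nuc0 = 1e-14      # base nucleation rate constant, cm^3/s (assumed)
k_grow = 5e-13      # cluster growth rate constant, cm^3/s (assumed)
def rh_enhance(rh):
    return 1.0 + 10.0 * rh       # crude water enhancement factor (assumed)

A, B, rh = 1e8, 1e8, 0.3         # acid and base concentrations (molec/cm^3), RH
clusters = np.zeros(10)          # bins: clusters containing 1..10 acid molecules

dt, t_end, t = 0.01, 30.0, 0.0   # seconds, matching the 2.3-32 s reaction times above
while t < t_end:
    nuc = k_nuc0 * rh_enhance(rh) * A * B      # new smallest clusters, cm^-3 s^-1
    growth = k_grow * A * clusters             # each bin grows into the next
    dc = np.zeros_like(clusters)
    dc[0] = nuc - growth[0]
    dc[1:] = growth[:-1] - growth[1:]
    dc[-1] += growth[-1]                       # largest bin only accumulates
    clusters += dt * dc
    t += dt

print("total cluster concentration after 30 s (cm^-3):", clusters.sum())
```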
Advanced Multi-phase Flow CFD Model Development for Solid Rocket Motor Flowfield Analysis
NASA Technical Reports Server (NTRS)
Liaw, Paul; Chen, Yen-Sen
1995-01-01
A Navier-Stokes code, the finite difference Navier-Stokes (FDNS) code, is used to analyze the complicated internal flowfield of the solid rocket motor (SRM) and to explore the impacts of chemical reaction, particle dynamics, and slag accumulation on the SRM. The particulate multi-phase flowfield with chemical reaction, particle evaporation, combustion, breakup, and agglomeration models is included in the present study to obtain a better understanding of the SRM design. A finite rate chemistry model is applied to simulate the chemical reaction effects. The Hermsen correlation model is used for the combustion simulation. The evaporation model introduced by Spalding is utilized to include the heat transfer from the particulate phase to the gas phase due to the evaporation of the particles. A correlation of the minimum particle size for breakup, expressed in terms of the Al/Al2O3 surface tension and shear force, was employed to simulate the breakup of particles. It is assumed that breakup occurs when the Weber number exceeds 6. A simple agglomeration model is used to investigate particle agglomeration. However, due to the large computer memory requirements of the agglomeration model, only 2D cases are tested with it. The VOF (Volume of Fluid) method is employed to simulate the slag buildup in the aft-end cavity of the redesigned solid rocket motor (RSRM). A Monte Carlo method is employed to calculate the turbulent dispersion effect of the particles. The flowfield analysis obtained using the FDNS code in the present research with finite rate chemical reaction, particle evaporation, combustion, breakup, agglomeration, and VOF models will provide a design guide for the potential improvement of the SRM, including the choice of materials and the shape of the nozzle geometry, such that a better performance of the SRM can be achieved. The simulation of the slag buildup in the aft-end cavity can assist the designer in improving the RSRM geometry.
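A quick numerical reading of the breakup criterion quoted above: if breakup occurs when the Weber number We = rho_g·v_rel²·d/sigma exceeds 6, the minimum droplet diameter that can break up is d_min = 6·sigma/(rho_g·v_rel²). The gas density, slip velocities, and Al2O3 surface tension below are illustrative assumptions; the precise Weber-number definition in the code may differ.

```python
# Minimum breakup diameter from the We > 6 criterion, for assumed chamber conditions.
sigma = 0.9        # molten Al2O3 surface tension, N/m (assumed)
rho_g = 3.0        # chamber gas density, kg/m^3 (assumed)
for v_rel in (50.0, 100.0, 200.0):        # particle-gas slip velocity, m/s (assumed)
    d_min = 6.0 * sigma / (rho_g * v_rel**2)
    print(f"v_rel = {v_rel:5.0f} m/s -> breakup possible for d > {d_min*1e6:.0f} um")
```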
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by two orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
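As a generic illustration of the estimation machinery discussed above, the sketch below runs one extended Kalman filter predict/update cycle with finite-difference Jacobians; the state-transition and measurement maps f and h are simple placeholders, not the reduced-order vocal fold model.

```python
# One predict/update cycle of an extended Kalman filter (generic sketch;
# f and h below are placeholder dynamics/measurement maps).
import numpy as np

def f(x):                      # nonlinear state transition (assumed)
    return np.array([x[0] + 0.1 * x[1], 0.95 * x[1]])

def h(x):                      # nonlinear measurement operator (assumed)
    return np.array([x[0] ** 2])

def jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x."""
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - y0) / eps
    return J

def ekf_step(x, P, z, Q, R):
    # Predict
    F = jacobian(f, x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([1.0, 0.5]), np.eye(2) * 0.1
x, P = ekf_step(x, P, z=np.array([1.2]), Q=np.eye(2) * 1e-3, R=np.array([[1e-2]]))
print(x, np.sqrt(np.diag(P)))   # estimate and 1-sigma credibility half-widths
```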
NASA Astrophysics Data System (ADS)
Herbold, E. B.; Nesterenko, V. F.; Benson, D. J.; Cai, J.; Vecchio, K. S.; Jiang, F.; Addiss, J. W.; Walley, S. M.; Proud, W. G.
2008-11-01
The variation of metallic particle size and sample porosity significantly alters the dynamic mechanical properties of high density granular composite materials processed using a cold isostatically pressed mixture of polytetrafluoroethylene (PTFE), aluminum (Al), and tungsten (W) powders. Quasistatic and dynamic experiments are performed with identical constituent mass fractions with variations in the size of the W particles and pressing conditions. The relatively weak polymer matrix allows the strength and fracture modes of this material to be governed by the granular type behavior of agglomerated metal particles. A higher ultimate compressive strength was observed in relatively high porosity samples with small W particles compared to those with coarse W particles in all experiments. Mesoscale granular force chains of the metallic particles explain this unusual phenomenon as observed in hydrocode simulations of a drop-weight test. Macrocracks forming below the critical failure strain for the matrix and unusual behavior due to a competition between densification and fracture in dynamic tests of porous samples were also observed. Numerical modeling of shock loading of this granular composite material demonstrated that the internal energy, specifically thermal energy, of the soft PTFE matrix can be tailored by the W particle size distribution.
Dust environment of an airless object: A phase space study with kinetic models
NASA Astrophysics Data System (ADS)
Kallio, E.; Dyadechkin, S.; Fatemi, S.; Holmström, M.; Futaana, Y.; Wurz, P.; Fernandes, V. A.; Álvarez, F.; Heilimo, J.; Jarvinen, R.; Schmidt, W.; Harri, A.-M.; Barabash, S.; Mäkelä, J.; Porjo, N.; Alho, M.
2016-01-01
The study of dust above the lunar surface is important for both science and technology. Dust particles are electrically charged due to the impact of solar radiation and the solar wind plasma and, therefore, they affect the plasma above the lunar surface. Dust is also a health hazard for crewed missions because micron- and sub-micron-sized dust particles can be toxic and harmful to the human body. Dust also causes malfunctions in mechanical devices and is therefore a risk for spacecraft and instruments on the lunar surface. Properties of dust particles above the lunar surface are not fully known. However, it can be stated that their large surface-area-to-volume ratio due to their irregular shape and the broken chemical bonds on the surface of each dust particle, together with the reduced lunar environment, cause the dust particles to be chemically very reactive. One critical unknown factor is the electric field and the electric potential near the lunar surface. We have developed a modelling suite, Dusty Plasma Environments: near-surface characterisation and Modelling (DPEM), to study globally and locally dust environments of the Moon and other airless bodies. The DPEM model combines three independent kinetic models: (1) a 3D hybrid model, where ions are modelled as particles and electrons are modelled as a charged neutralising fluid, (2) a 2D electrostatic Particle-in-Cell (PIC) model where both ions and electrons are treated as particles, and (3) a 3D Monte Carlo (MC) model where dust particles are modelled as test particles. The three models are linked to each other unidirectionally; the hybrid model provides upstream plasma parameters to be used as boundary conditions for the PIC model, which generates the surface potential for the MC model. We have used the DPEM model to study properties of dust particles injected from the surface of airless objects such as the Moon, the Martian moon Phobos and the asteroid RQ36. We have performed a (v0, m/q) phase space study in which the properties of dust particles with different initial velocities (v0) and initial mass-per-charge ratios (m/q) were analysed. The study especially identifies regions in the phase space where the electric field within a non-quasineutral plasma region above the surface of the object, the Debye layer, becomes important compared with the gravitational force. Properties of the dust particles in the phase space region where the electric field plays an important role are studied by a 3D Monte Carlo model. The current DPEM modelling suite does not include models of how dust particles are initially injected from the surface. Therefore, the presented phase space study cannot give absolute 3D dust density distributions around the analysed airless objects. For that, an additional emission model is necessary, which determines how many dust particles are emitted at various places in the analysed (v0, m/q) phase space. However, this study identifies phase space regions where the electric field within the Debye layer plays an important role for dust particles. Overall, the initial results indicate that when a realistic dust emission model is available, the unified, lunar-based DPEM modelling suite is a powerful tool to study globally and locally the dust environments of airless bodies such as planetary moons, Mercury, asteroids and non-active comets far from the Sun.
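A minimal test-particle sketch in the spirit of the Monte Carlo step of such a modelling chain is shown below; the exponentially decaying sheath field, its surface value, the Debye length, and the scanned (v0, q/m) values are all illustrative assumptions, not DPEM outputs.

```python
# Test-particle integration of a charged dust grain above an airless surface,
# with lunar gravity and an assumed exponentially decaying sheath field.
import numpy as np

G_MOON = 1.62            # m s^-2
E_SURFACE = 10.0         # V m^-1, assumed surface electric field
DEBYE_LENGTH = 1.0       # m, assumed sheath scale height

def electric_field(z):
    """Assumed vertical sheath field, decaying over a Debye length."""
    return E_SURFACE * np.exp(-z / DEBYE_LENGTH)

def max_height(v0, q_over_m, dt=1e-3, t_max=60.0):
    """Integrate vertical motion until the grain returns to the surface."""
    z, v, t, z_max = 0.0, v0, 0.0, 0.0
    while t < t_max:
        a = q_over_m * electric_field(z) - G_MOON
        v += a * dt
        z += v * dt
        t += dt
        if z <= 0.0 and v < 0.0:
            break
        z_max = max(z_max, z)
    return z_max

# Scan a small (v0, q/m) phase space and record the maximum height reached.
for v0 in (0.1, 1.0, 10.0):                 # m/s
    for qom in (1e-3, 1e-1, 1.0):           # C/kg (illustrative charge-to-mass ratios)
        print(f"v0={v0:5.1f} m/s  q/m={qom:.0e} C/kg  z_max={max_height(v0, qom):7.2f} m")
```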
Investigation into the mechanisms of closed three-body abrasive wear
NASA Astrophysics Data System (ADS)
Dwyer-Joyce, R. S.; Sayles, R. S.; Ioannides, E.
1994-06-01
Contacting components frequently fail by abrasion caused by solid contaminants in the lubricant. This process can be classified as a closed three-body abrasive wear process. The mechanisms by which trapped particles cause material removal are not fully understood. This paper describes tests using model elastohydrodynamic contacts to study these mechanisms. An optical elastohydrodynamic lubrication rig has been used to study the deformation and fracture of ductile and brittle lubricant-borne debris. A ball-on-disk machine was used to study the behavior of the particles in partially sliding contacts. Small diamond particles were used as abrasives since these were thought not to break down in the contact; wear could then be directly related to particles of a known size. The particles were found to embed in the softer surface and to scratch the harder. The mass of material worn from the ball surface was approximately proportional to the particle sliding distance and abrasive concentration. Small particles tumbled through the contact, while larger particles ploughed. Mass loss was found to increase with abrasive particle size. Individual abrasion scratches have been measured and related to the abrading particle. A simple model of the abrasive process has been developed and compared with experimental data. The discrepancies are thought to be the result of the uncertainty about the entrainment of particles into the contact.
Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin
2012-04-23
Light scattering by light absorbing carbon (LAC) aggregates encapsulated into sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, visible, and IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of these models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America
Kalman and particle filtering methods for full vehicle and tyre identification
NASA Astrophysics Data System (ADS)
Bogdanski, Karol; Best, Matthew C.
2018-05-01
This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.
PARTICLE SCATTERING OFF OF RIGHT-HANDED DISPERSIVE WAVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, C.; Kilian, P.; Spanier, F., E-mail: cschreiner@astro.uni-wuerzburg.de
Resonant scattering of fast particles off low frequency plasma waves is a major process determining transport characteristics of energetic particles in the heliosphere and contributing to their acceleration. Usually, only Alfvén waves are considered for this process, although dispersive waves are also present throughout the heliosphere. We investigate resonant interaction of energetic electrons with dispersive, right-handed waves. For the interaction of particles and a single wave a variable transformation into the rest frame of the wave can be performed. Here, well-established analytic models derived in the framework of magnetostatic quasi-linear theory can be used as a reference to validate simulation results. However, this approach fails as soon as several dispersive waves are involved. Based on analytic solutions modeling the scattering amplitude in the magnetostatic limit, we present an approach to modify these equations for use in the plasma frame. Thereby we aim at a description of particle scattering in the presence of several waves. A particle-in-cell code is employed to study wave–particle scattering on a micro-physically correct level and to test the modified model equations. We investigate the interactions of electrons at different energies (from 1 keV to 1 MeV) and right-handed waves with various amplitudes. Differences between model and simulation arise in the case of high amplitudes or several waves. Analyzing the trajectories of single particles we find no microscopic diffusion in the case of a single plasma wave, although a broadening of the particle distribution can be observed.
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magneto-hydrodynamics-MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
Gao, Shuang; Kim, Jinyong; Yermakov, Michael; Elmashae, Yousef; He, Xinjian; Reponen, Tiina; Grinshpun, Sergey A
2015-01-01
Filtering facepiece respirators (FFRs) are commonly worn by first responders, first receivers, and other exposed groups to protect against exposure to airborne particles, including those originating from combustion. Most of these FFRs are NIOSH-certified (e.g., N95-type) based on the performance testing of their filters against charge-equilibrated aerosol challenges, e.g., NaCl. However, it has not been examined whether the filtration data obtained with the NaCl-challenged FFR filters adequately represent the protection against real aerosol hazards such as combustion particles. A filter sample of an N95 FFR mounted on a specially designed holder was challenged with NaCl particles and three combustion aerosols generated in a test chamber by burning wood, paper, and plastic. The concentrations upstream (Cup) and downstream (Cdown) of the filter were measured with a TSI P-Trak condensation particle counter and a Grimm Nanocheck particle spectrometer. Penetration was determined as (Cdown/Cup) × 100%. Four test conditions were chosen to represent inhalation flows of 15, 30, 55, and 85 L/min. Results showed that the penetration values of combustion particles were significantly higher than those of the "model" NaCl particles (p < 0.05), raising a concern about the applicability of the N95 filter performance obtained with the NaCl aerosol challenge to protection against combustion particles. Aerosol type, inhalation flow rate and particle size were significant (p < 0.05) factors affecting the performance of the N95 FFR filter. In contrast to the N95 filters, the penetration of combustion particles through R95 and P95 FFR filters (tested in addition to N95) was not significantly higher than that obtained with NaCl particles. The findings were attributed to several effects, including the degradation of an N95 filter due to hydrophobic organic components generated into the air by combustion. Their interaction with fibers is anticipated to be similar to that involving "oily" particles. The findings of this study suggest that the efficiency of N95 respirator filters obtained with the NaCl aerosol challenge may not accurately predict (and may rather overestimate) the filter efficiency against combustion particles.
Micrometeoroid Impacts and Optical Scatter in Space Environment
NASA Technical Reports Server (NTRS)
Heaney, James B.; Wang, Liqin L.; He, Charles C.
2010-01-01
This paper discusses the results of an attempt to use laboratory test data and empirically derived models to quantify the degree of surface damage and associated light scattering that might be expected from hypervelocity particle impacts in the space environment. Published descriptions of the interplanetary dust environment were used as the sources of particle mass, size, and velocity estimates. Micrometeoroid sizes are predicted to be predominantly in the mass range 10(exp -5) g or less, with most having diameters near 1 micrometer but some larger than 120 micrometers, with velocities near 20 kilometers per second. In a laboratory test, latex (density 1.1 grams per cubic centimeter) and iron (density 7.9 grams per cubic centimeter) particles with diameters ranging from 0.75 micrometers to 1.60 micrometers and with velocities ranging from 2.0 kilometers per second to 18.5 kilometers per second were shot at a Be substrate mirror that had a dielectric-coated gold reflecting surface. Scanning electron and atomic force microscopy were used to measure crater dimensions that were then associated with particle impact energies. These data were then fitted to empirical models derived from solar cell and other spacecraft surface components returned from orbit, as well as from studies of impact craters on glassy materials returned from the lunar surface, to establish a link between particle energy and impact crater dimension. From these data, an estimate of total expected damaged area was computed, and this result produced an estimate of expected surface scatter from the modeled environment.
Newman, Roger H; Hill, Stefan J; Harris, Philip J
2013-12-01
A synchrotron wide-angle x-ray scattering study of mung bean (Vigna radiata) primary cell walls was combined with published solid-state nuclear magnetic resonance data to test models for packing of (1→4)-β-glucan chains in cellulose microfibrils. Computer-simulated peak shapes, calculated for 36-chain microfibrils with perfect order or uncorrelated disorder, were sharper than those in the experimental diffractogram. Introducing correlated disorder into the models broadened the simulated peaks, but only when the disorder was increased to unrealistic magnitudes. Computer-simulated diffractograms, calculated for 24- and 18-chain models, showed good fits to experimental data. Particularly good fits to both x-ray and nuclear magnetic resonance data were obtained for collections of 18-chain models with mixed cross-sectional shapes and occasional twinning. Synthesis of 18-chain microfibrils is consistent with a model for cellulose-synthesizing complexes in which three cellulose synthase polypeptides form a particle and six particles form a rosette.
A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
NASA Astrophysics Data System (ADS)
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
Shaping the micromechanical behavior of multi-phase composites for bone tissue engineering.
Ranganathan, Shivakumar I; Yoon, Diana M; Henslee, Allan M; Nair, Manitha B; Smid, Christine; Kasper, F Kurtis; Tasciotti, Ennio; Mikos, Antonios G; Decuzzi, Paolo; Ferrari, Mauro
2010-09-01
Mechanical stiffness is a fundamental parameter in the rational design of composites for bone tissue engineering in that it affects both the mechanical stability and the osteo-regeneration process at the fracture site. A mathematical model is presented for predicting the effective Young's modulus (E) and shear modulus (G) of a multi-phase biocomposite as a function of the geometry, material properties and volume concentration of each individual phase. It is demonstrated that the shape of the reinforcing particles may dramatically affect the mechanical stiffness: E and G can be maximized by employing particles with large geometrical anisotropy, such as thin platelet-like or long fibrillar-like particles. For a porous poly(propylene fumarate) (60% porosity) scaffold reinforced with silicon particles (10% volume concentration) the Young's (shear) modulus could be increased by more than 10 times by just using thin platelet-like as opposed to classical spherical particles, achieving an effective modulus E approximately 8 GPa (G approximately 3.5 GPa). The mathematical model proposed provides results in good agreement with several experimental test cases and could help in identifying the proper formulation of bone scaffolds, reducing the development time and guiding the experimental testing. 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Johnson, Paul E.; Smith, Milton O.; Adams, John B.
1992-01-01
Algorithms were developed, based on Hapke's (1981) equations, for remote determinations of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although more sophisticated models exist, the present algorithms are particularly suited for remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
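As a hedged illustration of the inversion step such algorithms reduce to, the sketch below unmixes a spectrum into end-member abundances by non-negative least squares; the end-member spectra are synthetic, and the Hapke conversion from reflectance to single-scattering albedo (in which mixing becomes linear) is deliberately omitted.

```python
# Linear unmixing of a mixed spectrum into end-member abundances by
# non-negative least squares. In a Hapke-style workflow the mixing is linear
# in single-scattering albedo rather than reflectance; that conversion step
# is omitted here and synthetic end-members are used.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(0.4, 2.5, 50)             # micrometers
endmembers = np.column_stack([
    0.3 + 0.2 * wavelengths / 2.5,                  # synthetic "mineral A" spectrum
    0.6 - 0.1 * np.sin(3.0 * wavelengths),          # synthetic "mineral B" spectrum
])
true_abundances = np.array([0.7, 0.3])
mixed = endmembers @ true_abundances + 0.005 * np.random.randn(wavelengths.size)

abundances, residual = nnls(endmembers, mixed)
abundances /= abundances.sum()                      # normalise to unit sum
print("retrieved abundances:", abundances)
```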
A deformable particle-in-cell method for advective transport in geodynamic modeling
NASA Astrophysics Data System (ADS)
Samuel, Henri
2018-06-01
This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to the development of under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution, and may require the use of a prohibitive number of particles in order to maintain the solution accuracy at an acceptable level. The new proposed approach relies on the use of deformable kernels that account for the strain history in the vicinity of particles. It results in a significant improvement of the spatial sampling by the particles, leading to a much higher accuracy of the numerical solution for a reasonable extra computational cost. Various 2D tests were conducted to compare the performance of the deformable particle-in-cell method with the standard particle-in-cell approach. These consistently show that, at comparable accuracy, the deformable particle-in-cell method is four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.
Preparation for Scaling Studies of Ice-Crystal Icing at the NRC Research Altitude Test Facility
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Bencic, Timothy J.; Tsao, Jen-Ching; Fuleki, Dan; Knezevici, Daniel C.
2013-01-01
This paper describes experiments conducted at the National Research Council (NRC) of Canada's Research Altitude Test Facility between March 26 and April 11, 2012. The tests, conducted collaboratively between NASA and NRC, focus on three key aspects in preparation for later scaling work to be conducted with a NACA 0012 airfoil model in the NRC Cascade rig: (1) cloud characterization, (2) scaling model development, and (3) ice-shape profile measurements. Regarding cloud characterization, the experiments focus on particle spectra measurements using two shadowgraphy methods, cloud uniformity via particle scattering from a laser sheet, and characterization of the SEA Multi-Element probe. Overviews of each aspect as well as detailed information on the diagnostic methods are presented. Selected results from the measurements and their interpretation are presented to help guide future work.
Validation of community models: 3. Tracing field lines in heliospheric models
NASA Astrophysics Data System (ADS)
MacNeice, Peter; Elliott, Brian; Acebal, Ariel
2011-10-01
Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters which can model interplanetary coronal mass ejections and, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
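For reference, the "simplest spiral model" baseline mentioned above amounts to ballistically mapping Earth back to the source surface along an Archimedean spiral; a minimal sketch with nominal constants follows (the 2.5 solar radii source surface and the solar wind speed are user-supplied assumptions).

```python
# Ballistic Parker-spiral mapping: longitudinal offset between Earth and its
# nominal magnetic foot point at the source surface (2.5 solar radii), for a
# steady radial solar wind. Constants are nominal values.
import numpy as np

OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)   # sidereal rotation rate, rad/s
AU = 1.496e11                                 # m
R_SUN = 6.957e8                               # m
R_SOURCE = 2.5 * R_SUN                        # assumed source surface radius, m

def spiral_offset_deg(v_sw_kms):
    """Longitudinal offset (degrees) between Earth and its spiral foot point;
    Earth connects roughly this far to the west of its sub-solar point."""
    v_sw = v_sw_kms * 1e3
    return np.degrees(OMEGA_SUN * (AU - R_SOURCE) / v_sw)

for v in (300.0, 400.0, 600.0):
    print(f"{v:5.0f} km/s wind -> foot point offset ~{spiral_offset_deg(v):4.1f} deg")
```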
Particle size distribution: A key factor in estimating powder dustiness.
López Lilao, Ana; Sanfélix Forner, Vicenta; Mallol Gasch, Gustavo; Monfort Gimeno, Eliseo
2017-12-01
A wide variety of raw materials, comprising more than 20 samples of quartzes, feldspars, nephelines, carbonates, dolomites, sands, zircons, and alumina, were selected and characterised. These raw materials were selected to encompass a wide range of particle sizes (1.6-294 µm) and true densities (2650-4680 kg/m³). The dustiness of the raw materials, i.e., their tendency to generate dust on handling, was determined using the continuous drop method. The influence of some key material parameters (particle size distribution, flowability, and specific surface area) on dustiness was assessed. In this regard, dustiness was found to be significantly affected by particle size distribution. Data analysis enabled development of a model for predicting the dustiness of the studied materials, assuming that dustiness depends on the particle fraction susceptible to emission and on the bulk material's susceptibility to release these particles. On the one hand, the developed model allows the dustiness mechanisms to be better understood. In this regard, it may be noted that relative emission increased with mean particle size. However, this did not necessarily imply that dustiness did, because dustiness also depended on the fraction of particles susceptible to emission. On the other hand, the developed model enables dustiness to be estimated using just the particle size distribution data. The quality of the fits was quite good, and the fact that only particle size distribution data are needed facilitates industrial application, since these data are usually known by raw materials managers, thus making additional tests unnecessary. This model may therefore be deemed a key tool in drawing up efficient preventive and/or corrective measures to reduce dust emissions during bulk powder processing, both inside and outside industrial facilities. It is recommended, however, to use the developed model only if particle size, true density, moisture content, and shape lie within the studied ranges.
Li, Mingzhong; Xue, Jianquan; Li, Yanchao; Tang, Shukai
2014-01-01
Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (Artificial Neural Network and Support Vector Machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles with various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted; and prediction models for spheres and cylinders were established using the artificial neural network. Due to the small number of rectangular prism samples, the support vector machine, which is more suitable for addressing the problem of small samples, was used to predict their wall factor. The characteristic dimension was introduced to describe the shape and size of the diverse particles, and a comprehensive prediction model for particles with arbitrary shapes was established to cover all types of conditions. Comparisons were conducted between the predicted values and the experimental results. PMID:24772024
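A hedged sketch of the neural-network branch of such a workflow is given below; because the 513 experimental points are not reproduced here, training data are manufactured from an approximate Francis-type creeping-flow wall-factor correlation plus noise, and the single diameter-ratio feature stands in for the shape-specific feature parameters used in the study.

```python
# Sketch of training a small neural network to predict the wall factor from
# particle/column geometry. Training data are manufactured from an approximate
# Francis-type creeping-flow correlation plus noise (an assumption, not the
# study's experimental data set).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ratio = rng.uniform(0.05, 0.6, 500)                 # particle size / column diameter
wall_factor = ((1 - ratio) / (1 - 0.475 * ratio)) ** 4 + rng.normal(0, 0.01, 500)

X = ratio.reshape(-1, 1)                            # single geometric feature (assumed)
X_train, X_test, y_train, y_test = train_test_split(X, wall_factor, test_size=0.2,
                                                    random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```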
Effects of moisture content on wind erosion thresholds of biochar
NASA Astrophysics Data System (ADS)
Silva, F. C.; Borrego, C.; Keizer, J. J.; Amorim, J. H.; Verheijen, F. G. A.
2015-12-01
Biochar, i.e. pyrolysed biomass, as a soil conditioner is gaining increasing attention in research and industry, with guidelines and certifications being developed for biochar production, storage and handling, as well as for application to soils. Adding water to biochar aims to reduce its susceptibility to becoming airborne during and after application to soils, thereby preventing, amongst other problems, human health issues from inhalation. The Bagnold model has previously been modified to explain the threshold friction velocity of coal particles at different moisture contents by adding an adhesive effect. However, it is unknown whether this model also works for biochar particles. We measured the threshold friction velocities of a range of biochar particles (woody feedstock) under a range of moisture contents using a wind tunnel, and tested the performance of the modified Bagnold model. Results showed that the threshold friction velocity can be significantly increased by keeping the gravimetric moisture content at or above 15% to promote adhesive effects between the small particles. For the specific biochar of this study, the modified Bagnold model accurately estimated threshold friction velocities of biochar particles up to moisture contents of 10%.
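The kind of modified Bagnold threshold referred to above can be sketched as the classical fluid threshold plus a moisture-dependent adhesion term; the functional form of the adhesion term and all coefficient values below are generic placeholders, not the fitted biochar parameters.

```python
# Generic sketch of a Bagnold-type threshold friction velocity with an added
# moisture adhesion term. The coefficients and the exact form of the adhesion
# term are placeholders, not the calibrated biochar model.
import numpy as np

A = 0.1            # Bagnold coefficient for loose dry grains (dimensionless)
RHO_AIR = 1.2      # kg m^-3
G = 9.81           # m s^-2
C_ADHESION = 1e-4  # adhesion coefficient per unit moisture content, assumed

def threshold_friction_velocity(d, rho_p, moisture):
    """Threshold friction velocity (m/s) for grain diameter d (m),
    particle density rho_p (kg/m^3) and gravimetric moisture content (0-1)."""
    gravity_term = (rho_p - RHO_AIR) * G * d / RHO_AIR
    adhesion_term = C_ADHESION * moisture / (RHO_AIR * d)
    return A * np.sqrt(gravity_term + adhesion_term)

for w in (0.0, 0.05, 0.10, 0.15):
    u_t = threshold_friction_velocity(2e-4, 500.0, w)   # 200 um biochar grain, assumed
    print(f"moisture {w:.2f}: u*t = {u_t:.3f} m/s")
```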
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
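The p-step-ahead stage of such a prognostic scheme can be sketched by propagating the particle set through a fault-growth model and thresholding each trajectory; the exponential growth model below is a generic stand-in for the trained ANFIS, and the threshold and noise levels are assumptions.

```python
# Sketch of p-step-ahead prediction with a particle set: each particle is
# propagated through a fault-growth model (a generic stand-in for the trained
# ANFIS) and the RUL pdf is estimated from the first threshold crossing of
# each trajectory.
import numpy as np

rng = np.random.default_rng(1)
N_PARTICLES = 2000
FAILURE_THRESHOLD = 1.0       # assumed failure level of the fault indicator
DT = 1.0                      # one inspection interval
MAX_STEPS = 500

def growth_model(x, rng):
    """Placeholder fault-growth model with multiplicative growth and noise."""
    growth = rng.normal(0.02, 0.005, size=x.shape)
    return x * (1.0 + growth) + rng.normal(0.0, 1e-3, size=x.shape)

particles = rng.normal(0.3, 0.02, N_PARTICLES)   # current fault-indicator estimate
rul = np.full(N_PARTICLES, np.inf)
state = particles.copy()
for step in range(1, MAX_STEPS + 1):
    state = growth_model(state, rng)
    crossed = (state >= FAILURE_THRESHOLD) & np.isinf(rul)
    rul[crossed] = step * DT
    if not np.isinf(rul).any():
        break
rul[np.isinf(rul)] = MAX_STEPS * DT              # censor trajectories that never crossed

print("RUL mean:", rul.mean(), " 5th/95th percentiles:", np.percentile(rul, [5, 95]))
```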
Simulation of deterministic energy-balance particle agglomeration in turbulent liquid-solid flows
NASA Astrophysics Data System (ADS)
Njobuenwu, Derrick O.; Fairweather, Michael
2017-08-01
An efficient technique to simulate turbulent particle-laden flow at high mass loadings within the four-way coupled simulation regime is presented. The technique implements large-eddy simulation, discrete particle simulation, a deterministic treatment of inter-particle collisions, and an energy-balanced particle agglomeration model. The algorithm to detect inter-particle collisions is such that the computational costs scale linearly with the number of particles present in the computational domain. On detection of a collision, particle agglomeration is tested based on the pre-collision kinetic energy, restitution coefficient, and van der Waals' interactions. The performance of the technique developed is tested by performing parametric studies on the influence of the restitution coefficient (en = 0.2, 0.4, 0.6, and 0.8), particle size (dp = 60, 120, 200, and 316 μm), Reynolds number (Reτ = 150, 300, and 590), and particle concentration (αp = 5.0 × 10-4, 1.0 × 10-3, and 5.0 × 10-3) on particle-particle interaction events (collision and agglomeration). The results demonstrate that the collision frequency shows a linear dependency on the restitution coefficient, while the agglomeration rate shows an inverse dependence. Collisions among smaller particles are more frequent and efficient in forming agglomerates than those of coarser particles. The particle-particle interaction events show a strong dependency on the shear Reynolds number Reτ, while increasing the particle concentration effectively enhances particle collision and agglomeration whilst having only a minor influence on the agglomeration rate. Overall, the sensitivity of the particle-particle interaction events to the selected simulation parameters is found to influence the population and distribution of the primary particles and agglomerates formed.
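A generic version of the energy-balance sticking test applied at each detected collision is sketched below: two particles agglomerate if the rebound kinetic energy left after inelastic losses falls below a Hamaker-type van der Waals adhesion energy. The Hamaker constant, minimum separation, and impact velocity are illustrative, not the paper's calibrated values.

```python
# Energy-balance sticking test at a detected binary collision (generic sketch;
# particles agglomerate if the rebound kinetic energy after inelastic losses
# is below a Hamaker-type van der Waals adhesion energy).
import numpy as np

HAMAKER = 1e-20       # J, assumed Hamaker constant
Z_MIN = 4e-10         # m, assumed minimum separation distance

def agglomerates(d1, d2, rho, v_rel_normal, restitution):
    """Return True if two spheres of diameters d1, d2 (m) stick on impact."""
    r1, r2 = d1 / 2.0, d2 / 2.0
    m1 = rho * 4.0 / 3.0 * np.pi * r1 ** 3
    m2 = rho * 4.0 / 3.0 * np.pi * r2 ** 3
    m_eff = m1 * m2 / (m1 + m2)
    r_eff = r1 * r2 / (r1 + r2)
    rebound_energy = 0.5 * m_eff * (restitution * v_rel_normal) ** 2
    adhesion_energy = HAMAKER * r_eff / (6.0 * Z_MIN)
    return rebound_energy < adhesion_energy

for dp in (60e-6, 120e-6, 200e-6, 316e-6):
    print(f"dp = {dp*1e6:5.0f} um  sticks at 1 mm/s, e_n = 0.4?",
          agglomerates(dp, dp, 2500.0, 0.001, 0.4))
```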
How comparable are size-resolved particle number concentrations from different instruments?
NASA Astrophysics Data System (ADS)
Hornsby, K. E.; Pryor, S. C.
2012-12-01
The need for comparability of particle size resolved measurements originates from multiple drivers, including: (i) recent suggestions that air quality standards for particulate matter should migrate from being mass-based to incorporating number concentrations, a move that would necessarily be predicated on measurement comparability, which is absolutely critical to compliance determination; (ii) the need to quantify and diagnose causes of variability in nucleation and growth rates in nano-particle experiments conducted in different locations; and (iii) epidemiological research designed to identify key parameters in human health responses to fine particle exposure. Here we present results from a detailed controlled laboratory instrument inter-comparison experiment designed to investigate data comparability in the size range of 2.01-523.3 nm across a range of particle compositions, modal diameters and absolute concentrations. Particle size distributions were generated using a TSI model 3940 Aerosol Generation System (AGS) diluted using zero air, and sampled using four TSI Scanning Mobility Particle Spectrometer (SMPS) configurations and a TSI model 3091 Fast Mobility Particle Sizer (FMPS). The SMPS configurations used two Electrostatic Classifiers (EC) (model 3080) attached to either a Long DMA (LDMA) (model 3081) or a Nano DMA (NDMA) (model 3085), plumbed to either a TSI model 3025A butanol Condensation Particle Counter (CPC) or a TSI model 3788 water CPC. All four systems were run under both high and low flow conditions, and were operated with both the internal diffusion loss and multiple charge corrections turned on. The particle compositions tested were sodium chloride, ammonium nitrate and olive oil diluted in ethanol. Particles of all three were generated at three peak concentration levels (spanning the range observed at our experimental site) and three modal particle diameters. Experimental conditions were maintained for a period of 20 minutes to ensure experimental stationarity, and in the data analysis only the middle 18 minutes of data are analyzed. Because of variations in the discretization of the different instrumental configurations, the data are analyzed both after being transformed onto a common size resolution and in terms of a fitted modal distribution. Diagnostic analyses are conducted to assess the impact of SMPS configuration on total number concentration, modal geometric mean diameter and distribution dispersion. Preliminary results indicate that selection of DMA exerts the larger control over instrument response.
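One way to make such instruments comparable, as described above, is to reduce each measured distribution to modal parameters; the sketch below fits a single lognormal mode (total number, geometric mean diameter, geometric standard deviation) to a synthetic dN/dlogDp scan standing in for an SMPS or FMPS measurement.

```python
# Reduce a measured size distribution to modal parameters (total number,
# geometric mean diameter, geometric standard deviation) so that instruments
# with different size discretisations can be compared. The synthetic scan
# below stands in for an SMPS/FMPS measurement.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, n_total, gmd, gsd):
    """dN/dlogDp of a single lognormal mode."""
    return (n_total / (np.sqrt(2.0 * np.pi) * np.log10(gsd))
            * np.exp(-((np.log10(dp) - np.log10(gmd)) ** 2)
                     / (2.0 * np.log10(gsd) ** 2)))

dp = np.logspace(np.log10(2.0), np.log10(523.3), 60)        # nm, instrument bins
truth = lognormal_mode(dp, 5e3, 60.0, 1.8)
measured = truth * (1.0 + 0.05 * np.random.randn(dp.size))  # 5% bin-to-bin noise

popt, _ = curve_fit(lognormal_mode, dp, measured, p0=[1e3, 50.0, 1.5])
print("N_total = %.0f cm^-3, GMD = %.1f nm, GSD = %.2f" % tuple(popt))
```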
Mapping fracture flow paths with a nanoscale zero-valent iron tracer test and a flowmeter test
NASA Astrophysics Data System (ADS)
Chuang, Po-Yu; Chia, Yeeping; Chiu, Yung-Chia; Teng, Mao-Hua; Liou, Sofia Ya Hsuan
2018-02-01
The detection of preferential flow paths and the characterization of their hydraulic properties are important for the development of hydrogeological conceptual models in fractured-rock aquifers. In this study, nanoscale zero-valent iron (nZVI) particles were used as tracers to characterize fracture connectivity between two boreholes in fractured rock. A magnet array was installed vertically in the observation well to attract arriving nZVI particles and identify the location of the incoming tracer. Heat-pulse flowmeter tests were conducted to delineate the permeable fractures in the two wells for the design of the tracer test. The nZVI slurry was released in the screened injection well. The arrival of the slurry in the observation well was detected by an increase in electrical conductivity, while the depth of the connected fracture was identified by the distribution of nZVI particles attracted to the magnet array. The position where the maximum weight of attracted nZVI particles was observed coincides with the depth of a permeable fracture zone delineated by the heat-pulse flowmeter. In addition, a saline tracer test produced comparable results with the nZVI tracer test. Numerical simulation was performed using MODFLOW with MT3DMS to estimate the hydraulic properties of the connected fracture zones between the two wells. The study results indicate that the nZVI particle could be a promising tracer for the characterization of flow paths in fractured rock.
On the Early In Situ Formation of Pluto’s Small Satellites
NASA Astrophysics Data System (ADS)
Woo, Jason Man Yin; Lee, Man Hoi
2018-04-01
The formation of Pluto's small satellites—Styx, Nix, Kerberos, and Hydra—remains a mystery. Their orbits are nearly circular, lie near mean-motion resonances, and are nearly coplanar with Charon's orbit. One scenario suggests that they all formed close to their current locations from a disk of debris that was ejected from the Charon-forming impact before the tidal evolution of Charon. The validity of this scenario is tested by performing N-body simulations with the small satellites treated as test particles and Pluto–Charon evolving tidally from an initial orbit at a few Pluto radii with initial eccentricity e_C = 0 or 0.2. After tidal evolution, the free eccentricities e_free of the test particles are extracted by applying a fast Fourier transform to the distance between the test particles and the center of mass of the system, and compared with the current eccentricities of the four small satellites. The only surviving test particles with e_free matching the eccentricities of the current satellites are those not affected by mean-motion resonances during the tidal evolution, in a model with Pluto's effective tidal dissipation function Q = 100 and an initial e_C = 0.2 that is damped down rapidly. However, these test particles show no preference for being in or near the 4:1, 5:1, and 6:1 resonances with Charon. An alternative scenario may be needed to explain the formation of Pluto's small satellites.
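The Fourier extraction of a free eccentricity can be illustrated on a synthetic radial time series; the sketch below injects a small e_free modulation onto a circular orbit and recovers its amplitude and frequency from an FFT, with all orbital parameters chosen arbitrarily.

```python
# Extract a free eccentricity from the time series of a test particle's
# distance to the system barycentre via an FFT. The time series is synthetic:
# a circular orbit plus a small free-eccentricity modulation.
import numpy as np

N_ORBITS, SAMPLES_PER_ORBIT = 200, 64
a, e_free = 1.0, 0.01                      # semi-major axis (arbitrary units), injected e_free
t = np.linspace(0.0, N_ORBITS, N_ORBITS * SAMPLES_PER_ORBIT, endpoint=False)  # time in orbits
r = a * (1.0 - e_free * np.cos(2.0 * np.pi * t))   # radial distance about the barycentre

spectrum = 2.0 * np.abs(np.fft.rfft(r - r.mean())) / r.size   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(r.size, d=t[1] - t[0])                # cycles per orbital period
peak = spectrum.argmax()
print("peak at", freqs[peak], "cycles/orbit, recovered e_free =", spectrum[peak] / a)
```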
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
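A minimal sketch of the cross-section generation step is given below: realizations of a spatially correlated lognormal total cross section are drawn from a truncated Karhunen-Loève expansion of the underlying Gaussian process with an exponential covariance. The eigenpairs are computed numerically from the discretised covariance matrix, and the slab length, correlation length, and variability level are illustrative.

```python
# Generate realizations of a spatially correlated lognormal total cross
# section via a truncated Karhunen-Loeve expansion of an underlying Gaussian
# process with exponential covariance. Parameters are illustrative.
import numpy as np

SLAB_LENGTH = 10.0                  # slab thickness (mean free paths on average)
N_POINTS = 200
CORR_LENGTH = 1.0
SIGMA_MEAN, SIGMA_CV = 1.0, 0.5     # mean and coefficient of variation of the cross section
N_TERMS = 40                        # KL truncation order

x = np.linspace(0.0, SLAB_LENGTH, N_POINTS)
# Covariance of the underlying Gaussian process chosen so that the lognormal
# field has the requested mean and coefficient of variation.
var_g = np.log(1.0 + SIGMA_CV ** 2)
mu_g = np.log(SIGMA_MEAN) - 0.5 * var_g
cov = var_g * np.exp(-np.abs(x[:, None] - x[None, :]) / CORR_LENGTH)

eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1][:N_TERMS]          # keep the dominant KL modes
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def realization(rng):
    xi = rng.standard_normal(N_TERMS)
    gaussian = mu_g + eigvecs @ (np.sqrt(eigvals) * xi)
    return np.exp(gaussian)          # lognormal cross-section realization on x

rng = np.random.default_rng(42)
sigma_t = realization(rng)
print("sample mean %.3f, sample cv %.3f" % (sigma_t.mean(), sigma_t.std() / sigma_t.mean()))
```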
TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)
The hypothesis to be tested is that metal catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...
Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges
NASA Astrophysics Data System (ADS)
Kollasch, Jeffrey; Sovenic, Carl; Schmitz, Oliver
2017-10-01
The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulations can now be done within the SSUBPIC code. Recent SSUBPIC modifications include implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and implementation of a neutral particle tracking mode with charge-exchange and electron impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to "dummy" profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.
Paunov, Vesselin N; Al-Shehri, Hamza; Horozov, Tommy S
2016-09-29
We developed and tested a theoretical model for the attachment of fluid-infused porous supra-particles to a fluid-liquid interface. We considered the wetting behaviour of agglomerated clusters of particles, typical of powdered materials dispersed in a liquid, as well as the adsorption of liquid-infused colloidosomes at the liquid-fluid interface. The free energy of attachment of a composite spherical porous supra-particle made from much smaller aggregated spherical particles to the oil-water interface was calculated. Two cases were considered: (i) a water-filled porous supra-particle adsorbed at the oil-water interface from the water phase, and (ii) an oil-filled porous supra-particle adsorbed at the oil-water interface from the oil phase. We derived equations relating the three-phase contact angle of the smaller "building block" particles and the contact angle of the liquid-infused porous supra-particles. The theory predicts that the contact angle of a porous supra-particle attached at the liquid interface strongly depends on the type of fluid infused in the particle pores and on the fluid phase from which it approaches the liquid interface. We tested the theory by using millimetre-sized porous supra-particles fabricated by evaporation of droplets of polystyrene latex suspension on a pre-heated super-hydrophobic surface, followed by thermal annealing at the glass transition temperature. Such porous particles were initially infused with water or oil and brought to the oil-water interface from the infusing phase. The experiment showed that when attaching at the hexadecane-water interface, the porous supra-particles behaved as hydrophilic when they were pre-filled with water and as hydrophobic when they were pre-filled with hexadecane. The results agree with the theoretically predicted contact angles for the porous composite supra-particles based on the values of the contact angles of their building block latex particles measured with the Gel Trapping Technique. The experimental data for the attachment of porous supra-particles to the air-water interface from both air and water also agree with the theoretical model. This study gives important insights into how porous particles and particle aggregates attach to the oil-water interface in Pickering emulsions and to the air-water surface in particle-stabilised aqueous foams relevant in ore flotation and in a range of cosmetic, pharmaceutical, food, home and personal care formulations.
Influence of coal slurry particle composition on pipeline hydraulic transportation behavior
NASA Astrophysics Data System (ADS)
Li-an, Zhao; Ronghuan, Cai; Tieli, Wang
2018-02-01
As a new mode of energy transportation, coal pipeline hydraulic transport can reduce both the cost and the fly ash pollution associated with conventional coal transportation. In this study, the effects of average velocity, particle size and pumping time on the particle composition of coal slurry during hydraulic conveying were investigated by a ring tube test. Meanwhile, the effects of changes in particle composition on slurry viscosity, transmission resistance and critical sedimentation velocity were studied based on the experimental data. The experimental and theoretical analysis indicates that changes in slurry particle composition can lead to changes in the viscosity, resistance and critical velocity of the slurry. Moreover, based on previous studies, a critical velocity calculation model for coal slurry is proposed.
Study on effect of microparticle's size on cavitation erosion in solid-liquid system
NASA Astrophysics Data System (ADS)
Chen, Haosheng; Liu, Shihan; Wang, Jiadao; Chen, Darong
2007-05-01
Five different solutions containing microparticles of different sizes were tested in a vibration cavitation erosion experiment. After the experiment, the number of erosion pits on the sample surfaces, the HO• free radicals in the solutions, and the mass loss all show that the cavitation erosion strength is strongly related to the particle size, with 500 nm particles causing more severe cavitation erosion than smaller or larger particles. A model is presented to explain this result, considering both nucleation and bubble-particle collision effects. Particles of a proper size increase the number of heterogeneous nucleation sites and at the same time reduce the number of bubble-particle combinations, which results in more free bubbles in the solution and hence stronger cavitation erosion.
NASA Technical Reports Server (NTRS)
Hughes, David; Dazzo, Tony
2007-01-01
This viewgraph presentation reviews the use of particle analysis to assist in preparing for the 4th Hubble Space Telescope (HST) servicing mission, during which the Space Telescope Imaging Spectrograph (STIS) will be repaired. The particle analysis consisted of finite element mesh creation; black-body viewfactors generated using I-DEAS TMG Thermal Analysis; grey-body viewfactors calculated using the Markov method; particle distribution modeled using an iterative (and time-consuming) Monte Carlo process in in-house software called MASTRAM; differential analysis performed in Excel; and visualization provided by Tecplot and I-DEAS. Several tests were performed and are reviewed: a Conformal Coat Particle Study, a Card Extraction Study, a Cover Fastener Removal Particle Generation Study, and an E-Graf Vibration Particulate Study. The lessons learned during this analysis are also reviewed.
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
Evaluation of titanium carbide metal matrix composites deposited via laser cladding
NASA Astrophysics Data System (ADS)
Cavanaugh, Daniel Thomas
Metal matrix composites have been widely studied in terms of abrasion resistance, but a particular material system may behave differently as particle size, morphology, composition, and distribution of the hardening phase varies. The purpose of this thesis was to understand the mechanical and microstructural effects of combining titanium carbide with 431 series stainless steel to create a unique composite via laser cladding, particularly regarding wear properties. The most predominant effect in increasing abrasion resistance, measured via ASTM G65, was confirmed to be volume fraction of titanium carbide addition. Macrohardness was directly proportional to the amount of carbide, though there was an overall reduction in individual particle microhardness after cladding. The reduction in particle hardness was obscured by the effect of volume fraction carbide and did not substantially contribute to the wear resistance changes. A model evaluating effective mean free path of the titanium carbide particles was created and correlated to the measured data. The model proved successful in linking theoretical mean free path to overall abrasion resistance. The effects of the titanium carbide particle distributions were limited, while differences in particle size were noticeable. The mean free path model did not correlate well with the particle size, but it was shown that the fine carbides were completely removed by the coarse abrasive particles in the ASTM G65 test. The particle morphology showed indications of influencing the wear mode, but no statistical reduction was observed in the volume loss figures. Future studies may more specifically focus on particle morphology or compositional effects of the carbide particles.
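The mean-free-path argument can be illustrated with the standard stereological (Fullman-type) estimate of the matrix ligament spacing between monosized spherical particles; this is the textbook form under stated assumptions, not necessarily the exact expression developed in the thesis.

```python
# Stereological (Fullman-type) estimate of the matrix mean free path between
# spherical carbide particles; a generic sketch assuming monosized spheres.
def mean_free_path(particle_diameter_um, volume_fraction):
    """Mean matrix ligament length between spherical particles, in micrometers."""
    return 2.0 * particle_diameter_um * (1.0 - volume_fraction) / (3.0 * volume_fraction)

for vf in (0.1, 0.2, 0.3, 0.4):
    print(f"TiC volume fraction {vf:.1f}: mean free path "
          f"{mean_free_path(45.0, vf):6.1f} um (assuming 45 um carbides)")
```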
Numerical Simulations of Near-Field Blast Effects using Kinetic Plates
NASA Astrophysics Data System (ADS)
Neuscamman, Stephanie; Manner, Virginia; Brown, Geoffrey; Glascoe, Lee
2013-06-01
Numerical simulations using two hydrocodes were compared to near-field measurements of blast impulse associated with ideal and non-ideal explosives to gain insight into testing results and predict untested configurations. The recently developed kinetic plate test was designed to measure blast impulse in the near-field by firing spherical charges in close range from steel plates and probing plate acceleration using laser velocimetry. Plate velocities for ideal, non-ideal and aluminized explosives tests were modeled using a three dimensional hydrocode. The effects of inert additives in the explosive formulation were modeled using a 1-D hydrocode with multiphase flow capability using Lagrangian particles. The relative effect of particle impact on the plate compared to the blast wave impulse is determined and modeling is compared to free field pressure results. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This is abstract LLNL-ABS-622152.
The CHASE laboratory search for chameleon dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab
2010-11-01
A scalar field is a favorite candidate for the particle responsible for dark energy. However, few theoretical means exist that can simultaneously explain the observed acceleration of the Universe and evade tests of gravity. The chameleon mechanism, whereby the properties of a particle depend upon the local environment, is one possible avenue. We present the results of the Chameleon Afterglow Search (CHASE) experiment, a laboratory probe for chameleon dark energy. CHASE marks a significant improvement over other searches for chameleons, both in terms of its sensitivity to the photon/chameleon coupling and in its sensitivity to the classes of chameleon dark energy models and standard power-law models. Since chameleon dark energy is virtually indistinguishable from a cosmological constant, CHASE tests dark energy models in a manner not accessible to astronomical surveys.
Plume Particle Collection and Sizing from Static Firing of Solid Rocket Motors
NASA Technical Reports Server (NTRS)
Sambamurthi, Jay K.
1995-01-01
Thermal radiation from the plume of any solid rocket motor, containing aluminum as one of the propellant ingredients, is mainly from the microscopic, hot aluminum oxide particles in the plume. The plume radiation to the base components of the flight vehicle is primarily determined by the plume flowfield properties, the size distribution of the plume particles, and their optical properties. The optimum design of a vehicle base thermal protection system is dependent on the ability to accurately predict this intense thermal radiation using validated theoretical models. This article describes a successful effort to collect reasonably clean plume particle samples from the static firing of the flight simulation motor (FSM-4) on March 10, 1994 at the T-24 test bed at the Thiokol space operations facility as well as three 18.3% scaled MNASA motors tested at NASA/MSFC. Prior attempts to collect plume particles from the full-scale motor firings have been unsuccessful due to the extremely hostile thermal and acoustic environment in the vicinity of the motor nozzle.
Exact relativistic models of conformastatic charged dust thick disks
NASA Astrophysics Data System (ADS)
García-Reyes, Gonzalo
2018-04-01
We construct relativistic models of charged dust thick disks for a particular conformastatic spacetime through a Miyamoto-Nagai transformation, used in Newtonian gravity to model disk-like galaxies. Two simple families of thick disk models and a family of thick annular disks, based on the field of an extreme Reissner-Nordström black hole and a Morgan-Morgan-like metric, are considered. The electrogeodesic motion of test particles around the structures is analyzed. The stability of the particles against radial perturbations is also studied using an extension of the Rayleigh criterion for the stability of a fluid at rest in a gravitational field. The models built satisfy all the energy conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokaltun, Seckin; Munroe, Norman; Subramaniam, Shankar
2014-12-31
This study presents a new drag model, based on cohesive inter-particle forces, implemented in the MFIX code. The new drag model combines an existing standard model in MFIX with a particle-based drag model through a switching principle. Switches between the models in the computational domain occur where strong particle-to-particle cohesion potential is detected. Three versions of the new model were obtained by using one standard drag model in each version. The performance of each version was then compared against available experimental data for a fluidized bed, published in the literature and used extensively by other researchers for validation purposes. In our analysis of the results, we first observed that the standard models used in this research were incapable of producing closely matching results. Then, we showed for a simple case that a threshold needed to be set on the solid volume fraction. This modification was applied to avoid non-physical results for the clustering predictions when the governing equation of the solid granular temperature was solved. We then used our hybrid technique and observed the capability of our approach to improve the numerical results significantly; however, the improvement of the results depended on the threshold of the cohesive index used in the switching procedure. Our results showed that small values of the threshold for the cohesive index could result in a significant reduction of the computational error for all versions of the proposed drag model. In addition, we redesigned an existing circulating fluidized bed (CFB) test facility in order to create validation cases for the clustering regime of Geldart A type particles.
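A minimal sketch of the switching idea, assuming a cell-wise cohesive index and two placeholder drag closures (neither is the actual MFIX implementation), is given below.

```python
# Hedged sketch of the switching principle: apply a standard gas-solid drag
# correlation by default, and swap in a cohesion-aware closure wherever a local
# "cohesive index" exceeds a threshold. All names, placeholder drag laws and the
# threshold value are hypothetical, not the MFIX implementation.
import numpy as np

def standard_drag(eps_s, re_p):
    """Placeholder for a standard drag correlation (Wen-Yu-like shape)."""
    return 3.7 * (1.0 - eps_s) ** (-2.65) * (1.0 + 0.15 * re_p ** 0.687)

def cohesive_drag(eps_s, re_p):
    """Placeholder for a particle-based, cohesion-corrected drag closure."""
    return 0.8 * standard_drag(eps_s, re_p)

def blended_drag(eps_s, re_p, cohesive_index, threshold=0.5, eps_s_min=1e-4):
    """Cell-wise switch between the two closures, with a floor on the solids
    fraction in the spirit of the threshold discussed in the abstract."""
    eps_s = np.maximum(eps_s, eps_s_min)
    use_cohesive = cohesive_index > threshold
    return np.where(use_cohesive,
                    cohesive_drag(eps_s, re_p),
                    standard_drag(eps_s, re_p))

# Example on a toy field of cells:
eps_s = np.array([0.0, 0.05, 0.3, 0.55])
re_p = np.array([1.0, 5.0, 20.0, 50.0])
coh = np.array([0.1, 0.2, 0.7, 0.9])
print(blended_drag(eps_s, re_p, coh))
```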
Hinkle, S.R.; Kauffman, L.J.; Thomas, M.A.; Brown, C.J.; McCarthy, K.A.; Eberts, S.M.; Rosen, Michael R.; Katz, B.G.
2009-01-01
Flow-model particle-tracking results and geochemical data from seven study areas across the United States were analyzed using three statistical methods to test the hypothesis that these variables can successfully be used to assess public supply well vulnerability to arsenic and uranium. Principal components analysis indicated that arsenic and uranium concentrations were associated with particle-tracking variables that simulate time of travel and water fluxes through aquifer systems and also through specific redox and pH zones within aquifers. Time-of-travel variables are important because many geochemical reactions are kinetically limited, and geochemical zonation can account for different modes of mobilization and fate. Spearman correlation analysis established statistical significance for correlations of arsenic and uranium concentrations with variables derived using the particle-tracking routines. Correlations between uranium concentrations and particle-tracking variables were generally strongest for variables computed for distinct redox zones. Classification tree analysis on arsenic concentrations yielded a quantitative categorical model using time-of-travel variables and solid-phase-arsenic concentrations. The classification tree model accuracy on the learning data subset was 70%, and on the testing data subset, 79%, demonstrating one application in which particle-tracking variables can be used predictively in a quantitative screening-level assessment of public supply well vulnerability. Ground-water management actions that are based on avoidance of young ground water, reflecting the premise that young ground water is more vulnerable to anthropogenic contaminants than is old ground water, may inadvertently lead to increased vulnerability to natural contaminants due to the tendency for concentrations of many natural contaminants to increase with increasing ground-water residence time.
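The classification-tree step can be illustrated with a toy screening model; the variable names, synthetic data and library choice below are assumptions for illustration only and do not reproduce the study's model.

```python
# Hedged sketch: a screening-level classification tree trained on hypothetical
# particle-tracking and solid-phase variables. Entirely synthetic and illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # simulated travel time to the well (years)
    rng.uniform(0.0, 1.0, n),     # fraction of flux through a reducing zone
    rng.lognormal(1.5, 0.8, n),   # solid-phase As concentration (mg/kg)
])
# Hypothetical rule generating "elevated arsenic" labels for the toy data.
y = ((X[:, 0] > 30) & (X[:, 1] > 0.4) & (X[:, 2] > 4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_train, y_train)
print("learning accuracy:", tree.score(X_train, y_train))
print("testing accuracy: ", tree.score(X_test, y_test))
```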
Determination of Shed Ice Particle Size Using High Speed Digital Imaging
NASA Technical Reports Server (NTRS)
Broughton, Howard; Owens, Jay; Sims, James J.; Bond, Thomas H.
1996-01-01
A full scale model of an aircraft engine inlet was tested at NASA Lewis Research Center's Icing Research Tunnel. Simulated natural ice sheds from the engine inlet lip were studied using high speed digital image acquisition and image analysis. Strategic camera placement integrated at the model design phase allowed the study of ice accretion on the inlet lip and the resulting shed ice particles at the aerodynamic interface plane at the rear of the inlet prior to engine ingestion. The resulting digital images were analyzed using commercial and proprietary software to determine the size of the ice particles that could potentially be ingested by the engine during a natural shedding event. A methodology was developed to calibrate the imaging system and ensure consistent and accurate measurements of the ice particles for a wide range of icing conditions.
Particles from wood smoke and road traffic differently affect the innate immune system of the lung.
Samuelsen, Mari; Cecilie Nygaard, Unni; Løvik, Martinus
2009-09-01
The effect of particles from road traffic and wood smoke on the innate immune response in the lung was studied in a lung challenge model with the intracellular bacterium Listeria monocytogenes. Female Balb/cA mice were instilled intratracheally with wood smoke particles, particles from road traffic collected during winter (studded tires used; St+), and during autumn (no studded tires; St-), or diesel exhaust particles (DEP). Simultaneously with, and 1 or 7 days after particle instillation, 10^5 bacteria were inoculated intratracheally. Bacterial numbers in the lungs and spleen 1 day after Listeria challenge were determined, as an indicator of cellular activation. In separate experiments, bronchoalveolar lavage (BAL) fluid was collected 4 h and 24 h after particle instillation. All particles tested reduced the numbers of bacteria in the lung 24 h after bacterial inoculation. When particles were given simultaneously with Listeria, the reduction was greatest for DEP, followed by St+ and St-, and least for wood smoke particles. Particle effects were no longer apparent after 7 days. Neutrophil numbers in BAL fluid were increased for all particle-exposed groups. St+ and St- induced the highest levels of IL-1beta, MIP-2, MCP-1, and TNF-alpha, followed by DEP, which induced no TNF-alpha. In contrast, wood smoke particles only increased lactate dehydrogenase (LDH) activity, indicating a cytotoxic effect of these particles. In conclusion, all particles tested activated the innate immune system as determined with Listeria. However, differences in kinetics of anti-Listeria activity and levels of proinflammatory mediators point to cellular activation by different mechanisms.
Lee, Mong-Chuan; Lin, Yen-Hui; Yu, Huang-Wei
2014-11-01
A mathematical model system was derived to describe the kinetics of ammonium nitrification in a fixed biofilm reactor using dewatered sludge-fly ash composite ceramic particles as a supporting medium. The model incorporates diffusive mass transport and Monod kinetics. The model was solved using a combination of the orthogonal collocation method and Gear's method. A batch test was conducted to observe the nitrification of ammonium-nitrogen (NH4+-N) and the growth of nitrifying biomass. The compositions of the nitrifying bacterial community in the batch kinetic test were analyzed using the PCR-DGGE method. The experimental results show that the greatest band staining intensity occurred on day 2.75, with the highest biomass concentration of 46.5 mg/L. Chemostat kinetic tests were performed independently to evaluate the biokinetic parameters used in the model prediction. In the column test, the removal efficiency of NH4+-N was approximately 96% while the concentration of suspended nitrifying biomass was approximately 16 mg VSS/L and the model-predicted biofilm thickness reached up to 0.21 cm at steady state. The profiles of denaturing gradient gel electrophoresis (DGGE) of different microbial communities demonstrated that indigenous nitrifying bacteria (Nitrospira and Nitrobacter) existed and were the dominant species in the fixed biofilm process.
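As a minimal illustration of the Monod kinetics embedded in such a model (without the diffusive biofilm transport), the batch sketch below integrates substrate consumption and nitrifier growth; all parameter values are placeholders rather than the fitted biokinetic constants of the study.

```python
# Hedged sketch: a batch Monod-kinetics model for ammonium oxidation and nitrifier
# growth. Parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y, b = 0.8, 1.5, 0.15, 0.05   # 1/d, mg N/L, mg VSS/mg N, 1/d

def rhs(t, state):
    S, X = state                    # ammonium-N and nitrifier biomass (mg/L)
    mu = mu_max * S / (K_s + S)     # Monod growth rate
    dS = -mu * X / Y                # substrate consumption
    dX = (mu - b) * X               # growth minus decay
    return [dS, dX]

sol = solve_ivp(rhs, (0.0, 6.0), [40.0, 5.0], dense_output=True)
t = np.linspace(0.0, 6.0, 7)
S, X = sol.sol(t)
for ti, Si, Xi in zip(t, S, X):
    print(f"day {ti:.1f}: NH4-N = {Si:6.2f} mg/L, biomass = {Xi:6.2f} mg/L")
```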
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances.
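A toy numerical contrast between the two viewpoints can be sketched as follows: the ESM asserts an exponential decay of the in-suspension fraction with distance, while an LEM-style calculation follows particles with advection, vertical diffusion and settling until they reach the bed. Parameter values and the release geometry below are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: an LEM-style random-walk settling calculation for a point release,
# to compare against the exponential picture of the ESM. Values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
u, w_s, D = 0.3, 0.01, 0.002          # flow speed (m/s), settling speed (m/s), vertical diffusivity (m^2/s)
H, z0 = 0.5, 0.25                     # water depth (m) and release elevation above the bed (m)
dt, n_particles, n_steps = 0.5, 5000, 4000

z = np.full(n_particles, z0)
x = np.zeros(n_particles)
settled_x = []

for _ in range(n_steps):
    active = z > 0.0
    if not active.any():
        break
    x[active] += u * dt                                   # downstream advection
    step = -w_s * dt + np.sqrt(2 * D * dt) * rng.standard_normal(active.sum())
    z_new = z[active] + step
    z_new = np.where(z_new > H, 2 * H - z_new, z_new)     # reflect at the water surface
    z[active] = z_new
    settled_x.extend(x[active & (z <= 0.0)].tolist())     # record settling distances

settled_x = np.array(settled_x)
print("mean travel distance before settling:", round(float(settled_x.mean()), 2), "m")
print("advection-only estimate u*z0/w_s    :", u * z0 / w_s, "m")
```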
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, only a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
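The implicit trajectory-integration idea can be illustrated with a backward-Euler update of a drag-dominated particle velocity, which stays stable even when the time step greatly exceeds the particle relaxation time. This is a generic sketch, not the FDNS scheme itself.

```python
# Hedged sketch: implicit (backward-Euler) update for a particle velocity relaxing
# toward the local gas velocity with relaxation time tau. Values are illustrative.
import numpy as np

def implicit_particle_step(v_p, v_g, tau, dt):
    """Backward Euler for dv_p/dt = (v_g - v_p)/tau, stable for any dt/tau."""
    return (v_p + (dt / tau) * v_g) / (1.0 + dt / tau)

v_p = np.array([0.0, 0.0])        # particle velocity (m/s)
v_g = np.array([50.0, 5.0])       # local gas velocity (m/s)
tau, dt = 1e-4, 1e-3              # stiff case: dt >> tau

for _ in range(5):
    v_p = implicit_particle_step(v_p, v_g, tau, dt)
print(v_p)   # relaxes toward the gas velocity without the explicit-scheme blow-up
```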
Li, Peizhi; Wang, Yong; Dong, Qingli
2017-04-01
Cities in China suffer from severe smog and haze, and a forecasting system with high accuracy is of great importance to foresee the concentrations of airborne particles. Compared with chemical transport models, the growing family of artificial intelligence models can simulate nonlinearities and interactive relationships and obtain more accurate results. In this paper, the Kolmogorov-Zurbenko (KZ) filter is modified and applied for the first time to construct the model using an artificial intelligence method. The concentrations of inhalable particles and fine particulate matter in Dalian are used to analyze the filtered components and test the forecasting accuracy. In addition, an extended experiment is made by implementing a comprehensive comparison and a stability test using data from three other cities in China. The results demonstrate the excellent performance of the developed hybrid models, which can be utilized to better understand the temporal features of pollutants and to perform better air pollution control and management.
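For reference, a bare-bones KZ filter is simply an iterated centred moving average; the sketch below applies it to a synthetic PM series using commonly quoted window/iteration pairs, which are assumptions here rather than the settings of the cited study.

```python
# Hedged sketch: Kolmogorov-Zurbenko (KZ) filter = k passes of an m-point centred
# moving average, used to decompose a PM series into long-term, seasonal and
# short-term components. Windows below are illustrative choices.
import numpy as np

def kz_filter(x, m, k):
    """Apply k iterations of an m-point centred moving average (m odd)."""
    y = np.asarray(x, dtype=float)
    kernel = np.ones(m) / m
    for _ in range(k):
        y = np.convolve(y, kernel, mode="same")
    return y

# Example: a synthetic daily PM series = baseline + seasonal cycle + noise
rng = np.random.default_rng(3)
t = np.arange(3 * 365)
pm = 60 + 20 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 15, t.size)

baseline = kz_filter(pm, m=365, k=3)            # long-term component
seasonal = kz_filter(pm, m=15, k=5) - baseline  # seasonal component
short_term = pm - kz_filter(pm, m=15, k=5)      # weather/short-term component
print(short_term.std(), seasonal.std(), baseline.std())
```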
Disambiguating seesaw models using invariant mass variables at hadron colliders
NASA Astrophysics Data System (ADS)
Dev, P. S. Bhupal; Kim, Doojin; Mohapatra, Rabindra N.
2016-01-01
We propose ways to distinguish between different mechanisms behind the collider signals of TeV-scale seesaw models for neutrino masses using kinematic endpoints of invariant mass variables. We particularly focus on two classes of such models widely discussed in the literature: (i) the Standard Model extended by the addition of singlet neutrinos and (ii) Left-Right Symmetric Models. Relevant scenarios involving the same "smoking-gun" collider signature of dilepton plus dijet with no missing transverse energy differ from one another by their event topology, resulting in distinctive relationships among the kinematic endpoints to be used for discerning them at hadron colliders. These kinematic endpoints are readily translated to the mass parameters of the on-shell particles through simple analytic expressions which can be used for measuring the masses of the new particles. A Monte Carlo simulation with detector effects is conducted to test the viability of the proposed strategy in a realistic environment. Finally, we discuss the future prospects of testing these scenarios at the √s = 14 and 100 TeV hadron colliders.
Approximate supernova remnant dynamics with cosmic ray production
NASA Technical Reports Server (NTRS)
Voelk, H. J.; Drury, L. O.; Dorfi, E. A.
1985-01-01
Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of Cosmic Rays. Recent shock acceleration models treating the Cosmic Rays (CR's) as test particles in a prescribed Supernova Remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the Interstellar Medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test particle approximation.
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
To solve the short-term cascaded hydroelectric system scheduling model, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses the water discharge as the decision variable combined with a death penalty function. According to the principle of maximum power generation, the proposed approach makes use of the ergodicity, symmetry and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with traditional algorithms.
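A minimal sketch of the chaotic-initialisation idea, using the standard logistic map inside an otherwise plain PSO loop and a toy objective, is shown below; the paper's improved map, constraint handling and hydro scheduling model are not reproduced.

```python
# Hedged sketch: logistic-map chaotic sequence used to initialise a plain PSO.
# The standard map x <- 4x(1-x) and the toy sphere objective are illustrative.
import numpy as np

def logistic_sequence(n, x0=0.3123, r=4.0):
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def cpso_minimise(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0):
    chaos = logistic_sequence(n_particles * dim).reshape(n_particles, dim)
    x = lo + (hi - lo) * chaos                  # chaotic initialisation
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    rng = np.random.default_rng(4)
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

sphere = lambda z: float(np.sum(z ** 2))        # toy test function
print(cpso_minimise(sphere, dim=5))
```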
Consistent kinetic simulation of plasma and sputtering in low temperature plasmas
NASA Astrophysics Data System (ADS)
Schmidt, Frederik; Trieschmann, Jan; Mussenbrock, Thomas
2016-09-01
Plasmas are commonly used in sputtering applications for the deposition of thin films. Although magnetron sources are a prominent choice, capacitively coupled plasmas have certain advantages (e.g., sputtering of non-conducting and/or ferromagnetic materials, aside from excellent control of the ion energy distribution). In order to understand the collective plasma and sputtering dynamics, a kinetic simulation model is helpful. Particle-in-Cell has been proven successful in simulating the plasma dynamics, while the Test-Multi-Particle-Method can be used to describe the sputtered neutral species. In this talk a consistent combination of these methods is presented, coupling the simulated ion flux as input to a neutral particle transport model. The combined model is used to simulate and discuss the spatially dependent densities, fluxes and velocity distributions of all particles. This work is supported by the German Research Foundation (DFG) in the frame of Transregional Collaborative Research Center (SFB) TR-87.
Plasma Stopping Power Measurements Relevant to Inertial Confinement Fusion
NASA Astrophysics Data System (ADS)
McEvoy, Aaron; Herrmann, Hans; Kim, Yongho; Hoffman, Nelson; Schmitt, Mark; Rubery, Michael; Garbett, Warren; Horsfield, Colin; Gales, Steve; Zylstra, Alex; Gatu Johnson, Maria; Frenje, Johan; Petrasso, Richard; Marshall, Frederic; Batha, Steve
2015-11-01
Ignition in inertial confinement fusion (ICF) experiments may be achieved if the alpha particle energy deposition results in a thermonuclear burn wave induced in the dense DT fuel layer surrounding the hotspot. As such, understanding the physics of particle energy loss in a plasma is of critical importance to designing ICF experiments. Experiments have validated various stopping power models under select n_e and T_e conditions; however, there remain unexplored regimes where models predict differing rates of energy deposition. An upcoming experiment at the Omega laser facility will explore charged particle stopping in CH plastic capsule ablators across a range of plasma conditions (n_e between 10^24 cm^-3 and 10^25 cm^-3 and T_e on the order of hundreds of eV). Plasma conditions will be measured using x-ray and gamma ray diagnostics, while plasma stopping power will be measured using charged particle energy loss measurements. Details on the experiment and the theoretical models to be tested will be presented.
Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation
Daniel Buscombe,; Rubin, David M.
2012-01-01
1. In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
NASA Technical Reports Server (NTRS)
Strutzenberg, Louise L.; Putman, Gabriel C.
2011-01-01
The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Building on dry simulations of the ASMAT tests with the vehicle at 5 ft. elevation (100 ft. real vehicle elevation), wet simulations of the ASMAT test setup have been performed using the Loci/CHEM computational fluid dynamics software to explore the effect of rainbird water suppression inclusion on the launch platform deck. Two-phase water simulation has been performed using an energy and mass coupled lagrangian particle system module where liquid phase emissions are segregated into clouds of virtual particles and gas phase mass transfer is accomplished through simple Weber number controlled breakup and boiling models. Comparisons have been performed to the dry 5 ft. elevation cases, using configurations with and without launch mounts. These cases have been used to explore the interaction between rainbird spray patterns and launch mount geometry and evaluate the acoustic sound pressure level knockdown achieved through above-deck rainbird deluge inclusion. This comparison has been anchored with validation from live-fire test data which showed a reduction in rainbird effectiveness with the presence of a launch mount.
NMR relaxation induced by iron oxide particles: testing theoretical models.
Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L
2016-04-15
Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast, but also because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, was measured and compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model), with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, which constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.
Particle-induced osteolysis in three-dimensional micro-computed tomography.
Wedemeyer, Christian; Xu, Jie; Neuerburg, Carl; Landgraeber, Stefan; Malyar, Nasser M; von Knoch, Fabian; Gosheger, Georg; von Knoch, Marius; Löer, Franz; Saxler, Guido
2007-11-01
Small-animal models are useful for the in vivo study of particle-induced osteolysis, the most frequent cause of aseptic loosening after total joint replacement. Microstructural changes associated with particle-induced osteolysis have been extensively explored using two-dimensional (2D) techniques. However, relatively little is known regarding the 3D dynamic microstructure of particle-induced osteolysis. Therefore, we tested micro-computed tomography (micro-CT) as a novel tool for 3D analysis of wear debris-mediated osteolysis in a small-animal model of particle-induced osteolysis. The murine calvarial model based on polyethylene particles was utilized in 14 C57BL/J6 mice randomly divided into two groups. Group 1 received sham surgery, and group 2 was treated with polyethylene particles. We performed 3D micro-CT analysis and histological assessment. Various bone morphometric parameters were assessed. Regression was used to examine the relation between the results achieved by the two methods. Micro-CT analysis provides a fully automated means to quantify bone destruction in a mouse model of particle-induced osteolysis. This method revealed that the osteolytic lesions in calvaria in the experimental group were affected irregularly compared to the rather even distribution of osteolysis in the control group. This is an observation which would have been missed if histomorphometric analysis only had been performed, leading to false assessment of the actual situation. These irregularities seen by micro-CT analysis provide new insight into individual bone changes which might otherwise be overlooked by histological analysis and can be used as baseline information on which future studies can be designed.
NASA Astrophysics Data System (ADS)
Mansouri, Amir
The surface degradation of equipment due to consecutive impacts of abrasive particles carried by fluid flow is called solid particle erosion. Solid particle erosion occurs in many industries including oil and gas. In order to prevent abrupt failures and costly repairs, it is essential to predict the erosion rate and identify the locations of the equipment that are mostly at risk. Computational Fluid Dynamics (CFD) is a powerful tool for predicting the erosion rate. Erosion prediction using CFD analysis includes three steps: (1) obtaining flow solution, (2) particle tracking and calculating the particle impact speed and angle, and (3) relating the particle impact information to mass loss of material through an erosion equation. Erosion equations are commonly generated using dry impingement jet tests (sand-air), since the particle impact speed and angle are assumed not to deviate from conditions in the jet. However, in slurry flows, a wide range of particle impact speeds and angles are produced in a single slurry jet test with liquid and sand particles. In this study, a novel and combined CFD/experimental method for developing an erosion equation in slurry flows is presented. In this method, a CFD analysis is used to characterize the particle impact speed, angle, and impact rate at specific locations on the test sample. Then, the particle impact data are related to the measured erosion depth to achieve an erosion equation from submerged testing. Traditionally, it was assumed that the erosion equation developed based on gas testing can be used for both gas-sand and liquid-sand flows. The erosion equations developed in this work were implemented in a CFD code, and CFD predictions were validated for various test conditions. It was shown that the erosion equation developed based on slurry tests can significantly improve the local thickness loss prediction in slurry flows. Finally, a generalized erosion equation is proposed which can be used to predict the erosion rate in gas-sand, water-sand and viscous liquid-sand flows with high accuracy. Furthermore, in order to gain a better understanding of the erosion mechanism, a comprehensive experimental study was conducted to investigate the important factors influencing the erosion rate in gas-sand and slurry flows. The wear pattern and total erosion ratio were measured in a direct impingement jet geometry (for both dry impact and submerged impingement jets). The effects of fluid viscosity, abrasive particle size, particle impact speed, jet inclination angle, standoff distance, sand concentration, and exposure time were investigated. Also, the eroded samples were studied with Scanning Electron Microscopy (SEM) to understand the erosion micro-structure. Also, the sand particle impact speed and angle were measured using a Particle Image Velocimetry (PIV) system. The measurements were conducted in two types of erosion testers (gas-solid and liquid-solid impinging jets). The Particle Tracking Velocimetry (PTV) technique was utilized which is capable of tracking individual small particles. Moreover, CFD modeling was performed to predict the particle impact data. Very good agreement between the CFD results and PTV measurements was observed.
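The third step of the workflow, the erosion equation relating impact speed and angle to mass loss, typically takes a form like E = K·F(theta)·V^n. The sketch below uses generic placeholder constants and an Oka-style angle function purely for illustration; they are not the coefficients developed in this work.

```python
# Hedged sketch: generic shape of an erosion equation mapping particle impact
# speed and angle to a mass-loss ratio. Constants and angle function are
# placeholders, not the fitted slurry/gas coefficients of the thesis.
import numpy as np

def erosion_ratio(v_impact, theta_rad, K=2.0e-8, n=2.4, a=0.6, b=1.2):
    """Mass of target removed per mass of impacting sand (kg/kg)."""
    angle_fn = (np.sin(theta_rad) ** a) * (2.0 - np.sin(theta_rad)) ** b
    return K * angle_fn * v_impact ** n

# Example: per-impact contribution summed over a set of tracked particle impacts.
impacts_v = np.array([12.0, 18.0, 25.0])          # m/s, from particle tracking
impacts_theta = np.radians([20.0, 45.0, 80.0])    # impact angles
sand_mass_per_impact = 2.0e-7                     # kg, hypothetical
total_loss = np.sum(erosion_ratio(impacts_v, impacts_theta) * sand_mass_per_impact)
print(f"predicted mass loss: {total_loss:.3e} kg")
```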
Kawanaka, Youhei; Matsumoto, Emiko; Sakamoto, Kazuhiko; Yun, Sun-Ja
2011-02-15
The present study was performed to estimate the contributions of fine and ultrafine particles to the lung deposition of particle-bound mutagens in the atmosphere. This is the first estimation of the respiratory deposition of atmospheric particle-bound mutagens. Direct and S9-mediated mutagenicity of size-fractionated particulate matter (PM) collected at roadside and suburban sites was determined by the Ames test using Salmonella typhimurium strain TA98. Regional deposition efficiencies in the human respiratory tract of direct and S9-mediated mutagens in each size fraction were calculated using the LUDEP computer-based model. The model calculations showed that about 95% of the lung deposition of inhaled mutagens is caused by fine particles for both roadside and suburban atmospheres. Importantly, ultrafine particles were shown to contribute to the deposition of mutagens in the alveolar region of the lung by as much as 29% (+S9) and 26% (-S9) for the roadside atmosphere and 11% (+S9) and 13% (-S9) for the suburban atmosphere, although ultrafine particles contribute very little to the PM mass concentration. These results indicated that ultrafine particles play an important role as carriers of mutagens into the lung.
NASA Astrophysics Data System (ADS)
Richard, R. L.; El-Alaoui, M.; Ashour-Abdalla, M.; Walker, R. J.
2009-04-01
We have modeled the entry of solar energetic particles (SEPs) into the magnetosphere during the November 24-25, 2001 magnetic storm and the trapping of particles in the inner magnetosphere. The study used the technique of following many test particles, protons with energies greater than about 100 keV, in the electric and magnetic fields from a global magnetohydrodynamic (MHD) simulation of the magnetosphere during this storm. SEP protons formed a quasi-trapped and trapped population near and within geosynchronous orbit. Preliminary data comparisons show that the simulation does a reasonably good job of predicting the differential flux measured by geosynchronous spacecraft. Particle trapping took place mainly as a result of particles becoming non-adiabatic and crossing onto closed field lines. Particle flux in the inner magnetosphere increased dramatically as an interplanetary shock impacted and compressed the magnetosphere near 0600 UT, but long term trapping (hours) did not become widespread until about an hour later, during a further compression of the magnetosphere. Trapped and quasi-trapped particles were lost during the simulation by motion through the magnetopause and by precipitation, primarily the former. This caused the particle population near and within geosynchronous orbit to gradually decrease during the latter part of the interval.
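Test-particle tracing of this kind ultimately reduces to integrating the Lorentz force in the prescribed fields; a standard Boris push in uniform fields is sketched below as a minimal illustration (the study instead interpolates time-dependent MHD fields along the orbit).

```python
# Hedged sketch: a standard Boris push for a proton test particle in prescribed,
# here uniform, E and B fields. Field values and energy are illustrative.
import numpy as np

Q, M = 1.602e-19, 1.673e-27      # proton charge (C) and mass (kg)

def boris_step(x, v, E, B, dt):
    """Advance position and velocity by one step of the Boris algorithm."""
    v_minus = v + (Q * dt / (2 * M)) * E
    t = (Q * dt / (2 * M)) * B
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + (Q * dt / (2 * M)) * E
    return x + v_new * dt, v_new

# 100 keV proton gyrating in a 100 nT field with a weak convection E field.
E = np.array([0.0, 1e-3, 0.0])            # V/m
B = np.array([0.0, 0.0, 100e-9])          # T
v = np.array([np.sqrt(2 * 100e3 * Q / M), 0.0, 0.0])
x = np.zeros(3)
dt = 0.05 * 2 * np.pi * M / (Q * np.linalg.norm(B))   # ~1/20 of a gyroperiod
for _ in range(1000):
    x, v = boris_step(x, v, E, B, dt)
print(x, np.linalg.norm(v))
```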
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blau, Peter Julian; Jolly, Brian C
2009-01-01
The objective of this work was to support the development of grinding models for titanium metal-matrix composites (MMCs) by investigating possible relationships between their indentation hardness, low-stress belt abrasion, high-stress belt abrasion, and surface grinding characteristics. Three Ti-based particulate composites were tested and compared with the popular titanium alloy Ti-6Al-4V. The three composites were a Ti-6Al-4V-based MMC with 5% TiB2 particles, a Ti-6Al-4V MMC with 10% TiC particles, and a Ti-6Al-4V/Ti-7.5%W binary alloy matrix that contained 7.5% TiC particles. Two types of belt abrasion tests were used: (a) a modified ASTM G164 low-stress loop abrasion test, and (b) a higher-stress test developed to quantify the grindability of ceramics. Results were correlated with G-ratios (ratio of stock removed to abrasives consumed) obtained from an instrumented surface grinder. Brinell hardness correlated better with abrasion characteristics than microindentation or scratch hardness. Wear volumes from low-stress and high-stress abrasive belt tests were related by a second-degree polynomial. Grindability numbers correlated with hard particle content but were also matrix-dependent.
Test-particle dynamics in general spherically symmetric black hole spacetimes
NASA Astrophysics Data System (ADS)
De Laurentis, Mariafelicia; Younsi, Ziri; Porth, Oliver; Mizuno, Yosuke; Rezzolla, Luciano
2018-05-01
To date, the most precise tests of general relativity have been achieved through pulsar timing, albeit in the weak-field regime. Since pulsars are some of the most precise and stable "clocks" in the Universe, present observational efforts are focused on detecting pulsars in the vicinity of supermassive black holes (most notably in the Galactic Centre), enabling pulsar timing to be used as an extremely precise probe of strong-field gravity. In this paper, a mathematical framework to describe test-particle dynamics in general black-hole spacetimes is presented and subsequently used to study a binary system comprising a pulsar orbiting a black hole. In particular, taking into account the parameterization of a general spherically symmetric black-hole metric, general analytic expressions for both the advance of the periastron and for the orbital period of a massive test particle are derived. Furthermore, these expressions are applied to four representative cases of solutions arising in both general relativity and in alternative theories of gravity. Finally, this framework is applied to the Galactic Centre S-stars and four distinct pulsar toy models. It is shown that by adopting a fully general-relativistic description of test-particle motion which is independent of any particular theory of gravity, observations of pulsars can help impose better constraints on alternative theories of gravity than is presently possible.
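For orientation, the general expressions derived in the paper must reduce, in the Schwarzschild limit, to the familiar periastron-advance formula; the LaTeX fragment below records that special case only and is not the paper's general parameterized result.

```latex
% Hedged illustration: Schwarzschild limit of the per-orbit periastron advance of a
% massive test particle on an orbit of semi-major axis a and eccentricity e.
\Delta\phi \simeq \frac{6\pi G M}{c^{2}\, a\,(1-e^{2})}
```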
Vertically reciprocating auger
NASA Technical Reports Server (NTRS)
Etheridge, Mark; Morgan, Scott; Fain, Robert; Pearson, Jonathan; Weldi, Kevin; Woodrough, Stephen B., Jr.
1988-01-01
The mathematical model and test results developed for the Vertically Reciprocating Auger (VRA) are summarized. The VRA is a device capable of transporting cuttings that result from below surface drilling. It was developed chiefly for the lunar surface, where conventional fluid flushing while drilling would not be practical. The VRA uses only reciprocating motion and transports material through reflections with the surface above. Particles are reflected forward and land ahead of radially placed fences, which prevent the particles from rolling back down the auger. Three input wave forms are considered to drive the auger. A modified sawtooth wave form was chosen for testing, over a modified square wave or sine wave, due to its simplicity and effectiveness. The three-dimensional mathematical model predicted a sand throughput rate of 0.2667 pounds/stroke, while the actual test setup transported 0.075 pounds/stroke. Based on this result, a correction factor of 0.281 is suggested for a modified sawtooth input.
Micromagnetic Modeling: a Tool for Studying Remanence in Magnetite
NASA Astrophysics Data System (ADS)
ter Maat, G. W.; Fabian, K.; Church, N. S.; McEnroe, S. A.
2017-12-01
Micromagnetic modeling is a useful tool in understanding magnetic particle behavior. The domain state of, and interaction between, particles is influenced by their shape, size and spacing. Rocks contain a collection of grains with varying geometries. This study presents models of true geometries obtained by dual-beam focused ion beam scanning electron microscopy (FIB-SEM). Using focused ion beam nanotomography (FIB-nT) the shape and size of individual grains and their spacing are accurately determined. The particle assemblages discussed here are basalts from the Stardalur volcano in Iceland. The main carrier of the magnetization is oxy-exsolved magnetite which contains extensive microstructures from the micron to nanometer scale. The complex morphologies vary in shape from spherical to elongated to sheet-like shapes with SD to PSD domain states. We investigate large oxy-exsolved magnetite grains as well as smaller oxy-exsolved dendritic grains. The obtained 3D volumes are modeled using the finite element micromagnetics software MERRILL to calculate magnetization structures. By modeling a full hysteresis loop we can observe the complete switching process and visualize the mechanism of the reversal of the magnetization. Micromagnetic simulation of hysteresis loops of grains with varying geometry and spacing shows the magnetization state of, and magnetostatic interaction between, different grains. From the simulations the remanence state of the modeled reconstructed geometry is obtained. Modeling the behavior of separate individual grains is compared with modeling assemblages of grains with varying spacing to study the effect of interaction. The use of realistic geometries of oxy-exsolved magnetite in micromagnetic models allows the examination of the influence of shape, size and spacing on the magnetic properties of single particles, and of the magnetostatic interactions between them. These parameters are varied and tested to determine whether there is an increase in remanence-carrying capacity. Modeling realistic representations of the widespread microstructures allows us to test the proposed enhancement of remanence and to identify more stable paleomagnetic recorders.
A cloud/particle model of the interstellar medium - Galactic spiral structure
NASA Technical Reports Server (NTRS)
Levinson, F. H.; Roberts, W. W., Jr.
1981-01-01
A cloud/particle model for gas flow in galaxies is developed that incorporates cloud-cloud collisions and supernovae as dominant local processes. Cloud-cloud collisions are the main means of dissipation. To counter this dissipation and maintain local dispersion, supernova explosions in the medium administer radial snowplow pushes to all nearby clouds. The causal link between these processes is that cloud-cloud collisions will form stars and that these stars will rapidly become supernovae. The cloud/particle model is tested and used to investigate the gas dynamics and spiral structures in galaxies where these assumptions may be reasonable. Particular attention is given to whether large-scale galactic shock waves, which are thought to underlie the regular well-delineated spiral structure in some galaxies, form and persist in a cloud-supernova dominated interstellar medium; this question is answered in the affirmative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Gregory S; Braun, Robert J; Ma, Zhiwen
This project was motivated by the potential of reducible perovskite oxides for high-temperature, thermochemical energy storage (TCES) to provide dispatchable renewable heat for concentrating solar power (CSP) plants. This project sought to identify and characterize perovskites from earth-abundant cations with high reducibility below 1000 °C for coupling TCES of solar energy to super-critical CO2 (s-CO2) plants that operate above temperature limits (< 600 °C) of current molten-salt storage. Specific TCES > 750 kJ/kg for storage cycles between 500 and 900 °C was targeted with a system cost goal of $15/kWhth. To realize feasibility of TCES systems based on reducible perovskites, our team focused on designing and testing a lab-scale concentrating solar receiver, wherein perovskite particles capture solar energy by fast O2 release and sensible heating at a thermal efficiency of 90% and wall temperatures below 1100 °C. System-level models of the receiver and reoxidation reactor coupled to validated thermochemical materials models can assess approaches to scale-up a full TCES system based on reduction/oxidation cycles of perovskite oxides at large scales. After characterizing many Ca-based perovskites for TCES, our team identified strontium-doped calcium manganite Ca1-xSrxMnO3-δ (with x ≤ 0.1) as a composition with adequate stability and specific TCES capacity (> 750 kJ/kg for Ca0.95Sr0.05MnO3-δ) for cycling between air at 500 °C and low-PO2 (10^-4 bar) N2 at 900 °C. Substantial kinetic tests demonstrated that residence times of several minutes in low-PO2 gas were needed for these materials to reach the specific TCES goals with particles of reasonable size for large-scale transport (diameter dp > 200 μm). On the other hand, fast reoxidation kinetics in air enables subsequent rapid heat release in a fluidized reoxidation reactor/heat recovery unit for driving s-CO2 power plants. Validated material thermochemistry coupled to radiation and convective particle-gas transport models facilitated full TCES system analysis for CSP, and results showed that receiver efficiencies approaching 85% were feasible with wall-to-particle heat transfer coefficients observed in laboratory experiments. Coupling these reactive particle-gas transport models to external SolTrace and CFD models drove design of a reactive-particle receiver with indirect heating through flux spreading. A lab-scale receiver using Ca0.9Sr0.1MnO3-δ was demonstrated at NREL's High Flux Solar Furnace with particle temperatures reaching 900 °C while wall temperatures remained below 1100 °C, achieving approximately 200 kJ/kg of chemical energy storage. These first demonstrations of on-sun perovskite reduction and the robust modeling tools from this program provide a basis for going forward with improved receiver designs to increase heat fluxes and solar-energy capture efficiencies. Measurements and modeling tools from this project provide the foundations for advancing TCES for CSP and other applications using reducible perovskite oxides from low-cost, earth-abundant elements. A perovskite composition has been identified that has the thermodynamic potential to meet the targeted TCES capacity of 750 kJ/kg over a range of temperatures amenable for integration with s-CO2 cycles. Further research needs to explore ways of accelerating effective particle kinetics through variations in composition and/or reactor/receiver design.
Initial demonstrations of on-sun particle reduction for TCES show a need for testing at larger scales with reduced heat losses and improved particle-wall heat transfer. The gained insight into particle-gas transport and reactor design can launch future development of cost-effective, large-scale particle-based TCES as a technology for enabling increased renewable energy penetration.
A hydrodynamic model for granular material flows including segregation effects
NASA Astrophysics Data System (ADS)
Gilberg, Dominik; Klar, Axel; Steiner, Konrad
2017-06-01
The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times, a macroscopic model is a natural choice. Therefore, we couple a mixture theory based segregation model to a hydrodynamic model of Navier-Stokes-type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow, as well as slow dense flow, where the density of the granular material is close to the maximum packing density. The segregation model was originally formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, for example, in packing density. The modeling focuses on dry granular material flows of two particle types differing only in size, but it can easily be extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model, the rotational mixing of small and large particles in a tumbler is simulated.
NASA Technical Reports Server (NTRS)
Colver, Gerald M.; Greene, Nathanael; Shoemaker, David; Xu, Hua
2003-01-01
The Electric Particulate Suspension (EPS) is a combustion ignition system being developed at Iowa State University for evaluating quenching effects of powders in microgravity (quenching distance, ignition energy, flammability limits). Because of the high cloud uniformity possible and its simplicity, the EPS method has potential for "benchmark" design of quenching flames that would provide NASA and the scientific community with a new fire standard. Microgravity is expected to increase suspension uniformity even further and extend combustion testing to higher concentrations (rich fuel limit) than is possible at normal gravity. Two new combustion parameters are being investigated with this new method: (1) the particle velocity distribution and (2) particle-oxidant slip velocity. Both walls and (inert) particles can be tested as quenching media. The EPS method supports combustion modeling by providing accurate measurement of flame-quenching distance as a parameter in laminar flame theory as it closely relates to characteristic flame thickness and flame structure. Because of its design simplicity, EPS is suitable for testing on the International Space Station (ISS). Laser scans showing stratification effects at 1-g have been studied for different materials: aluminum, glass, and copper. PTV/PIV and a leak hole sampling rig give the particle velocity distribution, with particle slip velocity evaluated using LDA. Sample quenching and ignition energy curves are given for aluminum powder. Testing is planned for the KC-135 and NASA's two-second drop tower. Only 1-g ground-based data have been reported to date.
1975-10-01
[Figure-list and text fragments from a 1975 report on ablated nose shapes: Figure 29, "Variation of Profile Shape with Time for Axisymmetric Camphor Models"; Figure 30, "The Development of Ablated Nose Shapes Over Which Flow ...". Surviving text notes that ablated nose shapes were obtained in ablation tests using camphor models and inferred from downrange observation of full-scale flight missions, and that regions of gross instability on the nose have been verified in wind tunnel tests of camphor models, where shapes similar to those shown in Figure 29 can develop under transitional conditions.]
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
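The Lorenz 96 system used as the test bed above is compact enough to sketch directly; the snippet below integrates the standard 40-variable model with forcing F = 8 (the blended particle filter itself is not reproduced).

```python
# Hedged sketch: the standard 40-variable Lorenz-96 forecast model, the test bed
# cited above; F = 8 is the usual chaotic choice.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96(t, x, F=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

n = 40
x0 = 8.0 * np.ones(n)
x0[0] += 0.01                      # small perturbation to trigger chaos
sol = solve_ivp(lorenz96, (0.0, 10.0), x0, max_step=0.01)
print(sol.y.shape, sol.y[:, -1][:5])
```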
Extension of the XGC code for global gyrokinetic simulations in stellarator geometry
NASA Astrophysics Data System (ADS)
Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock
2017-10-01
In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.
Particle Acceleration at a Twin CME at 1 AU
NASA Astrophysics Data System (ADS)
Parker, L. N.; Li, G.
2017-12-01
We present results from both the Particle Acceleration and Transport in the Heliosphere (PATH) and Particle Acceleration at Multiple Shocks (PAMS) models for a twin CME scenario. The PATH model follows a CME using a numerical MHD module and solves the Parker transport equation at the shock yielding the accelerated particle spectrum, while PAMS solves the steady-state cosmic ray transport equation at an individual shock analytically to yield the diffusive shock acceleration (DSA) spectrum. We address the injection of an upstream particle distribution into the acceleration process for a two shock system at 1 AU. Only those particles that exceed a theoretically motivated prescribed injection energy, Einj, and up to a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) are injected. Results from PAMS are then compared to observations at 1 AU from the Advanced Composition Explorer (ACE) spacecraft. In addition, we test the concept of electron acceleration at low injection energies for a single and multiple shock system using the same method as in Neergaard Parker and Zank, 2012 and Neergaard Parker et al., 2014.
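As background, the single-shock building block that PAMS generalises is the steady-state diffusive shock acceleration power law set by the shock compression ratio; the fragment below records that standard result (not the paper's multi-shock or transport calculations).

```latex
% Hedged illustration: steady-state DSA phase-space spectrum at a single shock of
% compression ratio r, for particles injected above E_inj up to E_max.
f(p) \;\propto\; p^{-q}, \qquad q = \frac{3r}{r-1}
```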
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
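For contrast with the deterministic scheme, the stochastic random-walk baseline mentioned above can be sketched in a few lines; values are illustrative and the NEST separation-regulation step itself is not reproduced here.

```python
# Hedged sketch: the classical random-walk solution of 1-D advection-diffusion,
# the stochastic baseline that NEST is compared against. Values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
u, D, dt, n_steps, n_particles = 0.2, 0.5, 10.0, 360, 20000   # m/s, m^2/s, s

x = np.zeros(n_particles)                      # instantaneous point release at x = 0
for _ in range(n_steps):
    x += u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
print("mean (expected u*t):", x.mean(), u * t)
print("std  (expected sqrt(2*D*t)):", x.std(), np.sqrt(2.0 * D * t))
```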
Towards a bulk approach to local interactions of hydrometeors
NASA Astrophysics Data System (ADS)
Baumgartner, Manuel; Spichtinger, Peter
2018-02-01
The growth of small cloud droplets and ice crystals is dominated by the diffusion of water vapor. Usually, Maxwell's approach to growth for isolated particles is used in describing this process. However, recent investigations show that local interactions between particles can change diffusion properties of cloud particles. In this study we develop an approach for including these local interactions into a bulk model approach. For this purpose, a simplified framework of local interaction is proposed and governing equations are derived from this setup. The new model is tested against direct simulations and incorporated into a parcel model framework. Using the parcel model, possible implications of the new model approach for clouds are investigated. The results indicate that for specific scenarios the lifetime of cloud droplets in subsaturated air may be longer (e.g., for an initially water supersaturated air parcel within a downdraft). These effects might have an impact on mixed-phase clouds, for example in terms of riming efficiencies.
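The classical single-particle starting point that the local-interaction approach modifies is Maxwell's diffusional growth law, recorded below for reference; symbol names follow common usage rather than the paper's notation.

```latex
% Hedged illustration: Maxwell's diffusional growth law for an isolated droplet of
% radius r; D_v is the vapour diffusivity and \rho_{v,\infty}, \rho_{v,r} are the
% ambient and droplet-surface vapour densities.
\frac{dm}{dt} \;=\; 4\pi r\, D_v \left(\rho_{v,\infty} - \rho_{v,r}\right)
```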
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
NASA Astrophysics Data System (ADS)
Stöckl, Stefan; Rotach, Mathias W.; Kljun, Natascha
2018-01-01
We discuss the results of Gibson and Sailor (Boundary-Layer Meteorol 145:399-406, 2012), who suggest several corrections to the mathematical formulation of the Lagrangian particle dispersion model of Rotach et al. (Q J R Meteorol Soc 122:367-389, 1996). While most of the suggested corrections had already been implemented in the 1990s, one suggested correction raises a valid point but results in a violation of the well-mixed criterion. Here we improve their idea and test the impact on model results using a well-mixed test and a comparison with wind-tunnel experimental data. The new approach results in dispersion patterns similar to those of the original approach, while the approach suggested by Gibson and Sailor leads to erroneously reduced concentrations near the ground in convective, and especially forced convective, conditions.
NASA Astrophysics Data System (ADS)
Sorathia, K.; Ukhorskiy, A. Y.; Merkin, V. G.; Wiltberger, M. J.; Lyon, J.; Claudepierre, S. G.; Fennell, J. F.
2017-12-01
During geomagnetic storms the intensities of radiation belt electrons exhibit dramatic variability. In the main phase, electron intensities exhibit deep depletion over a broad region of the outer belt. The intensities then increase during the recovery phase, often to levels that significantly exceed their pre-storm values. In this study we analyze the depletion, recovery and enhancement of radiation belt intensities during the 2013 St. Patrick's Day geomagnetic storm. We simulate the dynamics of high-energy electrons using our newly developed test-particle radiation belt model (CHIMP), based on a hybrid guiding-center/Lorentz integrator and electromagnetic fields derived from high-resolution global MHD (LFM) simulations. Our approach differs from previous work in that we use MHD flow information to identify and seed test particles into regions of strong convection in the magnetotail. We address two science questions: 1) what are the relative roles of magnetopause losses, transport-driven atmospheric precipitation, and adiabatic cooling in the radiation belt depletion during the storm main phase? and 2) to what extent can enhanced convection/mesoscale injections account for the radiation belt buildup during the recovery phase? Our analysis is based on a long-term model simulation and the comparison of our model results with electron intensity measurements from the MAGEIS experiment of the Van Allen Probes mission.
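The full-Lorentz half of a hybrid guiding-center/Lorentz test-particle integrator is commonly a Boris-type push. The sketch below shows a standard relativistic Boris step, not the CHIMP implementation itself; the uniform fields and all numerical values are placeholder assumptions rather than LFM output.

```python
# Relativistic Boris push for a single test electron in prescribed E and B
# fields, a standard Lorentz integrator of the kind used in test-particle
# radiation belt codes (illustrative fields and parameters).
import numpy as np

Q, M, C = -1.602176634e-19, 9.1093837015e-31, 2.99792458e8  # electron charge, mass, c

def boris_push(x, u, E, B, dt):
    """Advance position x and u = gamma*v by one step dt (SI units)."""
    qmdt2 = Q * dt / (2.0 * M)
    u_minus = u + qmdt2 * E(x)                              # half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = qmdt2 * B(x) / gamma                                # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)
    u_new = u_plus + qmdt2 * E(x)                           # second half electric kick
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / C**2)
    return x + u_new / gamma_new * dt, u_new

# Example: 1 MeV electron gyrating in a uniform 100 nT field (assumed values)
E = lambda x: np.zeros(3)
B = lambda x: np.array([0.0, 0.0, 100e-9])
gamma0 = 1.0 + 1.0e6 * 1.602176634e-19 / (M * C**2)
u = np.array([np.sqrt(gamma0**2 - 1.0) * C, 0.0, 0.0])
x = np.zeros(3)
for _ in range(1000):
    x, u = boris_push(x, u, E, B, dt=1.0e-5)
```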
NASA Astrophysics Data System (ADS)
OBrien, L. E.; Gemer, A.; Gruen, E.; Collette, A.; Horanyi, M.; Moebius, E.; Auer, S.; Juhasz, A.; Srama, R.; Sternovsky, Z.
2012-12-01
We report the development of the Nano-Dust Analyzer (NDA) instrument and the results from the first laboratory testing and calibration. The two STEREO spacecraft have indicated that nano-sized dust particles, potentially with very high flux, are delivered to 1 AU from the inner solar system [Meyer-Vernet, N. et al., Solar Physics, 256, 463, 2009]. These particles are generated by collisional grinding or evaporation near the Sun and subsequently accelerated outward by the solar wind. The temporal variability and directionality are governed by conditions in the inner heliosphere, and the mass analysis of the particles reveals the chemical differentiation of solid matter near the Sun. NDA is a highly sensitive dust analyzer developed under NASA's Heliophysics program. NDA is a linear time-of-flight mass analyzer modeled after the Cosmic Dust Analyzer (CDA) on Cassini and the more recent Lunar Dust EXperiment (LDEX) for the upcoming LADEE mission to the Moon. The ion optics of the instrument is optimized through numerical modeling. By applying technologies implemented in solar wind instruments and coronagraphs, the highly sensitive dust analyzer can be pointed towards the solar direction. A laboratory prototype was built, tested, and calibrated at the dust accelerator facility at the University of Colorado, Boulder, using particles with velocities from 1 to over 50 km/s.
Lee, Kangtaek; Choi, Heon-Sik; Kim, Ju-Young; Ahn, Ik-Sung
2003-12-12
Sorption of micelle-like amphiphilic polyurethane (APU) particles to soil was studied and compared to that of a model anionic surfactant, sodium dodecyl sulfate (SDS). Three types of APU particles with different hydrophobicity were synthesized from urethane acrylate anionomers (UAA) and used in this study. Due to their chemically cross-linked structure, the APU particles exhibited less sorption to the soil than SDS, and a greater reduction in the sorption of phenanthrene, a model soil contaminant, to the soil was observed in the presence of APU than of SDS, even though the solubility of phenanthrene was higher in the presence of SDS than APU. A mathematical model was developed to describe the phenanthrene distribution between soil and an aqueous phase containing APU particles. The sorption of phenanthrene to the test soil was well described by a linear isotherm. APU sorption to the soil was successfully described by Langmuir and Freundlich isotherms. The partition of phenanthrene between water and APU was successfully explained with a single partition coefficient. The model, which accounts for the limited solubilization of phenanthrene in sorbed APU particles, successfully described the experimental data for the distribution of phenanthrene between the soil and the aqueous phase in the presence of APU.
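For reference, the three isotherm forms invoked above take the following shapes; the parameter values in this sketch are purely illustrative and are not those fitted in the study.

```python
# Linear, Langmuir, and Freundlich sorption isotherms (q = sorbed amount,
# C = aqueous concentration). Parameter values are illustrative only.
import numpy as np

def linear(C, Kd):            # q = Kd * C
    return Kd * C

def langmuir(C, q_max, b):    # q = q_max * b*C / (1 + b*C)
    return q_max * b * C / (1.0 + b * C)

def freundlich(C, Kf, n):     # q = Kf * C**(1/n)
    return Kf * C ** (1.0 / n)

C = np.linspace(0.0, 1.0, 11)          # assumed aqueous concentration range (mg/L)
print(linear(C, Kd=5.0))
print(langmuir(C, q_max=20.0, b=3.0))
print(freundlich(C, Kf=8.0, n=2.0))
```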
NASA Astrophysics Data System (ADS)
Appel, J. K.; Köehler, J.; Guo, J.; Ehresmann, B.; Zeitlin, C.; Matthiä, D.; Lohf, H.; Wimmer-Schweingruber, R. F.; Hassler, D.; Brinza, D. E.; Böhm, E.; Böttcher, S.; Martin, C.; Burmeister, S.; Reitz, G.; Rafkin, S.; Posner, A.; Peterson, J.; Weigle, G.
2018-01-01
The Mars Science Laboratory rover Curiosity, operating on the surface of Mars, is exposed to radiation fluxes from above and below. Galactic Cosmic Rays travel through the Martian atmosphere, producing a modified spectrum consisting of both primary and secondary particles at ground level. These particles produce an upward directed secondary particle spectrum as they interact with the Martian soil. Here we develop a method to distinguish the upward and downward directed particle fluxes in the Radiation Assessment Detector (RAD) instrument, verify it using data taken during the cruise to Mars, and apply it to data taken on the Martian surface. We use a combination of Geant4 and Planetocosmics modeling to find discrimination criteria for the flux directions. After developing models of the cruise phase and surface shielding conditions, we compare model-predicted values for the ratio of upward to downward flux with those found in RAD observation data. Given the quality of available information on Mars Science Laboratory spacecraft and rover composition, we find generally reasonable agreement between our models and RAD observation data. This demonstrates the feasibility of the method developed and tested here. We additionally note that the method can also be used to extend the measurement range and capabilities of the RAD instrument to higher energies.
2010-01-01
Background The difficulty of directly measuring cellular dose is a significant obstacle to application of target tissue dosimetry for nanoparticle and microparticle toxicity assessment, particularly for in vitro systems. As a consequence, the target tissue paradigm for dosimetry and hazard assessment of nanoparticles has largely been ignored in favor of using metrics of exposure (e.g. μg particle/mL culture medium, particle surface area/mL, particle number/mL). We have developed a computational model of solution particokinetics (sedimentation, diffusion) and dosimetry for non-interacting spherical particles and their agglomerates in monolayer cell culture systems. Particle transport to cells is calculated by simultaneous solution of Stokes law (sedimentation) and the Stokes-Einstein equation (diffusion). Results The In vitro Sedimentation, Diffusion and Dosimetry model (ISDD) was tested against measured transport rates or cellular doses for multiple sizes of polystyrene spheres (20-1100 nm), 35 nm amorphous silica, and large agglomerates of 30 nm iron oxide particles. Overall, without adjusting any parameters, model-predicted cellular doses were in close agreement with the experimental data, differing by as little as 5% and at most three-fold, but in most cases by approximately two-fold, within the limits of the accuracy of the measurement systems. Applying the model, we generalize the effects of particle size, particle density, agglomeration state and agglomerate characteristics on target cell dosimetry in vitro. Conclusions Our results confirm our hypothesis that for liquid-based in vitro systems, the dose-rates and target cell doses for all particles are not equal; they can vary significantly, in direct contrast to the assumption of dose-equivalency implicit in the use of mass-based media concentrations as metrics of exposure for dose-response assessment. The difference between equivalent nominal media concentration exposures on a μg/mL basis and target cell doses on a particle surface area or number basis can be as high as three to six orders of magnitude. As a consequence, in vitro hazard assessments utilizing mass-based exposure metrics have inherently high errors where particle number or surface area doses to target cells are believed to drive response. The gold standard for particle dosimetry for in vitro nanotoxicology studies should be direct experimental measurement of the cellular content of the studied particle. However, where such measurements are impractical or unfeasible, and before such measurements become common, particle dosimetry models such as ISDD provide a valuable, immediately useful alternative, and eventually, an adjunct to such measurements. PMID:21118529
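The two transport mechanisms the model combines reduce, for an isolated sphere, to Stokes settling and Stokes-Einstein diffusion. The sketch below evaluates both for assumed medium properties and illustrative particle sizes; it is not the ISDD code, which additionally handles agglomerates and the full well geometry.

```python
# Stokes settling velocity and Stokes-Einstein diffusivity for spheres in
# culture medium, and the characteristic times to cross a 3 mm deep well.
# Medium properties and particle parameters are assumed, illustrative values.
import numpy as np

KB, T, G = 1.380649e-23, 310.15, 9.81    # Boltzmann constant, 37 C in K, gravity
MU, RHO_F = 7.0e-4, 1000.0               # medium viscosity (Pa s) and density (kg/m^3), assumed

def settling_velocity(d, rho_p):
    """Stokes settling velocity (m/s) of a sphere of diameter d and density rho_p."""
    return (rho_p - RHO_F) * G * d**2 / (18.0 * MU)

def diffusion_coefficient(d):
    """Stokes-Einstein diffusion coefficient (m^2/s)."""
    return KB * T / (3.0 * np.pi * MU * d)

h = 3.0e-3                               # assumed media depth (m)
for d_nm, rho in [(20, 1050.0), (1100, 1050.0)]:     # polystyrene-like spheres
    d = d_nm * 1e-9
    t_sed = h / settling_velocity(d, rho)
    t_dif = h**2 / (2.0 * diffusion_coefficient(d))
    print(f"{d_nm:5d} nm: sedimentation {t_sed/3600:.1e} h, diffusion {t_dif/3600:.1e} h")
```

Even this minimal calculation shows the point made in the abstract: small particles reach the cell layer mainly by diffusion, large particles or agglomerates mainly by sedimentation, so equal mass concentrations do not imply equal delivered doses.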
NASA Astrophysics Data System (ADS)
Hernandez, F.; Liang, X.
2017-12-01
Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method reliably produces adequate forecasts and outperforms those obtained by assimilating the observations with a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.
Particle bombardment-mediated gene transfer and transient GFP expression in Setaria viridis.
Mookkan, Muruganantham
2018-04-03
Setaria viridis is one of the most important model grasses for studying monocot plant biology. Transient gene expression is an important tool in plant biotechnology, functional genomics, and CRISPR-Cas9 genome editing via particle bombardment. In this study, a particle bombardment-mediated protocol was developed to introduce DNA into Setaria viridis in vitro leaf explants. In addition, physical and biological parameters, such as helium pressure, distance from stopping screen to the target tissues, DNA concentration, and number of bombardments, were tested and optimized. Optimal transient GFP expression was achieved using 1.5 μg plasmid DNA with 0.6 μm gold particles and a 6 cm bombardment distance at 1,100 psi. Bombarding twice gave the maximum number of foci of transient GFP expression. This simple protocol will be helpful for genomics studies in the monocot model S. viridis.
Rotational Brownian Dynamics simulations of clathrin cage formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede
2014-08-14
The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods, to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Naess and Elsgaeter, Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithm to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.
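To illustrate what a rotational Brownian Dynamics step involves, the sketch below draws a small random rotation about the particle's body axes with variances set by an anisotropic rotational diffusion tensor. This is a generic, free-diffusion illustration under those assumptions, not the Naess-Elsgaeter scheme tested in the paper, which includes additional terms and systematic torques; the diffusivities and time step are assumed values.

```python
# Minimal rotational Brownian Dynamics step for a rigid anisotropic particle:
# a random body-frame rotation with per-axis diffusivities (illustrative only).
import numpy as np
from scipy.spatial.transform import Rotation

def rbd_step(R, D_rot, dt, rng):
    """Advance the body-to-lab orientation R by one free-diffusion step.

    D_rot : (3,) rotational diffusivities about the body axes (rad^2/s), assumed.
    """
    dphi = np.sqrt(2.0 * D_rot * dt) * rng.standard_normal(3)  # body-frame angles
    return R * Rotation.from_rotvec(dphi)   # body-frame increment applied first

rng = np.random.default_rng(1)
R = Rotation.identity()
D_rot = np.array([1.0, 1.0, 0.2])           # slower diffusion about the symmetry axis (assumed)
for _ in range(10_000):
    R = rbd_step(R, D_rot, dt=1.0e-4, rng=rng)
print(R.as_matrix())                        # orientation after 1 s of free rotational diffusion
```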
Calibration and field application of a Sierra Model 235 cascade impactor.
Knuth, R H
1984-06-01
A Sierra Model 235 slotted impactor was used to measure the particle size distribution of ore dust in uranium concentrating mills. The impactor was calibrated at a flow rate of 0.21 m3/min, using solid monodisperse particles of methylene blue and an impaction surface of Whatman #41 filter paper soaked in mineral oil. The reduction from the impactor's design flow rate of 1.13 m3/min (40 cfm) to 0.21 m3/min (7.5 cfm), a necessary adjustment because of the anticipated large particle sizes of the ore dust, increased the stage cut-off diameters by an average factor of 2.3. Evaluation of field test results revealed that the underestimation of mass median diameters, often caused by the rebound and reentrainment of solid particles from dry impaction surfaces, was virtually eliminated by using the oiled Whatman #41 impaction surface.
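The reported factor of 2.3 is consistent with the usual impactor cut-point scaling: at a fixed 50% collection Stokes number and fixed geometry, and neglecting changes in the slip correction, the stage cut-off diameter varies roughly as the inverse square root of the volumetric flow rate. A quick check of the flow-rate change alone:

```python
# Approximate impactor cut-point scaling d50 ~ Q**-0.5 (fixed geometry and
# Stokes number, slip-correction changes neglected). Illustrative check only.
q_design, q_reduced = 1.13, 0.21       # flow rates, m^3/min
print((q_design / q_reduced) ** 0.5)   # ~ 2.3, matching the reported average factor
```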
Bartling, Soenke H; Budjan, Johannes; Aviv, Hagit; Haneder, Stefan; Kraenzlin, Bettina; Michaely, Henrik; Margel, Shlomo; Diehl, Steffen; Semmler, Wolfhard; Gretz, Norbert; Schönberg, Stefan O; Sadick, Maliha
2011-03-01
Embolization therapy is gaining importance in the treatment of malignant lesions, and even more in benign lesions. Current embolization materials are not visible in imaging modalities. However, it is assumed that directly visible embolization material may provide several advantages over current embolization agents, ranging from particle shunt and reflux prevention to improved therapy control and follow-up assessment. X-ray- as well as magnetic resonance imaging (MRI)-visible embolization materials have been demonstrated in experiments. In this study, we present an embolization material with the property of being visible in more than one imaging modality, namely MRI and x-ray/computed tomography (CT). Characterization and testing of the substance in animal models were performed. To reduce the chance of adverse reactions and to facilitate clinical approval, materials were applied that are similar to those already approved and routinely used in diagnostic imaging. Therefore, x-ray-visible iodine was combined with MRI-visible iron oxide (Fe3O4) in a macroparticle (diameter, 40-200 μm). Its core, consisting of the copolymerized monomer MAOETIB (2-methacryloyloxyethyl [2,3,5-triiodobenzoate]), was coated with ultra-small paramagnetic iron oxide nanoparticles (150 nm). After in vitro testing, including signal-to-noise measurements in CT and MRI (n = 5), its ability to embolize tissue was tested in an established tumor embolization model in rabbits (n = 6). Digital subtraction angiography (DSA) (Integris, Philips), CT (Definition, Siemens Healthcare Section, Forchheim, Germany), and MRI (3 Tesla Magnetom Tim Trio MRI, Siemens Healthcare Section, Forchheim, Germany) were performed before, during, and after embolization. Imaging signal changes that could be attributed to embolization particles were assessed by visual inspection and rated on an ordinal scale from 1 to 3 by 3 radiologists. Histologic analysis of organs was performed. Particles provided sufficient image contrast on DSA, CT (signal to noise [SNR], 13 ± 2.5), and MRI (SNR, 35 ± 1) in in vitro scans. Successful embolization of renal tissue was confirmed by catheter angiography, revealing at least partial perfusion stop in all kidneys. Signal changes that were attributed to particles residing within the kidney were found in all cases in all 3 imaging modalities. The localization and distribution of particles corresponded well across all imaging modalities. Dynamic imaging during embolization provided real-time monitoring of the inflow of embolization particles within DSA, CT, and MRI. Histologic visualization of the residing particles as well as associated thrombosis in renal arteries could be performed. Visual assessment of the likelihood of embolization particle presence received full rating scores (153/153) after embolization. Multimodal-visible embolization particles have been developed, characterized, and tested in vivo in an animal model. Their implementation in clinical radiology may allow optimization of embolization procedures with regard to prevention of particle misplacement and direct intraprocedural visualization, while improving follow-up examinations by utilizing the complementary characteristics of CT and MRI. Radiation dose savings can also be considered. All these advantages could contribute to future refinements and improvements in embolization therapy. Additionally, new approaches in embolization research may open up.
Valladares, Roberto D; Nich, Christophe; Zwingenberger, Stefan; Li, Chenguang; Swank, Katherine R; Gibon, Emmanuel; Rao, Allison J; Yao, Zhenyu; Goodman, Stuart B
2014-09-01
Aseptic loosening secondary to particle-associated periprosthetic osteolysis remains a major cause of failure of total joint replacements (TJR) in the mid- and long term. As sentinels of the innate immune system, macrophages are central to the recognition and initiation of the inflammatory cascade, which results in the activation of bone resorbing osteoclasts. Toll-like receptors (TLRs) are involved in the recognition of pathogen-associated molecular patterns and danger-associated molecular patterns. Experimentally, polymethylmethacrylate and polyethylene (PE) particles have been shown to activate macrophages via the TLR pathway. The specific TLRs involved in PE particle-induced osteolysis remain largely unknown. We hypothesized that TLR-2, -4, and -9 mediated responses play a critical role in the development of PE wear particle-induced osteolysis in the murine calvarium model. To test this hypothesis, we first demonstrated that PE particles caused observable osteolysis, visible by microCT and bone histomorphometry when the particles were applied to the calvarium of C57BL/6 mice. The number of TRAP positive osteoclasts was significantly greater in the PE-treated group when compared to the control group without particles. Finally, using immunohistochemistry, TLR-2 and TLR-4 were highly expressed in PE particle-induced osteolytic lesions, whereas TLR-9 was downregulated. TLR-2 and -4 may represent novel therapeutic targets for prevention of wear particle-induced osteolysis and accompanying TJR failure. © 2013 Wiley Periodicals, Inc.
Final Progress Report: Internship at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunham, Ryan Q.
2012-08-10
Originally I was tasked with fluidized bed modeling; however, I changed projects. While still working with ANSYS Fluent, I performed a study of particle tracks in glove boxes. This is useful from a health physics perspective, dealing with respirable particles that can be hazardous to the human body. I iteratively tested different amounts of turbulent particles in a steady-state flow. The goal of this testing was to discover how Fluent handles built-in Rosin-Rammler distributions for particle injections. I worked on the health physics flow problems and distribution analysis under the direction of two mentors, Bruce Letellier and Dave Decroix. I set up and ran particle injection calculations using Fluent. I tried different combinations of input parameters to produce sets of 500,000, 1 million, and 1.5 million particles to determine what a good test case would be for future experiments. I performed a variety of tasks in my work as an Undergraduate Student Intern at LANL this summer, and learned how to use a powerful CFD application in addition to expanding my skills in MATLAB. I enjoyed my work at LANL and hope to be able to use the experience here to further my career in the future, working in a security-conscious environment. My mentors provided guidance and help with all of my projects and I am grateful for the opportunity to work at Los Alamos National Laboratory.
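As background, a Rosin-Rammler size distribution of the kind mentioned above can be sampled by inverting its cumulative form F(d) = 1 - exp(-(d/d_bar)^n). The sketch below does this with assumed scale and spread parameters, not the values used in the internship study, and is independent of Fluent's own injection setup.

```python
# Inverse-CDF sampling of diameters from a Rosin-Rammler distribution,
# F(d) = 1 - exp(-(d/d_mean)**n). Parameters are illustrative assumptions.
import numpy as np

def sample_rosin_rammler(n_particles, d_mean, spread, rng):
    """Return diameters d = d_mean * (-ln(1-U))**(1/spread) for uniform U."""
    u = rng.random(n_particles)
    return d_mean * (-np.log1p(-u)) ** (1.0 / spread)

rng = np.random.default_rng(42)
d = sample_rosin_rammler(500_000, d_mean=50e-6, spread=3.5, rng=rng)   # 50 micron scale, assumed
print(d.mean(), np.percentile(d, [10, 50, 90]))
```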
Particle motions beneath irrotational water waves
NASA Astrophysics Data System (ADS)
Bakhoday-Paskyabi, Mostafa
2015-08-01
Neutral and buoyant particle motions in an irrotational flow are investigated under the passage of linear, nonlinear gravity, and weakly nonlinear solitary waves at a constant water depth. The developed numerical models for the particle trajectories in a non-turbulent flow incorporate particle momentum, size, and mass (i.e., inertial particles) under the influence of various surface waves such as Korteweg-de Vries waves, which admit a three-parameter family of periodic cnoidal wave solutions. We then formulate expressions for the mass-transport velocities of the neutral and buoyant particles. A series of test cases suggests that the inertial particles undergo combined horizontal and vertical drifts from the locations of their release, with a fall velocity that is a function of particle material properties, ambient flow, and wave parameters. The estimated solutions exhibit good agreement with previously reported particle behavior beneath progressive surface gravity waves. We further investigate the response of neutrally buoyant water parcel trajectories in a rotating fluid when subjected to a series of wind and wave events. The results confirm the importance of the wave-induced Coriolis-Stokes force in both amplifying (or destroying) the pre-existing inertial oscillations and in modulating the direction of the flow particles. Although this work has mainly focused on wave-current-particle interaction in the absence of turbulence stochastic forcing effects, exercising the suggested numerical models provides additional insights into the mechanisms of wave effects on passive trajectories for both living and nonliving particles, such as the swimming trajectories of plankton in non-turbulent flows.
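In the simplest limit of the problem above, a passive fluid-following parcel beneath a linear deep-water wave already exhibits a net drift: integrating the orbital velocity field numerically recovers the classical Stokes drift U_s = a^2 ω k e^{2kz}. The wave parameters and release depth below are illustrative assumptions.

```python
# Passive parcel beneath a linear deep-water wave: integrating the orbital
# velocity field gives open loops whose mean motion is the Stokes drift.
# Midpoint (RK2) time stepping; illustrative parameters.
import numpy as np

g = 9.81
a, k = 0.5, 2.0 * np.pi / 50.0          # 0.5 m amplitude, 50 m wavelength (assumed)
omega = np.sqrt(g * k)                  # deep-water dispersion relation

def velocity(x, z, t):
    u = a * omega * np.exp(k * z) * np.cos(k * x - omega * t)
    w = a * omega * np.exp(k * z) * np.sin(k * x - omega * t)
    return u, w

x, z, dt = 0.0, -2.0, 0.05              # release 2 m below the surface (assumed)
n_steps = int(round(20 * (2 * np.pi / omega) / dt))     # about 20 wave periods
for n in range(n_steps):
    t = n * dt
    u1, w1 = velocity(x, z, t)
    u2, w2 = velocity(x + 0.5 * dt * u1, z + 0.5 * dt * w1, t + 0.5 * dt)
    x, z = x + dt * u2, z + dt * w2

T = n_steps * dt
print("numerical mean drift (m/s):", x / T)
print("Stokes drift estimate (m/s):", a**2 * omega * k * np.exp(2 * k * (-2.0)))
```

Adding particle inertia, buoyancy, and the Coriolis-Stokes force, as the study does, modifies both the magnitude and the direction of this drift.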
Marshall Space Flight Center's Impact Testing Facility Capabilities
NASA Technical Reports Server (NTRS)
Finchum, Andy; Hubbs, Whitney; Evans, Steve
2008-01-01
Marshall Space Flight Center's (MSFC) Impact Testing Facility (ITF) serves as an important installation for space and missile related materials science research. The ITF was established and began its research in spacecraft debris shielding in the early 1960s, then played a major role in the International Space Station debris shield development. As NASA became more interested in launch debris and in-flight impact concerns, the ITF grew to include research in a variety of impact genres. Collaborative partnerships with the DoD led to a wider range of impact capabilities being relocated to MSFC as a result of the closure of the Particle Impact Facilities in Santa Barbara, California. The Particle Impact Facility had a 30-year history of providing evaluations of aerospace materials and components during flights through rain, ice, and solid particle environments at subsonic through hypersonic velocities. The facility's unique capabilities were deemed a "National Asset" by the DoD. The ITF now has capabilities including environmental, ballistic, and hypervelocity impact testing utilizing an array of air, powder, and two-stage light gas guns to accommodate a variety of projectile and target types and sizes. Numerous upgrades including new instrumentation, triggering circuitry, high speed photography, and optimized sabot designs have been implemented. Other recent research has included rain drop demise characterization tests to obtain data for inclusion in on-going model development. The current and proposed ITF capabilities range from rain to micrometeoroids, allowing the widest test parameter range possible for materials investigations in support of space, atmospheric, and ground environments. These test capabilities, including hydrometeor, single/multi-particle, ballistic gas guns, exploding wire gun, and light gas guns, combined with Smooth Particle Hydrodynamics Code (SPHC) simulations, represent the widest range of impact test capabilities in the country.
Testing chameleon theories with light propagating through a magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brax, Philippe; Bruck, Carsten van de; Davis, Anne-Christine
2007-10-15
It was recently argued that the observed PVLAS anomaly can be explained by chameleon field theories in which large deviations from Newton's law can be avoided. Here we present the predictions for the dichroism and the birefringence induced in the vacuum by a magnetic field in these models. We show that chameleon particles behave very differently from standard axionlike particles (ALPs). We find that, unlike ALPs, the chameleon particles are confined within the experimental setup. As a consequence, the birefringence is always bigger than the dichroism in PVLAS-type experiments.
Searching for new physics with three-particle correlations in pp collisions at the LHC
NASA Astrophysics Data System (ADS)
Sanchis-Lozano, Miguel-Angel; Sarkisyan-Grinbaum, Edward K.
2018-06-01
New phenomena involving pseudorapidity and azimuthal correlations among final-state particles in pp collisions at the LHC can hint at the existence of hidden sectors beyond the Standard Model. In this paper we rely on a correlated-cluster picture of multiparticle production, which was shown to account for the ridge effect, to assess the effect of a hidden sector on three-particle correlations, concluding that there is a potential signature of new physics that can be directly tested by experiments using well-known techniques.
Space Particle Hazard Measurement and Modeling
2016-09-01
The program aims to understand the interactions of the physical processes driving, and then to specify and ultimately predict, the state of the energetic particle populations of the space environment.
Dos Santos, Claudio T; Barbosa, Cassio; Monteiro, Maurício J; Abud, Ibrahim C; Caminha, Ieda M V; Roesler, Carlos R M
2016-09-01
Modular hip prostheses offer flexibility to match anatomical variations and to optimize the mechanical and tribological properties of each part by using different materials. However, micromotions associated with the modular components can lead to fretting corrosion and, consequently, to the release of debris, which can cause adverse local tissue reactions in the human body. In the present study, the surface damage and residues released during in vitro fretting corrosion tests were characterized by stereomicroscopy, SEM, and EDS. Two models of modular hip prosthesis were studied: Model SS/Ti Cementless, whose stem was made of ASTM F136 Ti-6Al-4V alloy and whose metallic head was made of ASTM F138 austenitic stainless steel, and Model SS/SS Cemented, with both components made of ASTM F138 stainless steel. The fretting corrosion tests were evaluated according to the criteria of the ASTM F1875 standard. Micromotions during the test caused mechanical wear and material loss at the head-taper interface, resulting in fretting corrosion. Model SS/SS showed a higher grade of corrosion. Different morphologies of debris predominated in each model studied: small and agglomerated particles were observed in Model SS/Ti and irregular particles in Model SS/SS. After 10 million cycles, Model SS/Ti was more resistant to fretting corrosion than Model SS/SS. Copyright © 2016 Elsevier Ltd. All rights reserved.
Onion-shell model of cosmic ray acceleration in supernova remnants
NASA Technical Reports Server (NTRS)
Bogdan, T. J.; Volk, H. J.
1983-01-01
A method is devised to approximate the spatially averaged momentum distribution function for the accelerated particles at the end of the active lifetime of a supernova remnant. The analysis is confined to the test particle approximation and adiabatic losses are oversimplified, but unsteady shock motion, evolving shock strength, and non-uniform gas flow effects on the accelerated particle spectrum are included. Monoenergetic protons are injected at the shock front. It is found that the dominant effect on the resultant accelerated particle spectrum is a changing spectral index with shock strength. High energy particles are produced in early phases, and the resultant distribution function is a slowly varying power law over several orders of magnitude, independent of the specific details of the supernova remnant.
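The "changing spectral index with shock strength" follows directly from test-particle diffusive shock acceleration, where the index depends only on the shock compression ratio; the standard relation is recalled below for reference.

```latex
% Test-particle DSA momentum spectrum at a shock of compression ratio r:
\[
  f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1},
\]
% so a strong shock (r = 4) yields q = 4, while a weakening remnant shock
% (r -> 1) produces an increasingly steep accelerated-particle spectrum.
```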
Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy
NASA Astrophysics Data System (ADS)
Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido
2015-02-01
The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, mediated over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques in particle treatments. Differently from current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on density and WEL variations required for particle beam range adaptation.
Numerically Modeling the Erosion of Lunar Soil by Rocket Exhaust Plumes
NASA Technical Reports Server (NTRS)
2008-01-01
In preparation for the Apollo program, Leonard Roberts of the NASA Langley Research Center developed a remarkable analytical theory that predicts the blowing of lunar soil and dust beneath a rocket exhaust plume. Roberts assumed that the erosion rate was determined by the excess shear stress in the gas (the amount of shear stress greater than what causes grains to roll). The acceleration of particles to their final velocity in the gas consumes a portion of the shear stress. The erosion rate continues to increase until the excess shear stress is exactly consumed, thus determining the erosion rate. Roberts calculated the largest and smallest particles that could be eroded based on forces at the particle scale, but the erosion rate equation assumed that only one particle size existed in the soil. He assumed that particle ejection angles were determined entirely by the shape of the terrain, which acts like a ballistic ramp, with the particle aerodynamics being negligible. The predicted erosion rate and the upper limit of particle size appeared to be within an order of magnitude of small-scale terrestrial experiments but could not be tested more quantitatively at the time. The lower limit of particle size and the predictions of ejection angle were not tested. We observed in the Apollo landing videos that the ejection angles of particles streaming out from individual craters were time-varying and correlated with the Lunar Module thrust, thus implying that particle aerodynamics dominate. We modified Roberts' theory in two ways. First, we used ad hoc the ejection angles measured in the Apollo landing videos, in lieu of developing a more sophisticated method. Second, we integrated Roberts' equations over the lunar-particle size distribution and obtained a compact expression that could be implemented in a numerical code. We also added a material damage model that predicts the number and size of divots which the impinging particles will cause in hardware surrounding the landing rocket. Then, we performed a long-range ballistics analysis for the ejected particulates.
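In the spirit of the size-distribution integration described above, the sketch below evaluates what fraction of an assumed lognormal soil size distribution falls inside an erodible size window set by a Roberts-type force balance. Both the window and the distribution parameters are assumptions for illustration, not values from the study.

```python
# Fraction of an assumed lognormal lunar-soil size distribution lying inside an
# erodible window [d_min, d_max] from a Roberts-type force balance (illustrative).
from scipy import stats

median_d, sigma_ln = 70e-6, 0.9            # lognormal median diameter (m) and log-std, assumed
size_dist = stats.lognorm(s=sigma_ln, scale=median_d)

d_min, d_max = 1e-6, 500e-6                # assumed erodible window (m)
frac = size_dist.cdf(d_max) - size_dist.cdf(d_min)
print(f"fraction of the distribution in the erodible window: {frac:.3f}")
```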
Application and Analysis of Measurement Model for Calibrating Spatial Shear Surface in Triaxial Test
NASA Astrophysics Data System (ADS)
Zhang, Zhihua; Qiu, Hongsheng; Zhang, Xiedong; Zhang, Hang
2017-12-01
The discrete element method has great advantages in simulating the contacts, fractures, and large displacements and deformations between particles. In order to analyze the spatial distribution of the shear surface in the three-dimensional triaxial test, a measurement model is inserted into the numerical triaxial model, which is generated by a weighted average assembling method. Because the internal shear surface cannot be observed in the laboratory, judging its trend only from the superficial cracks of the sheared sample is largely insufficient; therefore, the measurement model is introduced. The trend of the internal shear zone is analyzed according to the variations of porosity, coordination number, and volumetric strain in each layer. In a case study at a confining stress of 0.8 MPa, the spatial shear surface is calibrated against the rotated-particle distribution and the theoretical value, and is characterized by increased porosity, decreased coordination number, and increased volumetric strain, which shows that the measurement model used in the three-dimensional model is applicable.
Newman, Roger H.; Hill, Stefan J.; Harris, Philip J.
2013-01-01
A synchrotron wide-angle x-ray scattering study of mung bean (Vigna radiata) primary cell walls was combined with published solid-state nuclear magnetic resonance data to test models for the packing of (1→4)-β-glucan chains in cellulose microfibrils. Computer-simulated peak shapes, calculated for 36-chain microfibrils with perfect order or uncorrelated disorder, were sharper than those in the experimental diffractogram. Introducing correlated disorder into the models broadened the simulated peaks, but only when the disorder was increased to unrealistic magnitudes. Computer-simulated diffractograms, calculated for 24- and 18-chain models, showed good fits to the experimental data. Particularly good fits to both x-ray and nuclear magnetic resonance data were obtained for collections of 18-chain models with mixed cross-sectional shapes and occasional twinning. Synthesis of 18-chain microfibrils is consistent with a model for cellulose-synthesizing complexes in which three cellulose synthase polypeptides form a particle and six particles form a rosette. PMID:24154621
NASA Astrophysics Data System (ADS)
Ohno, Kazumasa; Okuzumi, Satoshi
2017-02-01
A number of transiting exoplanets have featureless transmission spectra that might suggest the presence of clouds at high altitudes. A realistic cloud model is necessary to understand the atmospheric conditions under which such high-altitude clouds can form. In this study, we present a new cloud model that takes into account the microphysics of both condensation and coalescence. Our model provides the vertical profiles of the size and density of cloud and rain particles in an updraft for a given set of physical parameters, including the updraft velocity and the number density of cloud condensation nuclei (CCNs). We test our model by comparing with observations of trade-wind cumuli on Earth and ammonia ice clouds in Jupiter. For trade-wind cumuli, the model including both condensation and coalescence gives predictions that are consistent with observations, while the model including only condensation overestimates the mass density of cloud droplets by up to an order of magnitude. For Jovian ammonia clouds, the condensation-coalescence model simultaneously reproduces the effective particle radius, cloud optical thickness, and cloud geometric thickness inferred from Voyager observations if the updraft velocity and CCN number density are taken to be consistent with the results of moist convection simulations and Galileo probe measurements, respectively. These results suggest that the coalescence of condensate particles is important not only in terrestrial water clouds but also in Jovian ice clouds. Our model will be useful to understand how the dynamics, compositions, and nucleation processes in exoplanetary atmospheres affect the vertical extent and optical thickness of exoplanetary clouds via cloud microphysics.
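The condensation part of such a microphysics model reduces, for a single particle, to diffusional growth of the form r dr/dt = G S, which the coalescence terms then couple across the size distribution. The sketch below evaluates only this condensation-only limit, with an assumed order-of-magnitude growth parameter and a constant supersaturation.

```python
# Condensation-only droplet growth r*dr/dt = G*S with constant G and S,
# the simplest limit of a condensation-coalescence cloud model.
# G and S are assumed, order-of-magnitude values.
import numpy as np

G = 1.0e-10          # m^2/s, effective diffusional growth parameter (assumed)
S = 0.005            # 0.5 % supersaturation, assumed constant

def radius(t, r0):
    """Analytic solution of r dr/dt = G*S: r(t) = sqrt(r0^2 + 2*G*S*t)."""
    return np.sqrt(r0**2 + 2.0 * G * S * t)

for t in (10.0, 60.0, 600.0):                 # seconds since activation
    print(t, radius(t, r0=1.0e-6))            # growth of an initially 1 micron droplet
```

Because condensational growth slows as 1/r, coalescence is what carries particles to precipitation sizes, which is the regime the model above is designed to capture.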
NASA Astrophysics Data System (ADS)
Broll, J. M.; Fuselier, S. A.; Trattner, K. J.; Steven, P. M.; Burch, J. L.; Giles, B. L.
2017-12-01
Magnetic reconnection at Earth's dayside magnetopause is an essential process in magnetospheric physics. Under southward IMF conditions, reconnection occurs along a thin ribbon across the dayside magnetopause. The location of this ribbon has been studied extensively in terms of global optimization of quantities like reconnecting field energy or magnetic shear, but with expected errors of 1-2 Earth radii these global models give limited context for cases where an observation is near the reconnection line. Building on previous results, which established the cutoff contour method for locating reconnection using in-situ velocity measurements, we examine the effects of MHD-scale waves on reconnection exhaust distributions. We use a test-particle exhaust distribution propagated through global magnetohydrodynamic model fields and compare it with Magnetospheric Multiscale observations of reconnection exhausts.
Study of EPDM/PP polymeric blends: mechanical behavior and effects of compatibilization
NASA Astrophysics Data System (ADS)
Bouchart, Vanessa; Bhatnagar, N.; Brieu, Mathias; Ghosh, A. K.; Kondo, Djimedo
2008-09-01
A blend of Ethylene Propylene Diene Monomer (EPDM) rubber reinforced by polypropylene (PP) particles has been processed and its hyperelastic behavior has been characterized under cyclic uniaxial tensile tests. The experimental results show a significant effect of the fraction of polypropylene particles (10%, 25% and 30% by weight). Moreover, from another series of tests conducted on materials containing compatibilizers at different mass concentrations, it is observed that the introduction of a compatibilizer increases the rigidity of the blends and notably affects their macroscopic behavior. These observations are interpreted as a consequence of the modification, at the microlevel, of the adherence between the particle and matrix phases. The use of a nonlinear micromechanical model allows us to confirm this interpretation. To cite this article: V. Bouchart et al., C. R. Mecanique 336 (2008).
Binary Colloidal Alloy Test-3 and 4: Critical Point
NASA Technical Reports Server (NTRS)
Weitz, David A.; Lu, Peter J.
2007-01-01
Binary Colloidal Alloy Test - 3 and 4: Critical Point (BCAT-3-4-CP) will determine phase separation rates and add needed points to the phase diagram of a model critical fluid system. Crewmembers photograph samples of polymer and colloidal particles (tiny nanoscale spheres suspended in liquid) that model liquid/gas phase changes. Results will help scientists develop fundamental physics concepts previously cloaked by the effects of gravity.
Binodal Colloidal Aggregation Test - 4: Polydispersion
NASA Technical Reports Server (NTRS)
Chaikin, Paul M.
2008-01-01
Binodal Colloidal Aggregation Test - 4: Polydispersion (BCAT-4-Poly) will use model hard-spheres to explore seeded colloidal crystal nucleation and the effects of polydispersity, providing insight into how nature brings order out of disorder. Crewmembers photograph samples of polymer and colloidal particles (tiny nanoscale spheres suspended in liquid) that model liquid/gas phase changes. Results will help scientists develop fundamental physics concepts previously cloaked by the effects of gravity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleeman, M.J.; Schauer, J.J.; Cass, G.R.
A dilution source sampling system is augmented to measure the size-distributed chemical composition of fine particle emissions from air pollution sources. Measurements are made using a laser optical particle counter (OPC), a differential mobility analyzer/condensation nucleus counter (DMA/CNC) combination, and a pair of microorifice uniform deposit impactors (MOUDIs). The sources tested with this system include wood smoke (pine, oak, eucalyptus), meat charbroiling, and cigarettes. The particle mass distributions from all wood smoke sources have a single mode that peaks at approximately 0.1-0.2 µm particle diameter. The smoke from meat charbroiling shows a major peak in the particle mass distribution at 0.1-0.2 µm particle diameter, with some material present at larger particle sizes. Particle mass distributions from cigarettes peak between 0.3 and 0.4 µm particle diameter. Chemical composition analysis reveals that particles emitted from the sources tested here are largely composed of organic compounds. Noticeable concentrations of elemental carbon are found in the particles emitted from wood burning. The size distributions of the trace species emissions from these sources also are presented, including data for Na, K, Ti, Fe, Br, Ru, Cl, Al, Zn, Ba, Sr, V, Mn, Sb, La, Ce, as well as sulfate, nitrate, and ammonium ion when present in statistically significant amounts. These data are intended for use with air quality models that seek to predict the size distribution of the chemical composition of atmospheric fine particles.
High Pressure Quick Disconnect Particle Impact Tests
NASA Technical Reports Server (NTRS)
Rosales, Keisa R.; Stoltzfus, Joel M.
2009-01-01
NASA Johnson Space Center White Sands Test Facility (WSTF) performed particle impact testing to determine whether there is a particle impact ignition hazard in the quick disconnects (QDs) in the Environmental Control and Life Support System (ECLSS) on the International Space Station (ISS). Testing included standard supersonic and subsonic particle impact tests on 15-5 PH stainless steel, as well as tests performed on a QD simulator. This paper summarizes the particle impact tests completed at WSTF. Although there was an ignition in Test Series 4, it was determined that the ignition was caused by the presence of a machining imperfection. The sum of all the test results indicates that there is no particle impact ignition hazard in the ISS ECLSS QDs. KEYWORDS: quick disconnect, high pressure, particle impact testing, stainless steel
NASA Technical Reports Server (NTRS)
Rule, W. K.; Hayashida, K. B.
1992-01-01
The development of a computer program to predict the degradation of the insulating capabilities of the multilayer insulation (MLI) blanket of Space Station Freedom due to a hypervelocity impact with a space debris particle is described. A finite difference scheme is used for the calculations. The computer program was written in Microsoft BASIC. Also described is a test program that was undertaken to validate the numerical model. Twelve MLI specimens were impacted at hypervelocities with simulated debris particles using a light gas gun at Marshall Space Flight Center. The impact-damaged MLI specimens were then tested for insulating capability in the space environment of the Sunspot thermal vacuum chamber at MSFC. Two undamaged MLI specimens were also tested for comparison with the test results of the damaged specimens. The numerical model was found to adequately predict behavior of the MLI specimens in the Sunspot chamber. A parameter, called diameter ratio, was developed to relate the nominal MLI impact damage to the apparent (for thermal analysis purposes) impact damage based on the hypervelocity impact conditions of a specimen.
Testing naturalness at 100 TeV
NASA Astrophysics Data System (ADS)
Chen, Chuan-Ren; Hajer, Jan; Liu, Tao; Low, Ian; Zhang, Hao
2017-09-01
Solutions to the electroweak hierarchy problem typically introduce a new symmetry to stabilize the quadratic ultraviolet sensitivity in the self-energy of the Higgs boson. The new symmetry is either broken softly or collectively, as for example in supersymmetric and little Higgs theories. At low energies such theories contain naturalness partners of the Standard Model fields which are responsible for canceling the quadratic divergence in the squared Higgs mass. After the discovery of any partner-like particles, we propose to test the aforementioned cancellation by measuring relevant Higgs couplings. Using the fermionic top partners in little Higgs theories as an illustration, we construct a simplified model for naturalness and initiate a study on testing naturalness. After electroweak symmetry breaking, naturalness in the top sector requires a_T = -λ_t^2 at leading order, where λ_t and a_T are the Higgs couplings to a pair of top quarks and top partners, respectively. Using a multivariate method of Boosted Decision Trees to tag boosted particles in the Standard Model, we show that, with a luminosity of 30 ab^-1 at a 100 TeV pp collider, naturalness could be tested with a precision of 10% for a top partner mass up to 2.5 TeV.
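Schematically, the cancellation being tested can be written as below; this is only the leading-order, cutoff-regularized form implied by the abstract, with loop factors suppressed.

```latex
% Quadratically divergent top-loop contribution to the Higgs mass versus the
% top-partner loop (schematic, leading order, loop factors suppressed):
\[
  \delta m_h^2 \;\propto\; \left(\lambda_t^2 + a_T\right)\Lambda^2 + \dots
  \quad\Longrightarrow\quad a_T = -\lambda_t^2 .
\]
% Measuring \lambda_t and a_T independently therefore probes naturalness directly.
```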
NASA Technical Reports Server (NTRS)
Chakrabarti, S.; Martin, J. J.; Pearson, J. B.; Lewis, R. A.
2003-01-01
The NASA MSFC Propulsion Research Center (PRC) is conducting a research activity examining the storage of low energy antiprotons. The High Performance Antiproton Trap (HiPAT) is an electromagnetic system (Penning-Malmberg design) consisting of a 4 Tesla superconductor, a high voltage confinement electrode system, and an ultra high vacuum test section, designed with the ultimate goal of maintaining charged particles with a half-life of 18 days. Currently, this system is being experimentally evaluated using normal-matter ions, which are cheap to produce, relatively easy to handle, and provide a good indication of overall trap behavior, with the exception of assessing annihilation losses. Computational particle-in-cell plasma modeling using the XOOPIC code is supplementing the experiments. Differing electrode voltage configurations are employed to contain charged particles, typically using flat, modified flat, and harmonic potential wells. Ion cloud oscillation frequencies are obtained experimentally by amplification of signals induced on the electrodes by the particle motions. XOOPIC simulations show that for given electrode voltage configurations, the calculated charged particle oscillation frequencies are close to the experimental measurements. As a two-dimensional axisymmetric code, XOOPIC cannot model azimuthal plasma variations, such as those induced by radio-frequency (RF) modulation of the central quadrupole electrode in experiments designed to enhance ion cloud containment. However, XOOPIC can model analytically varying electric potential boundary conditions and particle velocity initial conditions. Application of these conditions produces ion cloud axial and radial oscillation frequency modes of interest in achieving the goal of optimizing HiPAT for reliable containment of antiprotons.
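For orientation, the characteristic single-particle frequencies of an ideal Penning trap follow from the magnetic field strength and the harmonic well. The well depth and trap dimension in the sketch below are illustrative assumptions, not the HiPAT electrode settings, and the geometry factor in the axial frequency is left in its simplest form.

```python
# Ideal Penning-trap single-particle frequencies for a proton-mass ion in a 4 T
# field and an assumed harmonic well (illustrative values only).
import numpy as np

Q, M = 1.602176634e-19, 1.67262192e-27     # proton charge magnitude and mass
B = 4.0                                    # axial magnetic field (T)
V0, d = 50.0, 0.02                         # well depth (V) and trap dimension (m), assumed

omega_c = Q * B / M                                      # free cyclotron frequency
omega_z = np.sqrt(Q * V0 / (M * d**2))                   # axial frequency, simple harmonic well
omega_p = 0.5 * (omega_c + np.sqrt(omega_c**2 - 2.0 * omega_z**2))  # modified cyclotron
omega_m = 0.5 * (omega_c - np.sqrt(omega_c**2 - 2.0 * omega_z**2))  # magnetron

for name, w in [("cyclotron", omega_c), ("axial", omega_z),
                ("modified cyclotron", omega_p), ("magnetron", omega_m)]:
    print(f"{name:>18s}: {w / (2.0 * np.pi) / 1e6:10.4f} MHz")
```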
Modeling the superstorm in November 2003
NASA Astrophysics Data System (ADS)
Fok, Mei-Ching; Moore, Thomas E.; Slinker, Steve P.; Fedder, Joel A.; Delcourt, Dominique C.; Nosé, Masahito; Chen, Sheng-Hsien
2011-01-01
The superstorm on 20-21 November 2003 was the largest geomagnetic storm in solar cycle 23 as measured by Dst, which attained a minimum value of -422 nT. We have simulated this storm to understand how particles originating from the solar wind and ionosphere gain access to the magnetosphere and how the subsequent transport and energization processes contribute to the buildup of the ring current. The global electromagnetic configuration and the solar wind H+ distribution are specified by the Lyon-Fedder-Mobarry (LFM) magnetohydrodynamics model. The outflow of H+ and O+ ions from the ionosphere is also considered. Their trajectories in the magnetosphere are followed by a test-particle code. The particle distributions at the inner plasma sheet established by the LFM model and test-particle calculations are then used as boundary conditions for a ring current model. Our simulations reproduce the rapid decrease of Dst during the storm main phase and the fast initial phase of recovery. Shielding in the inner magnetosphere is established at early main phase. This shielding field lasts several hours and then breaks down at late main phase. At the peak of the storm, strong penetration of ions earthward to an L shell of 1.5 is revealed in the simulation. It is surprising that O+ is a significant but not the dominant species in the ring current in our calculation for this major storm. It is very likely that substorm effects are not well represented in the models and O+ energization is underestimated. A ring current simulation with the O+ energy density at the boundary set to be comparable to Geotail observations produces excellent agreement with the observed SYM-H. As expected in superstorms, ring current O+ is the dominant species over H+ during the main to mid-recovery phase of the storm.
INITIAL ANALYSIS OF TRANSIENT POWER TIME LAG DUE TO HETEROGENEITY WITHIN THE TREAT FUEL MATRIX.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D.M. Wachs; A.X. Zabriskie, W.R. Marcum
2014-06-01
The topic of nuclear safety encompasses a broad spectrum of focal areas within the nuclear industry; one specific aspect centers on the performance and integrity of nuclear fuel during a reactivity insertion accident (RIA). This specific accident has proven to be fundamentally difficult to characterize theoretically due to the numerous empirically driven characteristics that quantify the fuel and reactor performance. The Transient Reactor Test (TREAT) facility was designed and operated to better understand fuel behavior under extreme (i.e., accident) conditions; it was shut down in 1994. Recently, efforts have been underway to restart the TREAT facility to continue testing of advanced accident-tolerant fuels (i.e., recently developed fuel concepts). To aid in the restart effort, new simulation tools are being used to investigate the behavior of nuclear fuels during the facility's transient events. This study focuses specifically on characterizing the modeled effects of fuel particles within the fuel matrix of the TREAT. The objectives of this study were to (1) identify the impact of modeled heterogeneity within the fuel matrix during a transient event, and (2) demonstrate acceptable modeling processes for the purpose of TREAT safety analyses, specific to fuel matrix and particle size. Hypothetically, a fuel that is dominantly heterogeneous will demonstrate a clearly different temporal heating response from that of a modeled homogeneous fuel. This time difference is a result of the differing thermal diffusivities of the fuel particle and fuel matrix. Using MOOSE/BISON to simulate the temperature time-lag effect of fuel particle diameter during a transient event, a comparison of the average graphite moderator temperature surrounding a spherical particle of fuel was made for both types of fuel simulations. This comparison showed that at a given time and for a specific fuel particle diameter, the heterogeneous (fuel particle) simulation and the homogeneous simulation were related by a multiplier on the average moderator temperature. As time increases, the multiplier approaches the factor found in a previous analytical study from the literature. The implementation of this multiplier and the method of analysis may be employed to remove assumptions and increase fidelity in future research on the effect of fuel particles during transient events.
Project Physics Tests 6, The Nucleus.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Test items relating to Project Physics Unit 6 are presented in this booklet. Included are 70 multiple-choice and 24 problem-and-essay questions. Nuclear physics fundamentals are examined with respect to the shell model, isotopes, neutrons, protons, nuclides, charge-to-mass ratios, alpha particles, Becquerel's discovery, gamma rays, cyclotrons,…
Currently, little justification is provided for nanomaterial testing concentrations in in vitro assays. The in vitro concentrations typically used may be higher than those experienced in exposed humans. Selection of concentration levels for hazard evaluation based on real-world ...
Wear model simulating clinical abrasion on composite filling materials.
Johnsen, Gaute Floer; Taxt-Lamolle, Sébastien F; Haugen, Håvard J
2011-01-01
The aim of this study was to establish a wear model for testing composite filling materials with abrasion properties closer to a clinical situation. In addition, the model was used to evaluate the effect of filler volume and particle size on surface roughness and wear resistance. Each incisor tooth was prepared with nine identical standardized cavities with respect to depth, diameter, and angle. Generic composites with three different filler volumes and three different particle sizes, bound by the same resin, were randomly assigned to the respective cavities. A multidirectional wet-grinder with molar cusps as antagonists wore the surface of the incisors containing the composite fillings in a bath of human saliva at a constant temperature of 37°C. The present study suggests that the most wear-resistant filling materials should consist of medium filler content (75%) and that particle size is not as critical as earlier reported.
NASA Astrophysics Data System (ADS)
Wilkins, C.; Bingley, L.; Angelopoulos, V.; Caron, R.; Cruce, P. R.; Chung, M.; Rowe, K.; Runov, A.; Liu, J.; Tsai, E.
2017-12-01
UCLA's Electron Losses and Fields Investigation (ELFIN) is a 3U+ CubeSat mission designed to study relativistic particle precipitation in Earth's polar regions from Low Earth Orbit. Upon its 2018 launch, ELFIN will aim to address an important open question in space physics: are electromagnetic ion-cyclotron (EMIC) waves the dominant source of pitch-angle scattering of high-energy radiation belt charged particles into Earth's atmosphere during storms and substorms? Previous studies have indicated these scattering events occur frequently during storms and substorms, and ELFIN will be the first mission to study this process in situ. Paramount to ELFIN's success is its instrument suite consisting of an Energetic Particle Detector (EPD) and a Fluxgate Magnetometer (FGM). The EPD consists of two collimated solid-state detector stacks which will measure the incident flux of energetic electrons from 50 keV to 4 MeV and ions from 50 keV to 300 keV. The FGM is a 3-axis magnetic field sensor which will capture the local magnetic field and its variations at frequencies up to 5 Hz. The ELFIN spacecraft spins perpendicular to the geomagnetic field to provide 16 pitch-angle particle data sectors per revolution. Together these factors provide the capability to address the nature of radiation belt particle precipitation by pitch-angle scattering during storms and substorms. ELFIN's instrument development has progressed into the late Engineering Model (EM) phase and will soon enter Flight Model (FM) development. The instrument suite is currently being tested and calibrated at UCLA using a variety of methods, including the use of radioactive sources and applied magnetics to simulate orbit conditions during spin sectoring. We present the methods and test results from instrument calibration and performance validation.
NASA Astrophysics Data System (ADS)
Ovaysi, S.; Piri, M.
2009-12-01
We present a three-dimensional fully dynamic parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique. We modify this technique in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account viscous pressure drop in addition to the gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in the neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors. An excellent linear scalability is obtained for sample B, which has a more uniform distribution of solid particles, leading to superior load balancing. The model is then used to simulate fluid flow directly in REV-size three-dimensional x-ray images of a naturally occurring sandstone. We analyze the quality and consistency of the predicted flow behavior and calculate absolute permeability, which compares well with the network modeling and lattice Boltzmann permeabilities available in the literature for the same sandstone. We show that the model conserves mass very well and is stable computationally even at very narrow fluid conduits. The transient- and the steady-state fluid flow patterns are presented as well as the steady-state flow rates used to compute absolute permeability. Furthermore, we discuss the vital role of our adaptive particle resolution scheme in preserving the original pore connectivity of the samples and their narrow channels through splitting and merging of fluid particles.
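The Moving Particle Semi-implicit technique mentioned above weights neighboring particles with a kernel and uses the resulting particle number density in its pressure solve. A minimal sketch of the standard MPS weight function and number density follows; it is not the authors' modified, parallel implementation, and the lattice and kernel radius are illustrative.

```python
import numpy as np

def mps_weight(r, r_e):
    """Standard MPS kernel: w = r_e/r - 1 for 0 < r < r_e, else 0."""
    if 0.0 < r < r_e:
        return r_e / r - 1.0
    return 0.0

def number_density(positions, i, r_e):
    """Particle number density n_i = sum over j != i of w(|x_j - x_i|)."""
    n = 0.0
    for j, xj in enumerate(positions):
        if j == i:
            continue
        n += mps_weight(np.linalg.norm(xj - positions[i]), r_e)
    return n

# Example: fluid particles on a regular lattice with spacing l0 (illustrative)
l0 = 1.0e-3
pts = np.array([[ix * l0, iy * l0, 0.0] for ix in range(5) for iy in range(5)])
print(number_density(pts, i=12, r_e=2.1 * l0))   # density at the central particle
```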
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
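For contrast with the network-free approach, the network-based stochastic simulation referred to above (Gillespie's direct method over an enumerated reaction network) can be sketched as follows for a toy birth-death network; the rate constants are illustrative and unrelated to any BioNetGen model.

```python
import math
import random

def gillespie(x, rates, stoich, t_end):
    """Direct-method SSA.  x: species counts, rates: propensity functions,
    stoich: state-change vectors, t_end: stop time."""
    t, trajectory = 0.0, [(0.0, list(x))]
    while t < t_end:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0.0:
            break
        t += -math.log(random.random()) / a0       # time to next reaction
        # choose which reaction fires
        threshold, cumulative, mu = random.random() * a0, 0.0, 0
        for mu, ai in enumerate(a):
            cumulative += ai
            if cumulative >= threshold:
                break
        x = [xi + si for xi, si in zip(x, stoich[mu])]
        trajectory.append((t, list(x)))
    return trajectory

# Toy birth-death network:  0 -> A (k1 = 5.0),  A -> 0 (k2 = 0.1); rates are placeholders
rates = [lambda x: 5.0, lambda x: 0.1 * x[0]]
stoich = [[+1], [-1]]
traj = gillespie([0], rates, stoich, t_end=100.0)
print(traj[-1])
```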
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error introduced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free-surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
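A refinement step of the kind described replaces a parent SPH particle with daughter particles on a square pattern, rescaling mass and smoothing length. The sketch below uses four daughters and placeholder separation and smoothing ratios; the optimal values are what the paper actually derives.

```python
import numpy as np

def refine_particle(pos, mass, h, eps=0.5, alpha=0.65):
    """Split one SPH particle into 4 daughters on a square pattern.

    pos   : 2D position of the parent particle
    mass  : parent mass (conserved: each daughter gets mass/4)
    h     : parent smoothing length
    eps   : daughter separation as a fraction of h       (placeholder value)
    alpha : daughter smoothing length as a fraction of h (placeholder value)
    """
    offsets = eps * h * np.array([[+1, +1], [+1, -1], [-1, +1], [-1, -1]]) / np.sqrt(2.0)
    daughters = [{"pos": np.asarray(pos, dtype=float) + d,
                  "mass": mass / 4.0,
                  "h": alpha * h} for d in offsets]
    return daughters

for p in refine_particle(pos=[0.0, 0.0], mass=1.0, h=0.1):
    print(p)
```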
Advanced ice protection systems test in the NASA Lewis icing research tunnel
NASA Technical Reports Server (NTRS)
Bond, Thomas H.; Shin, Jaiwon; Mesander, Geert A.
1991-01-01
Tests of eight different deicing systems based on variations of three different technologies were conducted in the NASA Lewis Research Center Icing Research Tunnel (IRT) in June and July 1990. The systems used pneumatic, eddy current repulsive, and electro-expulsive means to shed ice. The tests were conducted on a 1.83 m span, 0.53 m chord NACA 0012 airfoil operated at a 4 degree angle of attack. The models were tested at two temperatures: a glaze condition at minus 3.9 C and a rime condition at minus 17.2 C. The systems were tested through a range of icing spray times and cycling rates. Characterization of the deicers was accomplished by monitoring power consumption, ice shed particle size, and residual ice. High speed video motion analysis was performed to quantify ice particle size.
McMullin, Brian T; Leung, Ming-Ying; Shanbhag, Arun S; McNulty, Donald; Mabrey, Jay D; Agrawal, C Mauli
2006-02-01
A total of 750 images of individual ultra-high molecular weight polyethylene (UHMWPE) particles isolated from periprosthetic failed hip, knee, and shoulder arthroplasties were extracted from archival scanning electron micrographs. Particle size and morphology were subsequently analyzed using computerized image analysis software and the five descriptors found in ASTM F1877-98, a standard for quantitative description of wear debris. An online survey application was developed to display particle images and allow ten respondents to classify particle morphologies according to commonly used terminology as fibers, flakes, or granules. Particles were categorized based on a simple majority of responses. All descriptors were evaluated using a one-way ANOVA and Tukey-Kramer test for all-pairs comparison among each class of particles. A logistic regression model using half of the particles included in the survey was then used to develop a mathematical scheme to predict whether a given particle should be classified as a fiber, flake, or granule based on its quantitative measurements. The validity of the model was then assessed using the other half of the survey particles and compared with human responses. Comparison of the quantitative measurements of isolated particles showed that the morphologies of each particle type classified by respondents were statistically different from one another (p<0.05). The average agreement between the mathematical prediction and the human respondents was 83.5% (standard error 0.16%). These data suggest that computerized descriptors can be feasibly correlated with subjective terminology, thus providing a basis for a common vocabulary for particle description which can be translated into quantitative dimensions.
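A multinomial logistic regression over shape descriptors, of the general kind used above to predict fiber/flake/granule labels, can be sketched with scikit-learn as follows; the descriptor matrix and labels are synthetic stand-ins, not the UHMWPE survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: rows are particles, columns are illustrative shape
# descriptors (e.g., aspect ratio, roundness, form factor, elongation,
# equivalent diameter); labels mimic the survey classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # placeholder descriptor matrix
y = rng.integers(0, 3, size=200)                   # 0=fiber, 1=flake, 2=granule

# Half of the particles train the model, the other half validate it,
# mirroring the split described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("agreement with held-out labels:", clf.score(X_test, y_test))
```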
Mass transfer effect of the stalk contraction-relaxation cycle of Vorticella convallaria
NASA Astrophysics Data System (ADS)
Zhou, Jiazhong; Admiraal, David; Ryu, Sangjin
2014-11-01
Vorticella convallaria is a genus of protozoa living in freshwater. Its stalk contracts and coils, pulling the cell body toward the substrate at remarkable speed, and then relaxes to its extended state much more slowly than it contracts. However, the reason for Vorticella's stalk contraction is still unknown. It is presumed that the water flow induced by the stalk contraction-relaxation cycle may augment mass transfer near the substrate. We investigated this hypothesis using an experimental model with particle tracking velocimetry and a computational fluid dynamics model. In both approaches, Vorticella was modeled as a solid sphere translating perpendicular to a solid surface in water. After being validated against the experimental model and verified by a grid convergence index test, the computational model simulated the water flow during the cycle based on the measured time course of stalk length changes of Vorticella. Based on the simulated flow field, we calculated trajectories of particles near the model Vorticella, and then evaluated the mass transfer effect of Vorticella's stalk contraction based on the particles' motion. We acknowledge support from the Laymann Seed Grant of the University of Nebraska-Lincoln.
NASA Astrophysics Data System (ADS)
Ragusa, Jorge Alejandro
Tuberculosis, a highly contagious disease, ranks as the second leading cause of death from an infectious disease, and remains a major global health problem. In 2013, 9 million new cases were diagnosed and 1.5 million people died worldwide from tuberculosis. This dissertation aims at developing a new, ultrafine particle-based efficient antibiotic delivery system for the treatment of tuberculosis. The carrier material used to make the rifampicin (RIF)-loaded particles is a low molecular weight star-shaped polymer produced from glucosamine (molecular core building unit) and L-lactide (GluN-LLA). Stable particles with a very high 50% drug loading capacity were made via electrohydrodynamic atomization. Prolonged release (>14 days) of RIF from these particles is demonstrated. The drug release data fit the Korsmeyer-Peppas equation, which suggests a modified diffusion-controlled RIF release mechanism, and this is also supported by differential scanning calorimetry and drug leaching tests. Cytotoxicity tests on Mycobacterium smegmatis showed that antibiotic-free GluN-LLA and polylactide (PLA, reference material) particles did not show any significant anti-bacterial activity. The minimum inhibitory concentration and minimum bactericidal concentration values obtained for RIF-loaded particles showed 2- to 4-fold improvements in anti-bacterial activity relative to the free drug. Cytotoxicity tests on macrophages indicated an increase in cell death as particle dose increased, but cell death was not significantly affected by material type or particle size. Confocal microscopy was used to track internalization and localization of particles in the macrophages. GluN-LLA particles led to higher uptake than the PLA particles. In addition, after phagocytosis, the GluN-LLA particles stayed in the cytoplasm and showed a favorable long-term drug release effect in killing intracellular bacteria compared to free RIF. The studies presented and discussed in this dissertation suggest that these drug carrier materials are potentially very attractive candidates for the development of high-payload, sustained-release antibiotic/resorbable polymer particle systems for treating bacterial lung infections.
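The Korsmeyer-Peppas equation cited above has the form M_t/M_inf = k t^n, and fitting it to cumulative-release data gives the exponent n used to judge the release mechanism. The sketch below fits made-up release values with scipy; the data and the fitted parameters are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """Fractional drug release M_t / M_inf = k * t**n."""
    return k * t**n

# Illustrative cumulative-release data (time in days, fraction released) - not measured values
t = np.array([0.5, 1, 2, 4, 7, 10, 14], dtype=float)
frac = np.array([0.12, 0.18, 0.26, 0.37, 0.48, 0.57, 0.66])

(k, n), _ = curve_fit(korsmeyer_peppas, t, frac, p0=(0.2, 0.5))
print(f"k = {k:.3f}, n = {n:.3f}")   # n near 0.43-0.5 suggests Fickian diffusion from spheres
```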
Forces on stationary particles in near-bed turbulent flows
NASA Astrophysics Data System (ADS)
Schmeeckle, Mark W.; Nelson, Jonathan M.; Shreve, Ronald L.
2007-06-01
In natural flows, bed sediment particles are entrained and moved by the fluctuating forces, such as lift and drag, exerted by the overlying flow on the particles. To develop a better understanding of these forces and the relation of the forces to the local flow, the downstream and vertical components of force on near-bed fixed particles and of fluid velocity above or in front of them were measured synchronously at turbulence-resolving frequencies (200 or 500 Hz) in a laboratory flume. Measurements were made for a spherical test particle fixed at various heights above a smooth bed, above a smooth bed downstream of a downstream-facing step, and in a gravel bed of similarly sized particles as well as for a cubical test particle and 7 natural particles above a smooth bed. Horizontal force was well correlated with downstream velocity and not correlated with vertical velocity or vertical momentum flux. The standard drag formula worked well to predict the horizontal force, but the required value of the drag coefficient was significantly higher than generally used to model bed load motion. For the spheres, cubes, and natural particles, average drag coefficients were found to be 0.76, 1.36, and 0.91, respectively. For comparison, the drag coefficient for a sphere settling in still water at similar particle Reynolds numbers is only about 0.4. The variability of the horizontal force relative to its mean was strongly increased by the presence of the step and the gravel bed. Peak deviations were about 30% of the mean force for the sphere over the smooth bed, about twice the mean with the step, and 4 times it for the sphere protruding roughly half its diameter above the gravel bed. Vertical force correlated poorly with downstream velocity, vertical velocity, and vertical momentum flux whether measured over or ahead of the test particle. Typical formulas for shear-induced lift based on Bernoulli's principle poorly predict the vertical forces on near-bed particles. The measurements suggest that particle-scale pressure variations associated with turbulence are significant in the particle momentum balance.
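The standard drag formula referred to above, F_D = 0.5 C_D rho A u|u|, can be evaluated with the elevated drag coefficients reported (0.76 for spheres, 1.36 for cubes, 0.91 for natural particles); the particle size and velocity in the sketch are arbitrary examples.

```python
import math

RHO_WATER = 1000.0                                       # kg/m^3
CD = {"sphere": 0.76, "cube": 1.36, "natural": 0.91}     # coefficients reported in the study

def drag_force(u, diameter, shape="sphere"):
    """Horizontal drag on a near-bed particle, F = 0.5 * Cd * rho * A * u * |u|."""
    area = math.pi * (diameter / 2.0) ** 2               # frontal area of an equivalent sphere
    return 0.5 * CD[shape] * RHO_WATER * area * u * abs(u)

# Instantaneous force on an 8 mm particle for a 0.4 m/s near-bed velocity (example values)
print(drag_force(u=0.4, diameter=0.008, shape="sphere"))
```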
Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nagata, K.
2016-08-01
We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large to obtain a value of the molecular diffusion term that is positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
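A mixing step of the general kind described, in which the scalars of the particles inside one mixing volume relax toward their local mean while conserving that mean, can be sketched as below; the relaxation form and the mixing-rate constant are illustrative simplifications, not the exact MVM closure.

```python
import numpy as np

def mix_particles(phi, c_mix, dt, tau):
    """Relax the scalars of the particles in one mixing volume toward their mean.

    phi   : scalar values carried by the particles inside the mixing volume
    c_mix : mixing-rate constant (illustrative value, not the MVM closure constant)
    tau   : local mixing time scale
    """
    mean = phi.mean()
    return phi + c_mix * dt / tau * (mean - phi)   # the mean is conserved exactly

phi = np.array([0.0, 0.2, 0.9, 1.0])               # scalars of 4 mixing particles
for _ in range(50):
    phi = mix_particles(phi, c_mix=2.0, dt=0.01, tau=0.1)
print(phi, phi.var())                               # variance decays, mean preserved
```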
Yamaguchi, Satoshi; Inoue, Sayuri; Sakai, Takahiko; Abe, Tomohiro; Kitagawa, Haruaki; Imazato, Satoshi
2017-05-01
The objective of this study was to assess the effect of silica nano-filler particle diameter in a computer-aided design/manufacturing (CAD/CAM) composite resin (CR) block on multi-scale physical properties in silico. CAD/CAM CR blocks were modeled, consisting of silica nano-filler particles (20, 40, 60, 80, and 100 nm) and matrix (Bis-GMA/TEGDMA), with a filler volume content of 55.161%. Young's moduli and Poisson's ratios for the block at the macro-scale were calculated by homogenization analysis. Macro-scale CAD/CAM CR blocks (3 × 3 × 3 mm) were modeled, and compressive strengths were defined when the fracture loads exceeded 6075 N. MPS values of the nano-scale models were compared by localization analysis. As the filler size decreased, Young's moduli and compressive strength increased, while Poisson's ratios and MPS decreased. All parameters were significantly correlated with the diameters of the filler particles (Pearson's correlation test, r = -0.949, 0.943, -0.951, 0.976, p < 0.05). The in silico multi-scale model established in this study demonstrates that the Young's moduli, Poisson's ratios, and compressive strengths of CAD/CAM CR blocks can be enhanced by loading silica nano-filler particles of smaller diameter. CAD/CAM CR blocks using smaller silica nano-filler particles thus have the potential for increased fracture resistance.
NASA Technical Reports Server (NTRS)
Rodriguez, A.; Alpen, E. L.; Powers-Risius, P.
1992-01-01
This report presents data for survival of mouse intestinal crypt cells, mouse testes weight loss as an indicator of survival of spermatogonial stem cells, and survival of rat 9L spheroid cells after irradiation in the plateau region of unmodified particle beams ranging in mass from 4He to 139La. The LET values range from 1.6 to 953 keV/microns. These studies examine the RBE-LET relationship for two normal tissues and for an in vitro tissue model, multicellular spheroids. When the RBE values are plotted as a function of LET, the resulting curve is characterized by a region in which RBE increases with LET, a peak RBE at an LET value of 100 keV/microns, and a region of decreasing RBE at LETs greater than 100 keV/microns. Inactivation cross sections (sigma) for these three biological systems have been calculated from the exponential terminal slope of the dose-response relationship for each ion. For this determination the dose is expressed as particle fluence and the parameter sigma indicates effect per particle. A plot of sigma versus LET shows that the curve for testes weight loss is shifted to the left, indicating greater radiosensitivity at lower LETs than for crypt cell and spheroid cell survival. The curves for cross section versus LET for all three model systems show similar characteristics with a relatively linear portion below 100 keV/microns and a region of lessened slope in the LET range above 100 keV/microns for testes and spheroids. The data indicate that the effectiveness per particle increases as a function of LET and, to a limited extent, Z, at LET values greater than 100 keV/microns. Previously published results for spread Bragg peaks are also summarized, and they suggest that RBE is dependent on both the LET and the Z of the particle.
Forces on a segregating particle
NASA Astrophysics Data System (ADS)
Lueptow, Richard M.; Shankar, Adithya; Fry, Alexander M.; Ottino, Julio M.; Umbanhowar, Paul B.
2017-11-01
Size segregation in flowing granular materials is not well understood at the particle level. In this study, we perform a series of 3D Discrete Element Method (DEM) simulations to measure the segregation force on a single spherical test particle tethered to a spring in the vertical direction in a shearing bed of particles with gravity acting perpendicular to the shear. The test particle is the same size or larger than the bed particles. At equilibrium, the downward spring force and test particle weight are offset by the upward buoyancy-like force and a size ratio dependent force. We find that the buoyancy-like force depends on the bed particle density and the Voronoi volume occupied by the test particle. By changing the density of the test particle with the particle size ratio such that the buoyancy force matches the test particle weight, we show that the upward size segregation force is a quadratic function of the particle size ratio. Based on this, we report an expression for the net force on a single particle as the sum of a size ratio dependent force, a buoyancy-like force, and the weight of the particle. Supported by NSF Grant CBET-1511450 and the Procter and Gamble Company.
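The force decomposition described above, weight plus a buoyancy-like term plus a size-ratio-dependent segregation term, can be written out as in the sketch below. The quadratic segregation term and its coefficients are placeholders constructed to vanish at a size ratio of one; the fitted expression is what the study reports.

```python
import math

G = 9.81  # m/s^2

def net_vertical_force(d_test, d_bed, rho_test, rho_bed, voronoi_volume, a=1.0, b=0.0):
    """Net vertical force on a single intruder: weight + buoyancy-like term + segregation term.

    The buoyancy-like term scales with the bed particle density and the Voronoi volume
    occupied by the test particle, as described in the abstract.  The segregation term is
    modelled here as a quadratic in the size ratio R = d_test/d_bed with placeholder
    coefficients a and b, chosen so that it vanishes at R = 1.
    """
    R = d_test / d_bed
    v_test = math.pi / 6.0 * d_test**3
    weight = -rho_test * v_test * G
    buoyancy = rho_bed * voronoi_volume * G
    segregation = (a * R**2 + b * R - (a + b)) * rho_bed * v_test * G
    return weight + buoyancy + segregation

# Example call with arbitrary illustrative values (SI units)
print(net_vertical_force(d_test=0.004, d_bed=0.002, rho_test=2500.0,
                         rho_bed=2500.0, voronoi_volume=4.0e-8))
```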
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephen Seong Lee
Fuel flow to individual burners is complicated and difficult to determine on coal-fired boilers, since coal solids are transported in a gas suspension governed by the complex physics of two-phase flow. The objective of the project was to measure suspended coal solids-flows under simulated test conditions. Various extractive methods were performed manually and can give only a snapshot of the fuel distribution. In order to measure particle diameter and velocity, a laser-based phase-Doppler particle analyzer (PDPA) and particle image velocimetry (PIV) were carefully applied. Statistical methods were used to analyze particle characteristics to see which factors have a significant effect. A transparent duct model was carefully designed and fabricated for the laser-based instrumentation of solids-flow monitoring (LISM). One set of experiments was conducted with two different kinds of particles and four different particle diameters. The particle types were organic particles and sawdust particles with diameter ranges of 75-150 microns, 150-250 microns, 250-355 microns and 355-425 microns. The densities of the particles were measured to see how they affected the test results. The experiment was also conducted with humid particles and fog particles. To generate humid particles, a humidifier was used; a pipe was connected to the humidifier to lead the particle flow to the intersection of the laser beams. The test results indicated that the mean diameter of the humid particles was between 6.1703 microns and 6.6947 microns when the humid particle flow was low. When the humid particle flow was high, the mean diameter was between 6.6728 microns and 7.1872 microns. The mean velocity was between 1.3394 m/sec and 1.4556 m/sec at low humid particle flow, and between 1.5694 m/sec and 1.7856 m/sec at high humid particle flow. The Air Flow Module TQ AF 17 and Shell Ondina oil were used to generate fog particles: after the oil was heated inside the fog generator, a blower carried the fog along the pipe to the intersection of the laser beams. The mean diameter of the fog particles was 5.765 microns, smaller than that of the humid particles. The mean velocity of the fog particles was about 3.76 m/sec, greater than that of the humid particles. A second set of experiments was conducted with four different kinds of particles and five different particle diameters. The particle types were organic particles, coal particles, potato particles and wheat particles with diameter ranges of 63-75 microns, less than 150 microns, 150-250 microns, 250-355 microns and 355-425 microns. To control the flow rate, the control gate of the particle dispensing hopper was adjusted to 1/16, 1/8 and 1/4 open rates. The captured image ranges were 0 cm to 5 cm, 5 cm to 10 cm and 10 cm to 15 cm from the control gate. Some of these experiments were conducted under both open and closed environment conditions. Thus these experiments had a total of five parameters: particle type, particle diameter, flow rate, observation range, and environment conditions.
For the coal particles (diameter between 63 and 75 microns) tested under the closed environment condition, three factors were considered as affecting factors: open rate, observation range, and environment conditions. In this experiment, the interaction of open rate and observation range had a significant effect on the lower limit. On the upper limit, the open rate and the environment conditions had significant effects, as did the interaction of open rate and environment conditions. For the coal particles (diameter between 63 and 75 microns) tested under the open environment condition, two factors were considered as affecting factors: the open rate and the observation range. In this experiment, there was no significant effect on the lower limit. On the upper limit, the observation range had a significant effect, and the interaction of open rate and observation range was a significant source of variation at the 95% confidence level based on analysis of variance (ANOVA) results.
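The significance tests summarized above are analysis-of-variance comparisons. A minimal one-way ANOVA sketch with scipy is shown below, using synthetic velocity samples for three hypothetical gate open rates rather than the measured data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic particle-velocity samples (m/s) for three hypothetical gate open rates
v_open_1_16 = rng.normal(1.35, 0.10, size=30)
v_open_1_8 = rng.normal(1.45, 0.10, size=30)
v_open_1_4 = rng.normal(1.70, 0.10, size=30)

# One-way ANOVA: does the open rate have a significant effect on mean velocity?
f_stat, p_value = stats.f_oneway(v_open_1_16, v_open_1_8, v_open_1_4)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
print("significant at 95% confidence" if p_value < 0.05 else "not significant")
```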
Guo, Shuang; Qiu, Bai-Ling; Zhu, Chen-Qi; Yang, Ya-Ya Gao; Wu, Di; Liang, Qi-Hui; Han, Nan-Yin
2016-09-15
Gravitational field-flow fractionation (GrFFF) is a useful technique for the separation and characterization of micrometer-sized particles. The elution behavior of micrometer-sized particles in GrFFF was investigated in this study. Particles in the GrFFF channel are subject to hydrodynamic lift forces (HLF), fluid inertial forces and gravity, which drive them to different velocities in the carrier flow, resulting in a size-based separation. The effects of ionic strength, flow rate and viscosity as well as methanol were investigated using polystyrene latex beads as model particles. This study is devoted to experimental verification of the effect of each factor and of their combined action. All experiments were designed to isolate the influence of each variable factor. An orthogonal design test was used to evaluate the various factors comprehensively. Results suggested that the retention ratio of particles increases with increasing flow rate or viscosity of the carrier liquid, which adjusts the external forces acting on the particles. In addition, the retention ratio increases as ionic strength decreases because of decreased electrostatic repulsion between the particles and the channel accumulation wall. For methanol, there is no general trend because both the density and the viscosity change. On the basis of the orthogonal design test, it was found that the viscosity of the carrier liquid plays a significant role in determining the resolution of micrometer-sized particles in GrFFF. Copyright © 2016 Elsevier B.V. All rights reserved.
In situ recording of particle network formation in liquids by ion conductivity measurements.
Pfaffenhuber, Christian; Sörgel, Seniz; Weichert, Katja; Bele, Marjan; Mundinger, Tabea; Göbel, Marcus; Maier, Joachim
2011-09-21
The formation of fractal silica networks from a colloidal initial state was followed in situ by ion conductivity measurements. The underlying effect is a high interfacial lithium ion conductivity arising when silica particles are brought into contact with Li salt-containing liquid electrolytes. The experimental results were modeled using Monte Carlo simulations and tested using confocal fluorescence laser microscopy and ζ-potential measurements.
Design and validation of a passive deposition sampler.
Einstein, Stephanie A; Yu, Chang-Ho; Mainelis, Gediminas; Chen, Lung Chi; Weisel, Clifford P; Lioy, Paul J
2012-09-01
A new, passive particle deposition air sampler, called the Einstein-Lioy Deposition Sampler (ELDS), has been developed to fill a gap in passive sampling for near-field particle emissions. The sampler can be configured in several ways: with a protective hood for outdoor sampling, without a protective hood, and as a dust plate. In addition, there is an XRF-ready option that allows for direct sampling onto a filter-mounted XRF cartridge, which can be used in conjunction with all configurations. A wind tunnel was designed and constructed to test the performance of different sampler configurations using a test dust with a known particle size distribution. The sampler configurations were also tested against each other to evaluate whether or not the protective hood would affect the collected particle size distribution. A field study was conducted to test the sampler under actual environmental conditions and to evaluate its ability to collect samples for chemical analysis. Individual experiments for each configuration demonstrated the precision of the sampler. The field experiment demonstrated the ability of the sampler both to collect mass and to allow for the measurement of an environmental contaminant, i.e., Cr(6+). The hooded and non-hooded ELDS configurations were statistically indistinguishable from each other and from the test dust; thus, the sampler can be used indoors and outdoors in a variety of configurations to suit the user's needs.
Ground-Based Aerosol Measurements | Science Inventory ...
Atmospheric particulate matter (PM) is a complex chemical mixture of liquid and solid particles suspended in air (Seinfeld and Pandis 2016). Measurements of this complex mixture form the basis of our knowledge regarding particle formation, source-receptor relationships, data to test and verify complex air quality models, and how PM impacts human health, visibility, global warming, and ecological systems (EPA 2009). Historically, PM samples have been collected on filters or other substrates with subsequent chemical analysis in the laboratory, and this is still the major approach for routine networks (Chow 2005; Solomon et al. 2014) as well as in research studies. In this approach, air, at a specified flow rate and time period, is typically drawn through an inlet, usually a size selective inlet, and then drawn through filters ...
Numerical Study of Suspension Plasma Spraying
NASA Astrophysics Data System (ADS)
Farrokhpanah, Amirsaman; Coyle, Thomas W.; Mostaghimi, Javad
2017-01-01
A numerical study of suspension plasma spraying is presented in the current work. The liquid suspension jet is replaced with a train of droplets containing the suspension particles injected into the plasma flow. Atomization, evaporation, and melting of different components are considered for droplets and particles as they travel toward the substrate. Effect of different parameters on particle conditions during flight and upon impact on the substrate is investigated. Initially, influence of the torch operating conditions such as inlet flow rate and power is studied. Additionally, effect of injector parameters like injection location, flow rate, and angle is examined. The model used in the current study takes high-temperature gradients and non-continuum effects into account. Moreover, the important effect of change in physical properties of suspension droplets as a result of evaporation is included in the model. These mainly include variations in heat transfer properties and viscosity. Utilizing this improved model, several test cases have been considered to better evaluate the effect of different parameters on the quality of particles during flight and upon impact on the substrate.
Many-Body Localization and Quantum Nonergodicity in a Model with a Single-Particle Mobility Edge.
Li, Xiaopeng; Ganeshan, Sriram; Pixley, J H; Das Sarma, S
2015-10-30
We investigate many-body localization in the presence of a single-particle mobility edge. By considering an interacting deterministic model with an incommensurate potential in one dimension we find that the single-particle mobility edge in the noninteracting system leads to a many-body mobility edge in the corresponding interacting system for certain parameter regimes. Using exact diagonalization, we probe the mobility edge via energy resolved entanglement entropy (EE) and study the energy resolved applicability (or failure) of the eigenstate thermalization hypothesis (ETH). Our numerical results indicate that the transition separating area and volume law scaling of the EE does not coincide with the nonthermal to thermal transition. Consequently, there exists an extended nonergodic phase for an intermediate energy window where the many-body eigenstates violate the ETH while manifesting volume law EE scaling. We also establish that the model possesses an infinite temperature many-body localization transition despite the existence of a single-particle mobility edge. We propose a practical scheme to test our predictions in atomic optical lattice experiments which can directly probe the effects of the mobility edge.
NASA Astrophysics Data System (ADS)
Tiguercha, Djlalli; Bennis, Anne-claire; Ezersky, Alexander
2015-04-01
The elliptical motion in surface waves causes an oscillating motion of the sand grains, leading to the formation of ripple patterns on the bottom. Investigating how grains with different properties are distributed inside the ripples is a difficult task because of particle segregation. The work of Fernandez et al. (2003) was extended from the one-dimensional to the two-dimensional case. A new numerical model, based on these non-linear diffusion equations, was developed to simulate the grain distribution inside marine sand ripples. The one- and two-dimensional models are validated on several test cases where segregation appears. Starting from a homogeneous mixture of grains, the two-dimensional simulations demonstrate different segregation patterns: a) formation of zones with high concentrations of light and heavy particles, b) formation of «cat's eye» patterns, c) appearance of the inverse Brazil nut effect. Comparisons of numerical results with the new set of field data and wave flume experiments show that the two-dimensional non-linear diffusion equations allow us to reproduce qualitatively the experimental results on particle segregation.
NASA Astrophysics Data System (ADS)
O'Brien, Leela; Gruen, E.; Sternovsky, Z.; Horanyi, M.; Juhasz, A.; Eberhard, M.; Srama, R.
2013-10-01
The development of the Nano-Dust Analyzer (NDA) instrument and the results from the first laboratory testing and calibration are reported. The two STEREO spacecraft have indicated that nanometer-sized dust particles, potentially with very high flux, are delivered to 1 AU from the inner solar system [Meyer-Vernet, N. et al., Solar Physics, 256, 463, 2009]. These particles are generated by collisional grinding or evaporation near the Sun and accelerated outward by the solar wind. The temporal variability reveals the complex interaction with the solar wind magnetic field within 1 AU, provides the means to learn about solar wind conditions, and can supply additional parameters or verification for heliospheric magnetic field models. The composition analysis will report on the processes that generated the nanometer-sized particles. NDA is a highly sensitive dust analyzer that is being developed under NASA's Heliophysics program. The instrument is a linear time-of-flight mass analyzer that utilizes dust impact ionization and is modeled after the Cosmic Dust Analyzer (CDA) on Cassini. By applying technologies implemented in solar wind instruments and coronagraphs, the highly sensitive dust analyzer can be pointed toward the solar direction. A laboratory prototype has been built, tested, and calibrated at the dust accelerator facility at the University of Colorado, Boulder, using particles with 1 to over 50 km/s velocity. NDA is unique in its requirement to operate with the Sun in its field-of-view. A light trap system has been designed and optimized in terms of geometry and surface optical properties to mitigate the solar UV contribution to detector noise. In addition, results from laboratory tests performed with a 1 keV ion beam at the University of New Hampshire's Space Sciences Facility confirm the effectiveness of the instrument's solar wind particle rejection system.
Rabin, Bernard M; Joseph, James A; Shukitt-Hale, Barbara; Carrihill-Knoll, Kirsty L
2012-02-01
Previous research has shown a progressive deterioration in cognitive performance in rats exposed to (56)Fe particles as a function of age. The present experiment was designed to evaluate the effects of age of irradiation independently of the age of testing. Male Fischer-344 rats, 2, 7, 12, and 16 months of age, were exposed to 25-200 cGy of (56)Fe particles (1,000 MeV/n). Following irradiation, the rats were trained to make an operant response on an ascending fixed-ratio reinforcement schedule. When performance was evaluated as a function of both age of irradiation and testing, the results showed a significant effect of age on the dose needed to produce a performance decrement, such that older rats exposed to lower doses of (56)Fe particles showed a performance decrement compared to younger rats. When performance was evaluated as a function of age of irradiation with the age of testing held constant, the results indicated that age of irradiation was a significant factor influencing operant responding, such that older rats tested at similar ages and exposed to similar doses of (56)Fe particles showed similar performance decrements. The results are interpreted as indicating that the performance decrement is not a function of age per se, but instead is dependent upon an interaction between the age of irradiation, the age of testing, and exposure to HZE particles. The nature of these effects and how age of irradiation affects cognitive performance after an interval of 15 to 16 months remains to be established.
NASA Astrophysics Data System (ADS)
Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei
2014-10-01
Full wave 3-D modeling of RF fields in a hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such a plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies numerical solution of the full wave 3-D problem. Preliminary results of a feasibility analysis of numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches for the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
Modelling of Coke Layer Collapse during Ore Charging in Ironmaking Blast Furnace by DEM
NASA Astrophysics Data System (ADS)
Narita, Yoichi; Mio, Hiroshi; Orimoto, Takashi; Nomura, Seiji
2017-06-01
A technical issue in ironmaking blast furnace operation is to realize the optimum layer thickness and radial distribution of burden (ore and coke) to enhance its efficiency and productivity. When ore particles are charged onto the already-embedded coke layer, the coke layer-collapse phenomenon occurs. This phenomenon has a significant effect on the radial distribution of ore and coke layer thickness. In this paper, the mechanical properties of the coke packed bed under ore charging were investigated by an impact-loading test and a large-scale direct shear test. Experimental results show that the coke particles are broken by the impact force of ore charging, and this particle breakage weakens the coke-layer strength. The contact-force expression for coke in the Discrete Element Method (DEM) was modified based on the measured data and then applied to a 1/3-scale experiment on the coke-collapse phenomenon. The simulation with the modified model and the 1/3-scale experiment agreed well in terms of the burden distribution.
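DEM contact laws of the kind modified in the study compute a normal force from the overlap between two particles. A minimal linear spring-dashpot sketch follows; the stiffness and damping values are generic placeholders, not the calibrated coke parameters.

```python
import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1.0e5, c_n=5.0):
    """Linear spring-dashpot normal force on particle i from particle j.

    k_n, c_n : generic stiffness (N/m) and damping (N s/m), placeholder values.
    Returns the zero vector if the particles do not overlap.
    """
    rij = x_i - x_j
    dist = np.linalg.norm(rij)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = rij / dist                                   # unit normal, pointing from j to i
    vn = np.dot(v_i - v_j, n)                        # normal relative velocity
    return (k_n * overlap - c_n * vn) * n

# Two 10 mm particles with a 2 mm overlap, at rest (illustrative geometry)
f = normal_contact_force(np.array([0.0, 0.0, 0.0]), np.array([0.018, 0.0, 0.0]),
                         np.zeros(3), np.zeros(3), r_i=0.01, r_j=0.01)
print(f)
```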
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2004-01-01
The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is smaller than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas heavily contaminated in the Chernobyl disaster is included.
Disambiguating seesaw models using invariant mass variables at hadron colliders
Dev, P. S. Bhupal; Kim, Doojin; Mohapatra, Rabindra N.
2016-01-19
Here, we propose ways to distinguish between different mechanisms behind the collider signals of TeV-scale seesaw models for neutrino masses using kinematic endpoints of invariant mass variables. We particularly focus on two classes of such models widely discussed in the literature: (i) the Standard Model extended by the addition of singlet neutrinos and (ii) Left-Right Symmetric Models. Relevant scenarios involving the same "smoking-gun" collider signature of dilepton plus dijet with no missing transverse energy differ from one another by their event topology, resulting in distinctive relationships among the kinematic endpoints to be used for discerning them at hadron colliders. Furthermore, these kinematic endpoints are readily translated to the mass parameters of the on-shell particles through simple analytic expressions which can be used for measuring the masses of the new particles. We also conducted a Monte Carlo simulation with detector effects in order to test the viability of the proposed strategy in a realistic environment. Finally, we discuss the future prospects of testing these scenarios at the √s = 14 and 100 TeV hadron colliders.
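The invariant mass variables referred to above are built from the four-momenta of the visible leptons and jets. A minimal sketch of the dilepton and dilepton-plus-dijet invariant masses is given below, using toy four-vectors rather than simulation output.

```python
import math

def invariant_mass(*p4s):
    """Invariant mass of a set of four-momenta given as (E, px, py, pz) in GeV."""
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Toy event: two leptons and two jets, (E, px, py, pz) in GeV (made-up values)
l1, l2 = (120.0, 80.0, 40.0, 60.0), (90.0, -30.0, 70.0, -20.0)
j1, j2 = (300.0, 150.0, -100.0, 200.0), (250.0, -120.0, 80.0, -150.0)

print("m(ll)   =", invariant_mass(l1, l2))
print("m(lljj) =", invariant_mass(l1, l2, j1, j2))
```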
Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.
Kappel, S; Boulyga, S F; Dorta, L; Günther, D; Hattendorf, B; Koffler, D; Laaha, G; Leisch, F; Prohaska, T
2013-03-01
Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e. 'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated. The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U isotope ratios of particles deposited on the NUSIMEP-7 test samples.
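To make the three evaluation strategies concrete, here is a schematic Python comparison on synthetic transient signals (the intensities and noise model are invented, not the study's data): per-point ratios, the ratio of integrated intensities, and the slope of a linear regression of the minor-isotope signal against the major-isotope signal.

import numpy as np

rng = np.random.default_rng(0)
true_ratio = 0.00725                                     # illustrative 235U/238U value
i238 = np.sin(np.linspace(0.2, np.pi - 0.2, 200)) * 1e5  # synthetic transient 238U signal
i235 = true_ratio * i238
i238 = i238 + rng.normal(0, 300, i238.size)              # synthetic detector noise
i235 = i235 + rng.normal(0, 30, i235.size)

r_pbp = np.mean(i235 / i238)              # 'point-by-point': mean of per-point ratios
r_int = i235.sum() / i238.sum()           # 'integration': ratio of summed intensities
r_slope = np.polyfit(i238, i235, 1)[0]    # 'linear regression slope': slope of i235 vs i238

print(r_pbp, r_int, r_slope)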
Implications of Atmospheric Test Fallout Data for Nuclear Winter.
NASA Astrophysics Data System (ADS)
Baker, George Harold, III
1987-09-01
Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) Stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) Continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) Local fallout from selected Nevada tests (16 events). The contribution of dust to possible long term climate effects following a nuclear exchange depends strongly on the particle size distribution. The distribution affects both the atmospheric residence time and optical depth. One dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than number size distributions used in initial nuclear winter studies. In addition, both ^{90}Sr and ^{185}W fallout behavior is better predicted by the lognormal distribution function than the prevalent power law hybrid function. It is shown that the power law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near surface bursts. One distribution governs late time stratospheric fallout; the other governs early time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population. The distribution predicts higher initial sunlight attenuation and lower late time attenuation than the power law hybrid function used in initial nuclear winter studies.
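A bimodal lognormal number-size distribution of the kind proposed here is simply the sum of two lognormal modes. The sketch below illustrates the functional form only; the mode parameters are placeholders, not the fitted fallout values.

import numpy as np

def lognormal_mode(d, n_total, d_median, gsd):
    """Number density dN/dlnD of a single lognormal mode with geometric
    standard deviation gsd and median diameter d_median."""
    s = np.log(gsd)
    return (n_total / (np.sqrt(2.0 * np.pi) * s)
            * np.exp(-0.5 * (np.log(d / d_median) / s) ** 2))

def bimodal(d, mode1, mode2):
    """Sum of two lognormal modes, e.g. a fine 'stratospheric' mode and a
    coarse 'local fallout' mode (parameters are illustrative assumptions)."""
    return lognormal_mode(d, *mode1) + lognormal_mode(d, *mode2)

d = np.logspace(-1, 3, 400)                                   # diameter, micrometres
dNdlnD = bimodal(d, (1.0e4, 0.3, 2.0), (5.0e2, 50.0, 2.5))    # placeholder parameters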
NASA Astrophysics Data System (ADS)
Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.
2015-12-01
This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nanodiesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For ANFIS modelling, a Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be the optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and silver nanoparticles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to the diesel fuel increased engine power and torque output. For the nano-diesel blends, the brake specific fuel consumption (bsfc) decreased compared to neat diesel fuel. The results also showed that as the nanoparticle concentration in the diesel fuel increased (from 40 ppm to 120 ppm), CO2 emission increased, while CO emission was significantly lower than for pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel but increased with the fuels containing CNT nanoparticles; the trend of NOx emission was the inverse of the UHC trend, with NOx increasing when nanoparticles were added relative to neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve combustion and significantly reduce exhaust emissions.
Sharif, Elham; Kiely, Janice; Wraith, Patrick; Luxton, Richard
2013-05-01
A novel, integrated lysis and immunoassay methodology and system for intracellular protein measurement are described. The method uses paramagnetic particles both as a lysis agent and assay label resulting in a rapid test requiring minimal operator intervention, the test being homogeneous and completed in less than 10 min. A design study highlights the critical features of the magnetic detection system used to quantify the paramagnetic particles and a novel frequency-locked loop-based magnetometer is presented. A study of paramagnetic particle enhanced lysis demonstrates that the technique is more than twice as efficient at releasing intracellular protein as ultrasonic lysis alone. Results are presented for measurements of intracellular prostate specific antigen in an LNCAP cell line. This model was selected to demonstrate the rapidity and efficiency of intracellular protein quantification. It was shown that, on average, LNCAP cells contained 0.43 fg of prostate specific antigen. This system promises an attractive solution for applications that require a rapid determination of intracellular proteins.
In Silico Synthesis of Microgel Particles
2017-01-01
Microgels are colloidal-scale particles individually made of cross-linked polymer networks that can swell and deswell in response to external stimuli, such as changes to temperature or pH. Despite a large amount of experimental activities on microgels, a proper theoretical description based on individual particle properties is still missing due to the complexity of the particles. To go one step further, here we propose a novel methodology to assemble realistic microgel particles in silico. We exploit the self-assembly of a binary mixture composed of tetravalent (cross-linkers) and bivalent (monomer beads) patchy particles under spherical confinement in order to produce fully bonded networks. The resulting structure is then used to generate the initial microgel configuration, which is subsequently simulated with a bead–spring model complemented by a temperature-induced hydrophobic attraction. To validate our assembly protocol, we focus on a small microgel test case and show that we can reproduce the experimental swelling curve by appropriately tuning the confining sphere radius, something that would not be possible with less sophisticated assembly methodologies, e.g., in the case of networks generated from an underlying crystal structure. We further investigate the structure (in reciprocal and real space) and the swelling curves of microgels as a function of temperature, finding that our results are well described by the widely used fuzzy sphere model. This is a first step toward a realistic modeling of microgel particles, which will pave the way for a careful assessment of their elastic properties and effective interactions. PMID:29151620
Movement and collision of Lagrangian particles in hydro-turbine intakes: a case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Gomez, Pedro; Richmond, Marshall C.
Studies of the stress/survival of migratory fish during downstream passage through operating hydro-turbines are normally conducted to determine the fish-friendliness of units. One field approach consisting of recording extreme hydraulics with autonomous sensors is largely sensitive to the conditions of sensor release and the initial trajectories at the turbine intake. This study applies a modelling strategy based on flow simulations using computational fluid dynamics and Lagrangian particle tracking to represent the travel of live fish and autonomous sensor devices through hydro-turbine intakes. For the flow field calculation, the simulations were conducted with both a time-averaging turbulence model and an eddy-resolving technique. For the particle tracking calculation, different modelling assumptions for turbulence forcing, mass formulation, buoyancy, and release condition were tested. The modelling assumptions are evaluated with respect to data sets collected using a laboratory physical model and an autonomous sensor device deployed at Ice Harbor Dam (Snake River, State of Washington, U.S.A.) at the same discharge and release point as in the present computer simulations. We found an acceptable agreement between the simulated results and observed data and discuss relevant features of Lagrangian particle movement that are critical in turbine design and in the experimental design of field studies.
Spectrum-doubled heavy vector bosons at the LHC
Appelquist, Thomas; Bai, Yang; Ingoldby, James; ...
2016-01-19
We study a simple effective field theory incorporating six heavy vector bosons together with the standard-model field content. The new particles preserve custodial symmetry as well as an approximate left-right parity symmetry. The enhanced symmetry of the model allows it to satisfy precision electroweak constraints and bounds from Higgs physics in a regime where all the couplings are perturbative and where the amount of fine-tuning is comparable to that in the standard model itself. We find that the model could explain the recently observed excesses in di-boson processes at invariant mass close to 2 TeV from LHC Run 1 for a range of allowed parameter space. The masses of all the particles differ by no more than roughly 10%. In a portion of the allowed parameter space only one of the new particles has a production cross section large enough to be detectable with the energy and luminosity of Run 1, both via its decay to WZ and to Wh, while the others have suppressed production rates. Furthermore, the model can be tested at the higher-energy and higher-luminosity run of the LHC even for an overall scale of the new particles higher than 3 TeV.
Test of level density models from reactions of Li6 on Fe58 and Li7 on Fe57
NASA Astrophysics Data System (ADS)
Oginni, B. M.; Grimes, S. M.; Voinov, A. V.; Adekola, A. S.; Brune, C. R.; Carter, D. E.; Heinen, Z.; Jacobs, D.; Massey, T. N.; O'Donnell, J. E.; Schiller, A.
2009-09-01
The reactions of Li6 on Fe58 and Li7 on Fe57 have been studied at 15 MeV beam energy. These two reactions produce the same compound nucleus, Cu64. The charged particle spectra were measured at backward angles. The data obtained have been compared with Hauser-Feshbach model calculations. The level density parameters of Ni63 and Co60 have been obtained from the particle evaporation spectra. We also find contributions from the break up of the lithium projectiles to the low energy region of the α spectra.
Coarse-Grained Model for Water Involving a Virtual Site.
Deng, Mingsen; Shen, Hujun
2016-02-04
In this work, we propose a new coarse-grained (CG) model for water by combining the features of two popular CG water models (the BMW and MARTINI models) as well as by adopting a topology similar to that of the TIP4P water model. In this CG model, a CG unit, representing four real water molecules, consists of a virtual site, two positively charged particles, and a van der Waals (vdW) interaction center. A distance constraint is applied to the bonds formed between the vdW interaction center and the positively charged particles. The virtual site, which carries a negative charge, is determined by the locations of the two positively charged particles and the vdW interaction center. For the new CG model of water, we coined the name "CAVS" (charge is attached to a virtual site) due to the involvement of the virtual site. After being tested in molecular dynamics (MD) simulations of bulk water at various time steps, at different temperatures and in different salt (NaCl) concentrations, the CAVS model offers encouraging predictions for some bulk properties of water (such as density, dielectric constant, etc.) when compared to experimental ones.
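The geometric construction of a charged virtual site from real interaction centres is analogous to the TIP4P M-site. The sketch below simply places a virtual site along the vector from the vdW centre towards the midpoint of the two positive particles; both the construction rule and the offset fraction are illustrative assumptions, not the fitted CAVS geometry.

import numpy as np

def virtual_site(r_vdw, r_plus1, r_plus2, offset=0.35):
    """Place a virtual (negatively charged) site a fraction 'offset' of the way
    from the vdW centre towards the midpoint of the two positive particles.
    The offset value is an arbitrary illustrative number."""
    midpoint = 0.5 * (r_plus1 + r_plus2)
    return r_vdw + offset * (midpoint - r_vdw)

# usage with arbitrary coordinates (nm)
r_vdw = np.array([0.0, 0.0, 0.0])
rp1 = np.array([0.10, 0.08, 0.0])
rp2 = np.array([0.10, -0.08, 0.0])
print(virtual_site(r_vdw, rp1, rp2))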
NASA Technical Reports Server (NTRS)
Roesler, Collin S.; Pery, Mary Jane
1995-01-01
An inverse model was developed to extract the absorption and scattering (elastic and inelastic) properties of oceanic constituents from surface spectral reflectance measurements. In particular, phytoplankton spectral absorption coefficients, solar-stimulated chlorophyll a fluorescence spectra, and particle backscattering spectra were modeled. The model was tested on 35 reflectance spectra obtained from irradiance measurements in optically diverse ocean waters (0.07 to 25.35 mg/cu m range in surface chlorophyll a concentrations). The universality of the model was demonstrated by the accurate estimation of the spectral phytoplankton absorption coefficients over a range of 3 orders of magnitude (rho = 0.94 at 500 nm). Under most oceanic conditions (chlorophyll a less than 3 mg/cu m) the percent difference between measured and modeled phytoplankton absorption coefficients was less than 35%. Spectral variations in measured phytoplankton absorption spectra were well predicted by the inverse model. Modeled volume fluorescence was weakly correlated with measured chl a; fluorescence quantum yield varied from 0.008 to 0.09 as a function of environment and incident irradiance. Modeled particle backscattering coefficients were linearly related to total particle cross section over a twentyfold range in backscattering coefficients (rho = 0.996, n = 12).
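Reflectance inversions of this kind generally exploit the approximately linear dependence of irradiance reflectance on the ratio of backscattering to absorption. A minimal sketch of that relationship, using the commonly quoted R ≈ f·b_b/(a + b_b) form with an assumed proportionality factor f (not the study's actual inversion), is:

def reflectance(a, bb, f=0.33):
    """Irradiance reflectance just below the surface from total absorption a and
    backscattering bb (both m^-1); f = 0.33 is a commonly assumed factor."""
    return f * bb / (a + bb)

def invert_bb(R, a, f=0.33):
    """Invert the same relation for backscattering, given reflectance and absorption."""
    return R * a / (f - R)

a_total = 0.08      # m^-1, illustrative: water + phytoplankton + detrital absorption
bb_total = 0.004    # m^-1, illustrative particle + water backscattering
R = reflectance(a_total, bb_total)
print(R, invert_bb(R, a_total))   # the recovered bb equals the input value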
Velaga, Sitaram P; Djuris, Jelena; Cvijic, Sandra; Rozou, Stavroula; Russo, Paola; Colombo, Gaia; Rossi, Alessandra
2018-02-15
In vitro dissolution testing is routinely used in the development of pharmaceutical products. Whilst the dissolution testing methods are well established and standardized for oral dosage forms, i.e. tablets and capsules, there are no pharmacopoeial methods or regulatory requirements for testing the dissolution of orally inhaled powders. Despite this, a wide variety of dissolution testing methods for orally inhaled powders has been developed and their bio-relevance has been evaluated. This review provides an overview of the in vitro dissolution methodologies for dry inhalation products, with particular emphasis on dry powder inhalers, where the dissolution behavior of the respirable particles can have a role in the duration and absorption of the drug. Dissolution mechanisms of respirable particles as well as kinetic models are presented. More recent biorelevant dissolution set-ups and media for studying inhalation biopharmaceutics are also reviewed. In addition, factors affecting the interplay between dissolution and absorption of deposited particles are examined in the context of the biopharmaceutical considerations of inhalation products. Copyright © 2017 Elsevier B.V. All rights reserved.
Energetic Particles Dynamics in Mercury's Magnetosphere
NASA Technical Reports Server (NTRS)
Walsh, Brian M.; Ryou, A.S.; Sibeck, D. G.; Alexeev, I. I.
2013-01-01
We investigate the drift paths of energetic particles in Mercury's magnetosphere by tracing their motion through a model magnetic field. Test particle simulations solving the full Lorentz force show a quasi-trapped energetic particle population that gradient and curvature drifts around the planet via "Shabansky" orbits, passing through high latitudes on the compressed dayside and equatorial latitudes on the nightside. Due to their large gyroradii, energetic H+ and Na+ ions will typically collide with the planet or the magnetopause and will not be able to complete a full drift orbit. These simulations provide a direct comparison with recent spacecraft measurements from MESSENGER. Mercury's offset dipole results in an asymmetric loss cone and therefore an asymmetry in particle precipitation, with more particles precipitating in the southern hemisphere. Since the planet lacks an atmosphere, precipitating particles will collide directly with the surface of the planet. The incident charged particles can kick up neutrals from the surface and have implications for the formation of the exosphere and weathering of the surface.
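Test-particle tracing of this kind integrates the full Lorentz force along each trajectory. The sketch below is a generic Boris pusher in a uniform field, not the Mercury magnetospheric field model used by the authors; the field strength, time step, and initial conditions are placeholders.

import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """Advance one charged test particle by dt with the standard (non-relativistic)
    Boris scheme: half electric kick, magnetic rotation, half electric kick."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

# gyrate a proton in a uniform 100 nT field (illustrative values only)
q_over_m = 9.58e7                        # C/kg for a proton
B = np.array([0.0, 0.0, 100e-9])         # tesla
E = np.zeros(3)
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])   # m, m/s
for _ in range(1000):
    x, v = boris_push(x, v, q_over_m, E, B, 1.0e-3)
print(x)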
Physics with e+e- Linear Colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barklow, Timothy L
2003-05-05
We describe the physics potential of e+e- linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e- linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
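A hedged illustration of the kind of transformation described: if single event upsets arrive as a Poisson process with rate equal to the device cross-section times the particle flux, the time-domain reliability R(t) = exp(-λt) can be re-expressed in the fluence domain as R(Φ) = exp(-σΦ). The cross-section, flux, and mission duration below are invented, and a single upset is counted as a failure purely for illustration.

import numpy as np

sigma = 1.0e-10        # device SEU cross-section, cm^2/device (illustrative)
flux = 10.0            # particle flux, particles cm^-2 s^-1 (illustrative)
mission_s = 5 * 365 * 24 * 3600.0     # five-year mission, seconds

lam = sigma * flux                    # time-domain upset rate, upsets/s
fluence = flux * mission_s            # accumulated fluence, particles/cm^2

r_time = np.exp(-lam * mission_s)       # reliability expressed in the time domain
r_fluence = np.exp(-sigma * fluence)    # the same number in the fluence domain
print(r_time, r_fluence)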
Design and evaluation of an inlet conditioner to dry particles for real-time particle sizers.
Peters, Thomas M; Riss, Adam L; Holm, Ricky L; Singh, Manisha; Vanderpool, Robert W
2008-04-01
Real-time particle sizers provide rapid information about atmospheric particles, particularly peak exposures, which may be important in the development of adverse health outcomes. However, these instruments are subject to erroneous readings in high-humidity environments when compared with measurements from filter-based, federal reference method (FRM) samplers. Laboratory tests were conducted to evaluate the ability of three inlet conditioners to dry aerosol prior to entering a real-time particle sizer for measuring coarse aerosols (Model 3321 Aerodynamic Particle Sizer, APS) under simulated highly humid conditions. Two 30 day field studies in Birmingham, AL, USA were conducted to compare the response of two APSs operated with and without an inlet conditioner to that measured with FRM samplers. In field studies, the correlation of PM(10-2.5) derived from the APS and that measured with the FRM was substantially stronger with an inlet conditioner applied (r2 ranged from 0.91 to 0.99) than with no conditioner (r2 = 0.61). Laboratory experiments confirmed the ability of the heater and desiccant conditioner to remove particle-borne moisture. In field tests, water was found associated with particles across the sizing range of the APS (0.5 microm to 20 microm) when relative humidity was high in Birmingham. Certain types of inlet conditioners may substantially improve the correlation between particulate mass concentration derived from real-time particle sizers and filter-based samplers in humid conditions.
NASA Astrophysics Data System (ADS)
Bagheri, G.; Bonadonna, C.; Manzella, I.; Pontelandolfo, P.; Haas, P.
2012-12-01
A complete understanding and parameterization of both particle sedimentation and particle aggregation require systematic and detailed laboratory investigations performed under controlled conditions. For this purpose, a dedicated 4-meter-high vertical wind tunnel has been designed and constructed at the University of Geneva in collaboration with the Groupe de compétence en mécanique des fluides et procédés énergétiques (CMEFE). The final design is the result of Computational Fluid Dynamics simulations combined with laboratory tests. With its diverging test section, the tunnel is designed to suspend particles of different shapes and sizes in order to study the aerodynamic behavior of volcanic particles and their collision and aggregation. In the current set-up, velocities between 5.0 and 27 m s-1 can be obtained, which correspond to typical volcanic particles with diameters between 10 and 40 mm. A combination of Particle Tracking Velocimetry (PTV) and statistical methods is used to derive particle terminal velocity. The method is validated using smooth spherical particles with known drag coefficient. More than 120 particles of different shapes (i.e. spherical, regular and volcanic) and compositions were 3D-scanned, and almost 1 million images of their suspension in the test section of the wind tunnel were recorded by a high-speed camera and analyzed by a PTV code specially developed for the wind tunnel. Measured terminal velocities for the tested particles are between 3.6 and 24.9 m s-1, which corresponds to Reynolds numbers between 8×10^3 and 1×10^5. In addition to the vertical wind tunnel, an apparatus with a height varying between 0.5 and 3.5 m has been built to measure the terminal velocity of micrometric particles at Reynolds numbers between 4 and 100. In these experiments, particles are released individually in the air at the top of the apparatus and their terminal velocities are measured at the bottom by a combination of high-speed camera imaging and PTV post-processing. Effects of shape, porosity and orientation of the particles on their terminal velocity are studied. Various shape factors are measured with different methods, such as 3D-scanning, 2D-image processing, SEM image analysis, caliper measurements, pycnometer and buoyancy tests. Our preliminary experiments on non-smooth spherical particles and irregular particles reveal some interesting aspects. First, the effect of surface roughness and porosity is more important for spherical particles than for regular non-spherical and irregular particles. Second, the results underline how the aerodynamic behavior of individual irregular particles is better characterized by a range of drag coefficient values than by a single value. Finally, since all the shape factors are calculated precisely for each individual particle, the resulting database can provide important information to benchmark and improve existing terminal-velocity models. Modifications of the wind tunnel (i.e. very low air speeds of 0.03-5.0 m s-1 for the suspension of micrometric particles) and of the PTV code (i.e. multiple particle tracking and collision counting) have also been carried out, together with the installation of a particle charging device, a controlled humidifier and a high-power chiller (reaching temperatures down to -20 °C), in order to investigate both wet and dry aggregation of volcanic particles.
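Terminal velocity measurements like these are typically compared against a drag-law calculation in which the velocity and the Reynolds-number-dependent drag coefficient are solved self-consistently. A minimal fixed-point sketch for a smooth sphere in air, using a Clift-Gauvin-type drag correlation as an assumed drag law (not the shape-dependent laws investigated in the study), is:

import numpy as np

def drag_coefficient(re):
    """Clift-Gauvin-type sphere drag correlation (assumed; nominally valid to Re ~ 3e5)."""
    return 24.0 / re * (1.0 + 0.15 * re**0.687) + 0.42 / (1.0 + 4.25e4 * re**-1.16)

def terminal_velocity(d, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81, n_iter=100):
    """Fixed-point iteration for the terminal velocity of a smooth sphere.
    d in m, densities in kg/m^3; fluid properties are for air at room conditions."""
    v = 1.0                                     # initial guess, m/s
    for _ in range(n_iter):
        re = max(rho_f * v * d / mu, 1e-6)
        cd = drag_coefficient(re)
        v = np.sqrt(4.0 * g * d * (rho_p - rho_f) / (3.0 * cd * rho_f))
    return v

print(terminal_velocity(0.01, 1000.0))          # 1 cm, 1000 kg/m^3 pumice-like sphere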
Koivisto, Antti J; Jensen, Alexander C Ø; Kling, Kirsten I; Kling, Jens; Budtz, Hans Christian; Koponen, Ismo K; Tuinman, Ilse; Hussein, Tareq; Jensen, Keld A; Nørgaard, Asger; Levin, Marcus
2018-01-05
Here, we studied the particle release rate during electrostatic spray deposition of an anatase-(TiO2)-based photoactive coating onto tiles and wallpaper using a commercially available electrostatic spray device. Spraying was performed in a 20.3 m^3 test chamber while measuring concentrations of 5.6 nm to 31 μm-size particles and volatile organic compounds (VOC), as well as particle deposition onto room surfaces and on the spray gun user's hand. The particle emission and deposition rates were quantified using aerosol mass balance modelling. The geometric mean particle number emission rate was 1.9×10^10 s^-1 and the mean mass emission rate was 381 μg s^-1. The respirable mass emission rate was 65% lower than observed for the entire measured size range. The mass emission rates were linearly scalable (± ca. 20%) to the process duration. The particle deposition rates were up to 15 h^-1 for <1 μm-size particles, and the deposited particles consisted mainly of TiO2, TiO2 mixed with Cl and/or Ag, TiO2 particles coated with carbon, and Ag particles with sizes ranging from 60 nm to ca. 5 μm. As expected, no significant VOC emissions were observed as a result of spraying. Finally, we provide recommendations for exposure model parameterization. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
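The aerosol mass-balance approach used to back out emission rates can be illustrated with a single well-mixed-chamber model. The chamber volume below is the one quoted in the abstract, but the ventilation rate, deposition rate, and concentration are placeholder values, not the study's fitted parameters.

# Well-mixed chamber mass balance:  dC/dt = S/V - (k_vent + k_dep) * C
# At steady state the emission rate follows as  S = V * (k_vent + k_dep) * C_ss.

V = 20.3            # chamber volume, m^3 (from the study)
k_vent = 0.5        # air exchange rate, h^-1 (illustrative assumption)
k_dep = 2.0         # particle deposition rate, h^-1 (illustrative assumption)
C_ss = 200.0        # steady-state mass concentration, ug/m^3 (illustrative)

S = V * (k_vent + k_dep) * C_ss     # emission rate, ug/h
print(S / 3600.0, "ug/s")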
Fuzzy neural network technique for system state forecasting.
Li, Dezhi; Wang, Wilson; Ismail, Fathy
2013-10-01
In many system state forecasting applications, the prediction is performed based on multiple datasets, each corresponding to a distinct system condition. The traditional methods dealing with multiple datasets (e.g., vector autoregressive moving average models and neural networks) have some shortcomings, such as limited modeling capability and opaque reasoning operations. To tackle these problems, a novel fuzzy neural network (FNN) is proposed in this paper to effectively extract information from multiple datasets, so as to improve forecasting accuracy. The proposed predictor consists of both autoregressive (AR) node modeling and nonlinear node modeling; AR models/nodes are used to capture the linear correlation of the datasets, while the nonlinear correlation of the datasets is modeled with nonlinear neuron nodes. A novel particle swarm technique [i.e., the Laplace particle swarm (LPS) method] is proposed to facilitate parameter estimation of the predictor and improve modeling accuracy. The effectiveness of the developed FNN predictor and the associated LPS method is verified by a series of tests related to Mackey-Glass data forecasting, exchange rate data prediction, and gear system prognosis. Test results show that the developed FNN predictor and the LPS method can capture the dynamics of multiple datasets effectively and track system characteristics accurately.
NASA Technical Reports Server (NTRS)
Zhang, Ming
2005-01-01
The primary goal of this project was to perform theoretical calculations of the propagation of cosmic rays and energetic particles in 3-dimensional heliospheric magnetic fields. We used Markov stochastic process simulation to achieve this goal. We developed computational software that can be used to study particle propagation in two examples of heliospheric magnetic fields that must be treated in three dimensions: the heliospheric magnetic field suggested by Fisk (1996) and a global heliosphere including the region beyond the termination shock. The results from our model calculations were compared with particle measurements from Ulysses, from Earth-based spacecraft such as IMP-8, WIND and ACE, and from the Voyagers and Pioneers in the outer heliosphere as tests of the magnetic field models. We particularly looked for features of particle variations that allow us to clearly distinguish the Fisk magnetic field from the conventional Parker spiral field. The computer code will eventually lead to a new generation of integrated software for solving complicated problems of particle acceleration, propagation and modulation in a realistic 3-dimensional heliosphere with realistic magnetic fields and solar wind, within a single computational approach.
Meta-analysis inside and outside particle physics: two traditions that should converge?
Baker, Rose D; Jackson, Dan
2013-06-01
The use of meta-analysis in medicine and epidemiology really took off in the 1970s. However, in high-energy physics, the Particle Data Group has been carrying out meta-analyses of measurements of particle masses and other properties since 1957. Curiously, there has been virtually no interaction between those working inside and outside particle physics. In this paper, we use statistical models to study two major differences in practice. The first is the usefulness of systematic errors, which physicists are now beginning to quote in addition to statistical errors. The second is whether it is better to treat heterogeneity by scaling up errors as do the Particle Data Group or by adding a random effect as does the rest of the community. Besides fitting models, we derive and use an exact test of the error-scaling hypothesis. We also discuss the other methodological differences between the two streams of meta-analysis. Our conclusion is that systematic errors are not currently very useful and that the conventional random effects model, as routinely used in meta-analysis, has a useful role to play in particle physics. The moral we draw for statisticians is that we should be more willing to explore 'grassroots' areas of statistical application, so that good statistical practice can flow both from and back to the statistical mainstream. Copyright © 2012 John Wiley & Sons, Ltd.
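The two treatments of heterogeneity contrasted here can be sketched side by side: the Particle Data Group inflates the error of the weighted mean by a scale factor S = sqrt(chi^2/(N-1)) when S > 1, whereas a DerSimonian-Laird random-effects analysis adds a between-study variance tau^2. The toy measurements below are invented, and this is a schematic comparison rather than either paper's exact procedure.

import numpy as np

y = np.array([10.2, 9.6, 10.9, 10.4, 9.1])   # invented measurements
se = np.array([0.3, 0.4, 0.5, 0.3, 0.4])     # their quoted standard errors

w = 1.0 / se**2
mean_fe = np.sum(w * y) / np.sum(w)          # fixed-effect (inverse-variance) mean
se_fe = np.sqrt(1.0 / np.sum(w))
chi2 = np.sum(w * (y - mean_fe) ** 2)

# PDG-style: scale the error of the mean when chi2/(N-1) exceeds 1
scale = max(np.sqrt(chi2 / (len(y) - 1)), 1.0)
se_pdg = se_fe * scale

# DerSimonian-Laird random effects: add a between-study variance tau^2
tau2 = max((chi2 - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)), 0.0)
w_re = 1.0 / (se**2 + tau2)
mean_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(mean_fe, se_pdg, mean_re, se_re)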
Modeling of a Turbofan Engine with Ice Crystal Ingestion in the NASA Propulsion System Laboratory
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Jorgenson, Philip C. E.; Jones, Scott M.; Nili, Samaun
2017-01-01
The main focus of this study is to apply a computational tool for the flow analysis of the turbine engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center. The PSL has been used to test a highly instrumented Honeywell ALF502R-5A (LF11) turbofan engine at simulated altitude operating conditions. Test data analysis with an engine cycle code and a compressor flow code was conducted to determine the values of key icing parameters, that can indicate the risk of ice accretion, which can lead to engine rollback (un-commanded loss of engine thrust). The full engine aerothermodynamic performance was modeled with the Honeywell Customer Deck specifically created for the ALF502R-5A engine. The mean-line compressor flow analysis code, which includes a code that models the state of the ice crystal, was used to model the air flow through the fan-core and low pressure compressor. The results of the compressor flow analyses included calculations of the ice-water flow rate to air flow rate ratio (IWAR), the local static wet bulb temperature, and the particle melt ratio throughout the flow field. It was found that the assumed particle size had a large effect on the particle melt ratio, and on the local wet bulb temperature. In this study the particle size was varied parametrically to produce a non-zero calculated melt ratio in the exit guide vane (EGV) region of the low pressure compressor (LPC) for the data points that experienced a growth of blockage there, and a subsequent engine called rollback (CRB). At data points where the engine experienced a CRB having the lowest wet bulb temperature of 492 degrees Rankine at the EGV trailing edge, the smallest particle size that produced a non-zero melt ratio (between 3 percent - 4 percent) was on the order of 1 micron. This value of melt ratio was utilized as the target for all other subsequent data points analyzed, while the particle size was varied from 1 micron - 9.5 microns to achieve the target melt ratio. For data points that did not experience a CRB which had static wet bulb temperatures in the EGV region below 492 degrees Rankine, a non-zero melt ratio could not be achieved even with a 1 micron ice particle size. The highest value of static wet bulb temperature for data points that experienced engine CRB was 498 degrees Rankine with a particle size of 9.5 microns. Based on this study of the LF11 engine test data, the range of static wet bulb temperature at the EGV exit for engine CRB was in the narrow range of 492 degrees Rankine - 498 degrees Rankine , while the minimum value of IWAR was 0.002. The rate of blockage growth due to ice accretion and boundary layer growth was estimated by scaling from a known blockage growth rate that was determined in a previous study. These results obtained from the LF11 engine analysis formed the basis of a unique “icing wedge.”
Ion mobilities in diatomic gases: measurement versus prediction with non-specular scattering models.
Larriba, Carlos; Hogan, Christopher J
2013-05-16
Ion/electrical mobility measurements of nanoparticles and polyatomic ions are typically linked to particle/ion physical properties through either application of the Stokes-Millikan relationship or comparison to mobilities predicted from polyatomic models, which assume that gas molecules scatter specularly and elastically from rigid structural models. However, there is a discrepancy between these approaches; when specular, elastic scattering models (i.e., elastic-hard-sphere scattering, EHSS) are applied to polyatomic models of nanometer-scale ions with finite-sized impinging gas molecules, predictions are in substantial disagreement with the Stokes-Millikan equation. To rectify this discrepancy, we developed and tested a new approach for mobility calculations using polyatomic models in which non-specular (diffuse) and inelastic gas-molecule scattering is considered. Two distinct semiempirical models of gas-molecule scattering from particle surfaces were considered. In the first, which has been traditionally invoked in the study of aerosol nanoparticles, 91% of collisions are diffuse and thermally accommodating, and 9% are specular and elastic. In the second, all collisions are considered to be diffuse and accommodating, but the average speed of the gas molecules reemitted from a particle surface is 8% lower than the mean thermal speed at the particle temperature. Both scattering models attempt to mimic exchange between translational, vibrational, and rotational modes of energy during collision, as would be expected during collision between a nonmonoatomic gas molecule and a nonfrozen particle surface. The mobility calculation procedure was applied considering both hard-sphere potentials between gas molecules and the atoms within a particle and the long-range ion-induced dipole (polarization) potential. Predictions were compared to previous measurements in air near room temperature of multiply charged poly(ethylene glycol) (PEG) ions, which range in morphology from compact to highly linear, and singly charged tetraalkylammonium cations. It was found that both non-specular, inelastic scattering rules lead to excellent agreement between predictions and experimental mobility measurements (within 5% of each other) and that polarization potentials must be considered to make correct predictions for high-mobility particles/ions. Conversely, traditional specular, elastic scattering models were found to substantially overestimate the mobilities of both types of ions.
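For reference, the Stokes-Millikan relationship that the scattering calculations are being reconciled with relates electrical mobility to an effective sphere diameter through the Cunningham slip correction. The sketch below uses commonly quoted slip-correction constants and gas properties for air near room temperature; these values are assumptions and vary slightly between authors.

import numpy as np

def slip_correction(d, mfp=66.5e-9, a=1.257, b=0.40, c=1.110):
    """Cunningham slip correction Cc(d); mean free path and constants are assumed values."""
    kn = 2.0 * mfp / d                           # Knudsen number
    return 1.0 + kn * (a + b * np.exp(-c / kn))

def electrical_mobility(d, n_charge=1, mu_gas=1.82e-5):
    """Stokes-Millikan electrical mobility (m^2 V^-1 s^-1) of a sphere of diameter d (m)."""
    e = 1.602e-19
    return n_charge * e * slip_correction(d) / (3.0 * np.pi * mu_gas * d)

print(electrical_mobility(10e-9))                # ~10 nm singly charged particle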
Equations of motion of test particles for solving the spin-dependent Boltzmann–Vlasov equation
Xia, Yin; Xu, Jun; Li, Bao-An; ...
2016-06-16
A consistent derivation of the equations of motion (EOMs) of test particles for solving the spin-dependent Boltzmann–Vlasov equation is presented. The resulting EOMs in phase space are similar to the canonical equations in Hamiltonian dynamics, and the EOM of spin is the same as that in the Heisenberg picture of quantum mechanics. Considering further the quantum nature of spin and choosing the direction of total angular momentum in heavy-ion reactions as a reference for measuring nucleon spin, the EOMs of spin-up and spin-down nucleons are given separately. The key elements affecting the spin dynamics in heavy-ion collisions are identified. The resulting EOMs provide a solid foundation for using the test-particle approach in studying spin dynamics in heavy-ion collisions at intermediate energies. Future comparisons of model simulations with experimental data will help to constrain the poorly known in-medium nucleon spin–orbit coupling relevant for understanding properties of rare isotopes and their astrophysical impacts.
Yates, Christian A; Flegg, Mark B
2015-05-06
Spatial reaction-diffusion models have been employed to describe many emergent phenomena in biological systems. The modelling technique most commonly adopted in the literature implements systems of partial differential equations (PDEs), which assumes there are sufficient densities of particles that a continuum approximation is valid. However, owing to recent advances in computational power, the simulation, and therefore the postulation, of computationally intensive individual-based models has become a popular way to investigate the effects of noise in reaction-diffusion systems in which regions of low copy numbers exist. The specific stochastic models with which we shall be concerned in this manuscript are referred to as 'compartment-based' or 'on-lattice'. These models are characterized by a discretization of the computational domain into a grid/lattice of 'compartments'. Within each compartment, particles are assumed to be well mixed and are permitted to react with other particles within their compartment or to transfer between neighbouring compartments. Stochastic models provide accuracy, but at the cost of significant computational resources. For models that have regions of both low and high concentrations, it is often desirable, for reasons of efficiency, to employ coupled multi-scale modelling paradigms. In this work, we develop two hybrid algorithms in which a PDE in one region of the domain is coupled to a compartment-based model in the other. Rather than attempting to balance average fluxes, our algorithms answer a more fundamental question: 'how are individual particles transported between the vastly different model descriptions?' First, we present an algorithm derived by carefully redefining the continuous PDE concentration as a probability distribution. While this first algorithm shows very strong convergence to analytical solutions of test problems, it can be cumbersome to simulate. Our second algorithm is a simplified and more efficient implementation of the first; it is derived in the continuum limit over the PDE region alone. We test our hybrid methods for functionality and accuracy in a variety of different scenarios by comparing the averaged simulations with analytical solutions of PDEs for mean concentrations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Ground truth methods for optical cross-section modeling of biological aerosols
NASA Astrophysics Data System (ADS)
Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.
2011-05-01
Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench top measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic particle sizer (APS) or GRIMM aerosol monitor (GRIMM)) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross section modeling; and the Aspect, which measures particle shape, was employed as a path towards depolarization modeling.
NASA Technical Reports Server (NTRS)
Gray, Perry; Guven, Ibrahim
2016-01-01
A new facility for making small particle impacts is being developed at NASA. Current sand/particle impact facilities are erosion tests and do not precisely measure and document the size and velocity of each of the impacting particles. In addition, evidence of individual impacts is often obscured by subsequent impacts. This facility will allow the number, size, and velocity of each particle to be measured and adjusted. It will also be possible to determine which particle produced damage at a given location on the target. The particle size and velocity will be measured by high-speed imaging techniques. Information on the extent of damage and debris from impacts will also be recorded. It will be possible to track these secondary particles, measuring size and velocity. It is anticipated that this additional degree of detail will provide input for erosion models and also help determine the impact physics of the erosion process. Particle impacts will be recorded at 90 degrees to the particle flight path and also from the top looking through the target window material.
Modeling and simulation of dust behaviors behind a moving vehicle
NASA Astrophysics Data System (ADS)
Wang, Jingfang
Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behavior behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test the simulation in different scenarios and conditions. Calm weather, windy conditions, the vehicle turning left or right, and vehicle motion controlled by users from the GUI are all included. I have also tested, through the GUI or off-line scripts, the factors that affect the physical behavior and graphical appearance of the dust particles. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation. For example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculation of the numerical integration for fluid dynamics to convergence, which usually takes about 4-5 minutes to reach steady state.
Elevated-Temperature Mechanical Properties of Lead-Free Sn-0.7Cu-xSiC Nanocomposite Solders
NASA Astrophysics Data System (ADS)
Mohammadi, A.; Mahmudi, R.
2018-02-01
Mechanical properties of Sn-0.7 wt.%Cu lead-free solder alloy reinforced with 0 vol.%, 1 vol.%, 2 vol.%, and 3 vol.% 100-nm SiC particles have been assessed using the shear punch testing technique in the temperature range from 25°C to 125°C. The composite materials were fabricated by the powder metallurgy route through blending, compacting, sintering, and finally extrusion. In all conditions, the shear strength was adversely affected by increasing test temperature, and the 2 vol.% SiC-containing composite showed superior mechanical properties. Depending on the test temperature, the shear yield stress and ultimate shear strength increased, respectively, by 3 MPa to 4 MPa and 4 MPa to 5.5 MPa in the composite materials. The strength enhancement was mostly attributed to the Orowan particle strengthening mechanism due to the SiC nanoparticles, and to a lesser extent to the coefficient of thermal expansion mismatch between the particles and matrix in the composite solder. A modified shear lag model was used to predict the total strengthening achieved by particle addition, based on the contribution of each of the above mechanisms.
Consequences of the Breakout Model for Particle Acceleration in CMEs and Flares
NASA Technical Reports Server (NTRS)
Antiochos, S. K.; Karpen, J. T.; DeVore, C. R.
2011-01-01
The largest and most efficient particle accelerators in the solar system are the giant events consisting of a fast coronal mass ejection (CME) and an intense X-class solar flare. Both flares and CMEs can produce 10(exp 32) ergs or more in nonthermal particles. Two general processes are believed to be responsible: particle acceleration at the strong shock ahead of the CME, and reconnection-driven acceleration in the flare current sheet. Although shock acceleration is relatively well understood, the mechanism by which flare reconnection produces nonthermal particles is still an issue of great debate. We address the question of CME/flare particle acceleration in the context of the breakout model using 2.5D MHD simulations with adaptive mesh refinement (AMR). The AMR capability allows us to achieve ultra-high numerical resolution and, thereby, determine the detailed structure and dynamics of the flare reconnection region. Furthermore, we employ newly developed numerical analysis tools for identifying and characterizing magnetic nulls, so that we can quantify accurately the number and location of magnetic islands during reconnection. Our calculations show that flare reconnection is dominated by the formation of magnetic islands. In agreement with many other studies, we find that the number of islands scales with the effective Lundquist number. This result supports the recent work by Drake and co-workers that postulates particle acceleration by magnetic islands. On the other hand, our calculations also show that the flare reconnection region is populated by numerous shocks and other indicators of strong turbulence, which can also accelerate particles. We discuss the implications of our calculations for the flare particle acceleration mechanism and for observational tests of the models.
Kinsey, J S; Hays, M D; Dong, Y; Williams, D C; Logan, R
2011-04-15
This paper addresses the need for detailed chemical information on the fine particulate matter (PM) generated by commercial aviation engines. The exhaust plumes of seven turbofan engine models were sampled as part of the three test campaigns of the Aircraft Particle Emissions eXperiment (APEX). In these experiments, continuous measurements of black carbon (BC) and particle surface-bound polycyclic aromatic compounds (PAHs) were conducted. In addition, time-integrated sampling was performed for bulk elemental composition, water-soluble ions, organic and elemental carbon (OC and EC), and trace semivolatile organic compounds (SVOCs). The continuous BC and PAH monitoring showed a characteristic U-shaped curve of the emission index (EI or mass of pollutant/mass of fuel burned) vs fuel flow for the turbofan engines tested. The time-integrated EIs for both elemental composition and water-soluble ions were heavily dominated by sulfur and SO(4)(2-), respectively, with a ∼2.4% median conversion of fuel S(IV) to particle S(VI). The corrected OC and EC emission indices obtained in this study ranged from 37 to 83 mg/kg and 21 to 275 mg/kg, respectively, with the EC/OC ratio ranging from ∼0.3 to 7 depending on engine type and test conditions. Finally, the particle SVOC EIs varied by as much as 2 orders of magnitude with distinct variations in chemical composition observed for different engine types and operating conditions.
Thoury-Monbrun, Valentin; Gaucel, Sébastien; Rouessac, Vincent; Guillard, Valérie; Angellier-Coussy, Hélène
2018-06-15
This study aims at assessing the use of a quartz crystal microbalance (QCM) coupled with an adsorption system to measure water vapor transfer properties in micrometric-size cellulose particles. This apparatus allows measuring successfully water vapor sorption kinetics at successive relative humidity (RH) steps on a dispersion of individual micrometric-size cellulose particles (1 μg) with a total acquisition duration of the order of one hour. Apparent diffusivity and water uptake at equilibrium were estimated at each step of RH by considering two different particle geometries in mass transfer modeling, i.e. sphere or finite cylinder, based on the results obtained from image analysis. Water vapor diffusivity values varied from 2.4 × 10^-14 m^2 s^-1 to 4.2 × 10^-12 m^2 s^-1 over the tested RH range (0-80%) whatever the model used. A finite cylinder or spherical geometry could be used equally for diffusivity identification for a particle size aspect ratio lower than 2. Copyright © 2018 Elsevier Ltd. All rights reserved.
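Apparent diffusivities of this kind are typically extracted by fitting the analytical solution for transient sorption into the assumed particle geometry. As an illustration of the spherical case, here is a minimal sketch of Crank's series solution for uptake into a sphere with a constant surface concentration; the diffusivity and radius values are assumptions of the same order as the reported range, not fitted values.

import numpy as np

def sphere_uptake(t, D, radius, n_terms=50):
    """Fractional mass uptake Mt/Minf for diffusion into a sphere held at a
    constant surface concentration (Crank's series solution)."""
    n = np.arange(1, n_terms + 1)
    terms = np.exp(-(n**2) * np.pi**2 * D * t / radius**2) / n**2
    return 1.0 - (6.0 / np.pi**2) * np.sum(terms)

D = 1.0e-13            # m^2/s, assumed for illustration
radius = 10e-6         # 10 um particle radius, assumed
for t in (1.0, 10.0, 60.0, 300.0):                  # seconds
    print(t, sphere_uptake(t, D, radius))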
Detailed numerical investigation of the Bohm limit in cosmic ray diffusion theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussein, M.; Shalchi, A., E-mail: m_hussein@physics.umanitoba.ca, E-mail: andreasm4@yahoo.com
2014-04-10
A standard model in cosmic ray diffusion theory is the so-called Bohm limit in which the particle mean free path is assumed to be equal to the Larmor radius. This type of diffusion is often employed to model the propagation and acceleration of energetic particles. However, recent analytical and numerical work has shown that standard Bohm diffusion is not realistic. In the present paper, we perform test-particle simulations to explore particle diffusion in the strong turbulence limit in which the wave field is much stronger than the mean magnetic field. We show that there is indeed a lower limit of the particle mean free path along the mean field. In this limit, the mean free path is directly proportional to the unperturbed Larmor radius like in the traditional Bohm limit, but it is reduced by the factor δB/B0, where B0 is the mean field and δB the turbulent field. Although we focus on parallel diffusion, we also explore diffusion across the mean field in the strong turbulence limit.
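The scaling quoted above can be written down in a few lines. The sketch below compares the traditional Bohm mean free path (equal to the Larmor radius) with the reduced form implied by the abstract, λ ≈ r_L/(δB/B0); the particle speed, field strength, and turbulence level are illustrative placeholders.

def larmor_radius(speed_perp, b0, charge=1.602e-19, mass=1.673e-27):
    """Unperturbed Larmor radius r_L = m v_perp / (q B0) of a proton, SI units."""
    return mass * speed_perp / (charge * b0)

def bohm_mfp(r_l):
    """Traditional Bohm limit: parallel mean free path equal to the Larmor radius."""
    return r_l

def reduced_bohm_mfp(r_l, db_over_b0):
    """Strong-turbulence result described above: r_L reduced by the factor dB/B0 (> 1)."""
    return r_l / db_over_b0

r_l = larmor_radius(1.0e7, 5.0e-9)      # 10^7 m/s proton in a 5 nT field (illustrative)
print(bohm_mfp(r_l), reduced_bohm_mfp(r_l, db_over_b0=10.0))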
A challenge to lepton universality in B-meson decays
Ciezarek, Gregory; Franco Sevilla, Manuel; Hamilton, Brian; ...
2017-06-07
One of the key assumptions of the standard model of particle physics is that the interactions of the charged leptons, namely electrons, muons and taus, differ only because of their different masses. Whereas precision tests comparing processes involving electrons and muons have not revealed any definite violation of this assumption, recent studies of B-meson decays involving the higher-mass tau lepton have resulted in observations that challenge lepton universality at the level of four standard deviations. Here, a confirmation of these results would point to new particles or interactions, and could have profound implications for our understanding of particle physics.
Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers
Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny
2016-01-01
We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources. PMID:27570250
Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit
2009-01-01
Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.
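To fix ideas about the microdosimetric quantity mentioned, lineal energy is the energy imparted in a single event divided by the mean chord length of the sensitive site, which for a spherical site is two thirds of its diameter. A minimal conversion sketch with invented numbers:

def lineal_energy(energy_imparted_kev, site_diameter_um):
    """Lineal energy y = eps / l_bar, with l_bar = (2/3) * diameter for a spherical
    site (standard microdosimetry convention)."""
    mean_chord_um = (2.0 / 3.0) * site_diameter_um
    return energy_imparted_kev / mean_chord_um      # keV/um

# a single event depositing 1.5 keV in a 1-um tissue-equivalent sphere (invented values)
print(lineal_energy(1.5, 1.0))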
Local protoplanetary disk ionisation by T Tauri star energetic particles
NASA Astrophysics Data System (ADS)
Fraschetti, F.; Drake, J.; Cohen, O.; Garraffo, C.
2017-10-01
The evolution of protoplanetary disks is believed to be driven largely by viscosity. The ionization of the disk that gives rise to viscosity is caused by X-rays from the central star or by energetic particles released by shock waves travelling into the circumstellar medium. We have performed test-particle numerical simulations of GeV-scale protons traversing a realistic magnetised wind of a young solar mass star with a superposed small-scale turbulence. The large-scale field is generated via an MHD model of a T Tauri wind, whereas the isotropic (Kolmogorov power spectrum) turbulent component is synthesised along the particles' trajectories. We have combined Chandra observations of T Tauri flares with solar flare scaling for describing the energetic particle spectrum. In contrast with previous models, we find that the disk ionization is dominated by X-rays except within narrow regions where the energetic particles are channelled onto the disk by the strongly tangled and turbulent field lines; the radial thickness of such regions broadens with the distance from the central star (5 stellar radii or more). In those regions, the disk ionization due to energetic particles can locally dominate the stellar X-rays, arguably, out to large distances (10, 100 AU) from the star.
NASA Astrophysics Data System (ADS)
Alexander, Jennifer Mary
Atmospheric mineral dust has a large impact on the earth's radiation balance and climate. The radiative effects of mineral dust depend on factors including particle size, shape, and composition, all of which can be extremely complex. Mineral dust particles are typically irregular in shape and can include sharp edges, voids, and fine scale surface roughness. Particle shape can also depend on the type of mineral and can vary as a function of particle size. In addition, atmospheric mineral dust is a complex mixture of different minerals as well as other, possibly organic, components that have been mixed in while these particles are suspended in the atmosphere. Aerosol optical properties are investigated in this work, including studies of the effect of particle size, shape, and composition on the infrared (IR) extinction and visible scattering properties in order to achieve more accurate modeling methods. Studies of particle shape effects on dust optical properties for single component mineral samples of silicate clay and diatomaceous earth are carried out here first. Experimental measurements are modeled using T-matrix theory in a uniform spheroid approximation. Previous efforts to simulate the measured optical properties of silicate clay, using models that assumed particle shape was independent of particle size, have achieved only limited success. However, a model which accounts for a correlation between particle size and shape for the silicate clays offers a large improvement over earlier modeling approaches. Diatomaceous earth is also studied as an example of a single component mineral dust aerosol with extreme particle shapes. A particle shape distribution, determined by fitting the experimental IR extinction data, was used as a basis for modeling the visible light scattering properties. While the visible simulations show only modestly good agreement with the scattering data, the fits are generally better than those obtained using more commonly invoked particle shape distributions. The next goal of this work is to investigate whether modeling methods developed in the studies of single mineral components can be generalized to predict the optical properties of more authentic aerosol samples which are complex mixtures of different minerals. Samples of Saharan sand, Iowa loess, and Arizona road dust are used here as test cases. T-matrix based simulations of the authentic samples, using measured particle size distributions, empirical mineralogies, and a priori particle shape models for each mineral component, are directly compared with the measured IR extinction spectra and visible scattering profiles. This modeling approach offers a significant improvement over more commonly applied models that ignore variations in particle shape with size or mineralogy and include only a moderate range of shape parameters. Mineral dust samples processed with organic acids and humic material are also studied in order to explore how the optical properties of dust can change after being aged in the atmosphere. Processed samples include quartz mixed with humic material, and calcite reacted with acetic and oxalic acid. Clear differences in the light scattering properties are observed for all three processed mineral dust samples when compared to the unprocessed mineral dust or organic salt products. These interactions result in both internal and external mixtures depending on the sample. In addition, the presence of these organic materials can alter the mineral dust particle shape.
Overall, however, these results demonstrate the need to account for the effects of atmospheric aging of mineral dust on aerosol optical properties. Particle shape can also affect the aerodynamic properties of mineral dust aerosol. In order to account for these effects, the dynamic shape factor is used to give a measure of particle asphericity. Dynamic shape factors of quartz are measured by mass and mobility selecting particles and measuring their vacuum aerodynamic diameter. From this, dynamic shape factors in both the transition and vacuum regime can be derived. The measured dynamic shape factors of quartz agree quite well with the spheroidal shape distributions derived through studies of the optical properties.
Misconceptions of Selected Science Concepts Held by Elementary School Students
ERIC Educational Resources Information Center
Doran, Rodney L.
1972-01-01
Describes a test, administered as a motion picture, designed to measure misconceptions about the particle model of matter held by students in grades two through six. Reliability values for tests of eight misconceptions are given and the correlations of misconception scores with measures of IQ, reading, mathematics, and science ability reported.…
NASA Astrophysics Data System (ADS)
Raitoharju, Matti; Nurminen, Henri; Piché, Robert
2015-12-01
Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
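The paper's own state model is not reproduced in the abstract, so the snippet below is only a minimal sketch of the general idea of a linear state evolution usable by a Kalman filter in a WLAN-PDR setting: the state carries a 2-D position plus a per-step displacement, and a WLAN position fix updates it. The state layout and all noise values are assumptions for illustration.

import numpy as np

# State: [x, y, dx, dy] -- position plus the displacement applied each PDR step.
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])          # assumed linear state evolution
Q = np.diag([0.01, 0.01, 0.05, 0.05])     # process noise, illustrative values
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])          # WLAN fix observes position only
R = np.diag([4.0, 4.0])                   # WLAN measurement noise (m^2), assumed

def kf_step(x, P, z):
    """One predict/update cycle of a standard Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (z - H @ x)                           # update with WLAN position fix
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4)                 # unknown initial heading: displacement starts at zero
P = np.diag([25., 25., 4., 4.])
x, P = kf_step(x, P, z=np.array([3.0, 1.5]))          # one WLAN position measurement
print(x)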
Exotic quarks in Twin Higgs models
Cheng, Hsin -Chia; Jung, Sunghoon; Salvioni, Ennio; ...
2016-03-14
The Twin Higgs model provides a natural theory for the electroweak symmetry breaking without the need of new particles carrying the standard model gauge charges below a few TeV. In the low energy theory, the only probe comes from the mixing of the Higgs fields in the standard model and twin sectors. However, an ultraviolet completion is required below ~ 10 TeV to remove residual logarithmic divergences. In non-supersymmetric completions, new exotic fermions charged under both the standard model and twin gauge symmetries have to be present to accompany the top quark, thus providing a high energy probe of the model. Some of them carry standard model color, and may therefore be copiously produced at current or future hadron colliders. Once produced, these exotic quarks can decay into a top together with twin sector particles. If the twin sector particles escape the detection, we have the irreducible stop-like signals. On the other hand, some twin sector particles may decay back into the standard model particles with long lifetimes, giving spectacular displaced vertex signals in combination with the prompt top quarks. This happens in the Fraternal Twin Higgs scenario with typical parameters, and sometimes is even necessary for cosmological reasons. We study the potential displaced vertex signals from the decays of the twin bottomonia, twin glueballs, and twin leptons in the Fraternal Twin Higgs scenario. As a result, depending on the details of the twin sector, the exotic quarks may be probed up to ~ 2.5 TeV at the LHC and beyond 10 TeV at a future 100 TeV collider, providing a strong test of this class of ultraviolet completions.
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several soil loss models and empirical equations have been developed for the size distribution estimation of the sediment leaving the field, including the physically-based models and empirical equations. Usually, physically-based models require a large amount of data, sometimes exceeding the amount of available data in the modeled area. Conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils were used. These data were compiled from previous studies. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD. These variables were the particle size distribution in the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r² = 0.98, 0.97, and 0.86 for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r² = 0.74, 0.85, and 0.75 for sand, silt, and clay in the sediment, respectively). The artificial neural network was compared with three empirical models. The network presented better performance in predicting sediment PSD and differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, this model requires a small number of easily obtained variables, providing a convenient routine for predicting PSD in eroded sediment in other pollutant transport models. Copyright © 2017 Elsevier B.V. All rights reserved.
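The calibrated network itself is not given in the abstract, so the sketch below only illustrates the forward pass of a small network mapping the predictors named above (soil-matrix PSD, antecedent moisture, erodibility, slope) to sand/silt/clay fractions. The architecture, weights, and input values are placeholders, not the published model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs, loosely following the predictors named above:
# soil-matrix sand/silt/clay fractions, antecedent moisture, erodibility K, slope (%).
x = np.array([0.45, 0.35, 0.20, 0.18, 0.32, 9.0])

# A one-hidden-layer network; weights would normally come from calibration,
# here they are random placeholders just to show the forward pass.
W1, b1 = rng.normal(size=(8, x.size)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

h = np.tanh(W1 @ x + b1)                 # hidden layer
z = W2 @ h + b2
psd = np.exp(z) / np.exp(z).sum()        # softmax so sediment fractions sum to 1

print(dict(zip(["sand", "silt", "clay"], np.round(psd, 3))))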
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
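For readers unfamiliar with the optimization step, a generic particle swarm loop of the kind used to tune model parameters looks like the sketch below. The toy objective (mean absolute percentage error of a two-parameter growth curve against synthetic data) and all swarm settings are illustrative stand-ins, not the published PSO-NNGBM(1,1) implementation.

import numpy as np

rng = np.random.default_rng(1)

def mape(params, y):
    """Illustrative objective: MAPE of a toy two-parameter growth curve."""
    a, b = params
    t = np.arange(len(y))
    pred = a * np.exp(b * t)
    return np.mean(np.abs((y - pred) / y)) * 100.0

y_obs = 3.0 * np.exp(0.12 * np.arange(10)) * rng.normal(1.0, 0.02, 10)  # synthetic data

n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5          # swarm size and standard PSO weights
pos = rng.uniform([0.1, 0.0], [10.0, 0.5], (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([mape(p, y_obs) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mape(p, y_obs) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best parameters:", gbest, "MAPE:", pbest_f.min())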
Using Computer Simulations to Model Scoria Cone Growth
NASA Astrophysics Data System (ADS)
Bemis, K. G.; Mehta, R. D.
2016-12-01
Scoria cones form from the accumulation of scoria delivered by either bursting lava bubbles (Strombolian style eruptions) or the gas thrust of an eruption column (Hawaiian to sub-Plinian style eruptions). In this study, we focus on connecting the distribution of scoria delivery to the eventual cone shape rather than the specifics of the mechanism of delivery. For simplicity, we choose to model ballistic paths that follow the scoria from ejection at the crater to landing on the surface, followed by avalanching downslope. The first stage corresponds to Strombolian-like bursts of the bubble. The second stage occurs only where the local slope exceeds the angle of repose (about 30 degrees). After this condition is met, the scoria particles grain-flow downwards until a stable slope is formed. These two stages of the eruption repeat over a number of phases. We hypothesize that the horizontal travel distance of the ballistic paths, and as a result the width of the volcano, is primarily dependent on the velocity of the particles bursting from the bubble in the crater. Other parameters that may affect the shape of cinder cones are air resistance on ballistic paths, ranges in particle size, ballistic ejection angles, and the total number of particles. Ejection velocity, ejection angle, particle size and air resistance control the delivery distribution of scoria; a similar distribution of scoria can be obtained by sedimentation from columns, and the controlling parameters of such columns (gas thrust velocity, particle density, etc.) can be related to the ballistic delivery in terms of eruption energy and particle characteristics. We present a series of numerical experiments that test our hypotheses by varying different parameters, one or more at a time, in sets each designed to test a specific hypothesis. Volcano width increases as ejection velocity, ejection angle (measured from the surface), or the total number of scoria particles increases. Ongoing investigations seek the controls on crater width.
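A toy one-dimensional version of the two stages described above, ballistic emplacement followed by avalanching to the angle of repose, is sketched below purely to show how ejection velocity and angle control where scoria lands. The grid resolution, deposit thickness per particle, and relaxation rule are all assumptions, not the authors' simulation.

import numpy as np

g = 9.81
dx = 1.0                                   # grid spacing (m)
repose = np.tan(np.radians(30.0))          # critical slope (angle of repose ~ 30 deg)
heights = np.zeros(400)                    # 1-D height profile across the vent (m)
vent = 200                                 # vent cell index
rng = np.random.default_rng(2)

# Stage 1: ballistic emplacement of individual scoria particles.
for _ in range(20000):
    v = rng.normal(30.0, 5.0)                        # ejection speed (m/s), illustrative
    theta = np.radians(rng.uniform(45.0, 85.0))      # ejection angle from the surface
    r = v**2 * np.sin(2.0 * theta) / g               # ballistic range on flat ground
    cell = vent + int(round(rng.choice([-1, 1]) * r / dx))
    if 0 <= cell < heights.size:
        heights[cell] += 0.01                        # each particle adds a small thickness

# Stage 2: avalanche (grain flow) until no local slope exceeds the angle of repose.
for _ in range(100000):
    dh = np.diff(heights)
    excess = np.abs(dh) - repose * dx
    if excess.max() <= 1e-6:
        break
    i = int(np.argmax(excess))                       # relax the steepest face first
    hi, lo = (i, i + 1) if heights[i] > heights[i + 1] else (i + 1, i)
    move = 0.5 * excess[i]                           # move half the excess downslope
    heights[hi] -= move
    heights[lo] += move

print(f"max height {heights.max():.1f} m, cone width "
      f"{np.count_nonzero(heights > 0.1) * dx:.0f} m")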
NASA Astrophysics Data System (ADS)
Brown, Lloyd; Joyce, Peter; Radice, Joshua; Gregorian, Dro; Gobble, Michael
2012-07-01
The strain rate dependency of the mechanical properties of tungsten carbide (WC)-filled bronze castings fabricated by centrifugal and sedimentation-casting techniques is examined in this study. Both casting techniques are an attempt to produce a functionally graded material with high wear resistance at a chosen surface. Potential applications of such materials include shaft bushings, electrical contact surfaces, and brake rotors. Knowledge of strain rate-dependent mechanical properties is recommended for predicting component response due to dynamic loading or impact events. A brief overview of the casting techniques for the materials considered in this study is followed by an explanation of the test matrix and testing techniques. Hardness testing, density measurement, and determination of the volume fraction of WC particles are performed throughout the castings using both image analysis and optical microscopy. The effects of particle filling on mechanical properties are first evaluated through a microhardness survey of the castings. The volume fraction of WC particles is validated using a thorough density survey and a rule-of-mixtures model. Split Hopkinson Pressure Bar (SHPB) testing of various volume fraction specimens is conducted to determine the strain rate dependence of mechanical properties and to compare the process-property relationships between the two casting techniques. The baseline performances of C95400 bronze are provided for comparison. The results show that the addition of WC particles improves microhardness significantly for the centrifugally cast specimens, and, to a lesser extent, in the sedimentation-cast specimens, largely because the WC particles are more concentrated as a result of the centrifugal-casting process. Both metal matrix composites (MMCs) demonstrate strain rate dependency, with sedimentation casting having a greater, but more variable, effect on material response. This difference is attributed to legacy effects from the casting process, namely, porosity and localized WC particle grouping.
Discrete particle modeling and micromechanical characterization of bilayer tablet compaction.
Yohannes, B; Gonzalez, M; Abebe, A; Sprockel, O; Nikfar, F; Kiang, S; Cuitiño, A M
2017-08-30
A mechanistic particle scale model is proposed for bilayer tablet compaction. Making bilayer tablets involves the application of a first layer compaction pressure on the first layer powder and a second layer compaction pressure on the entire powder bed. The bonding formed between the first layer and the second layer particles is crucial for the mechanical strength of the bilayer tablet. The bonding and the contact forces between particles of the first layer and second layer are affected by the deformation and rearrangement of particles due to the compaction pressures. Our model takes into consideration the elastic and plastic deformations of the first layer particles due to the first layer compaction pressure, in addition to the mechanical and physical properties of the particles. Using this model, bilayer tablets with layers of the same material and of different materials, all commonly used pharmaceutical powders, are tested. The simulations show that the strength of the layer interface becomes weaker than the strength of the two layers as the first layer compaction pressure is increased. The reduction of strength at the layer interface is related to the reduction of the first layer surface roughness. The reduced roughness decreases the available bonding area and hence reduces the mechanical strength at the interface. In addition, the simulations show that at higher first layer compaction pressures the bonding area is significantly less than the total contact area at the layer interface. At the interface itself, there is a non-monotonic relationship between the bonding area and the first layer force: the bonding area at the interface first increases and then decreases as the first layer pressure is increased. These results are in agreement with findings of previous experimental studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Pitch Angle Dependence of Drift Resonant Ions Observed by the Van Allen Probes
NASA Astrophysics Data System (ADS)
Rankin, R.; Wang, C.; Wang, Y.; Zong, Q. G.; Zhou, X.
2017-12-01
Acceleration and modulation of ring current ions by poloidal mode ULF waves are investigated. A simplified MHD model of ULF waves in a dipole magnetic field is presented that includes phase mixing to perpendicular scales determined by the ionospheric Pedersen conductivity. The wave model is combined with a full Lorentz force test particle code to study drift and drift-bounce resonance wave-particle interactions. Ion trajectories are traced backward in time to an assumed form of the distribution function, and Liouville's method is used to reconstruct the phase space density (PSD) response to poloidal mode waves observed by the Van Allen Probes. In spite of its apparent simplicity, simulations using the wave and test particle models are able to explain the acceleration of ions and energy dispersion observed by the Van Allen Probes. The paper focuses on the pitch angle evolution of the initial PSD as it responds to the action of ULF waves. An interesting aspect of the study is the formation of butterfly ion distributions as ions make periodic radial oscillations across L. Ions become trapped in an effective potential well across a limited range of L and follow trajectories that cause them to surf along constant phase fronts. The implications of this new trapping mechanism for both ions and electrons are discussed.
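The backward-tracing step rests on Liouville's theorem, which for a collisionless population can be stated schematically as below (a textbook relation, not a formula quoted from the paper):

f(\mathbf{x},\mathbf{p},t) \;=\; f_{0}\!\left(\mathbf{X}_{0}(\mathbf{x},\mathbf{p},t),\,\mathbf{P}_{0}(\mathbf{x},\mathbf{p},t)\right),

where (X0, P0) are the phase-space coordinates reached by integrating the Lorentz-force trajectory backward in time from (x, p) at time t, and f0 is the assumed initial distribution function evaluated at those coordinates.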
Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method
Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao
2016-01-01
This paper presents a clump model based on the Discrete Element Method. The clump model is closer to real particle shapes than a spherical particle. Numerical simulations of several tests of dry granular flow impacting a rigid wall while flowing in an inclined chute have been performed. Five clump models with different sphericity were used in the simulations. By comparing the simulation results with the experimental results for the normal force on the rigid wall, a clump model with better sphericity was selected for the subsequent numerical simulation analysis and discussion. The calculated normal forces showed good agreement with the experimental results, verifying the effectiveness of the clump model. Then, the total normal force and bending moment on the rigid wall and the motion of the granular flow were further analyzed. Finally, a comparative analysis of numerical simulations using the clump model with different grain compositions was carried out. By observing the normal force on the rigid wall and the distribution of particle size at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall was revealed. The main finding is that, as the particle size increases, the peak force on the retaining wall also increases. The results can provide a basis for research on related disasters and for the design of protective structures. PMID:27513661
Training manuals for nondestructive testing using magnetic particles
NASA Technical Reports Server (NTRS)
1968-01-01
Training manuals containing the fundamentals of nondestructive testing using magnetic particles as the detection medium are used by metal parts inspectors and quality assurance specialists. Magnetic particle testing involves magnetization of the test specimen, application of the magnetic particles, and interpretation of the patterns formed.
NASA Astrophysics Data System (ADS)
Alves, S. G.; Martins, M. L.
2010-09-01
Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxy the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxy account for the simplest cluster size distributions of cellular aggregates observed in culture.
Mobility Research and Development (Briefing charts)
2016-03-17
Briefing charts by Dr. Paramsothy Jayakumar (STE, Analytics, Tank Automotive Research, Development and Engineering Center) on mobility research and development. The recoverable content notes that Dr. M. G. Bekker of TARDEC is regarded as the father of terrain-vehicle systems, that the NRMM mobility model was developed in 1960-70 by TARDEC and ERDC, and that current modeling building blocks include scaled experiments (particle image velocimetry, pressure-sinkage tests, direct shear tests) and simulations (single wheel tests, plate width 50 mm).
NASA Astrophysics Data System (ADS)
Filser, Juliane; Arndt, Darius; Baumann, Jonas; Geppert, Mark; Hackmann, Stephan; Luther, Eva M.; Pade, Christian; Prenzel, Katrin; Wigger, Henning; Arning, Jürgen; Hohnholt, Michaela C.; Köser, Jan; Kück, Andrea; Lesnikov, Elena; Neumann, Jennifer; Schütrumpf, Simon; Warrelmann, Jürgen; Bäumer, Marcus; Dringen, Ralf; von Gleich, Arnim; Swiderek, Petra; Thöming, Jorg
2013-01-01
Iron oxide nanoparticles (IONP) are currently being studied as green magnetic resonance imaging (MRI) contrast agents. They are also used in huge quantities for environmental remediation and water treatment purposes, although very little is known about the consequences of such applications for organisms and ecosystems. In order to address these questions, we synthesised polyvinylpyrrolidone-coated IONP, characterised the particle dispersion in various media and investigated the consequences of an IONP exposure using an array of biochemical and biological assays. Several theoretical approaches complemented the measurements. In aqueous dispersion IONP had an average hydrodynamic diameter of 25 nm and were stable over six days in most test media, which could also be predicted by stability modelling. The particles were tested in concentrations of up to 100 mg Fe per L. The activity of the enzymes glutathione reductase and acetylcholine esterase was not affected, nor were the proliferation, morphology or vitality of mammalian OLN-93 cells, although exposure of the cells to 100 mg Fe per L increased the cellular iron content substantially. Only at this concentration did acute toxicity tests with the freshwater flea Daphnia magna reveal slightly, yet insignificantly, increased mortality. Two fundamentally different bacterial assays, anaerobic activated sludge bacteria inhibition and a modified sediment contact test with Arthrobacter globiformis, both rendered results contrary to the other assays: at the lowest test concentration (1 mg Fe per L), IONP caused a pronounced inhibition, whereas higher concentrations were not effective or were even stimulating. Preliminary and prospective risk assessment was exemplified by comparing the application of IONP with gadolinium-based nanoparticles as MRI contrast agents. Predicted environmental concentrations were modelled in two different scenarios, showing that IONP could reduce the environmental exposure of toxic Gd-based particles by more than 50%. Application of the Swiss "Precautionary Matrix for Synthetic Nanomaterials" rendered a low precautionary need for using our IONP as MRI agents and a higher one when using them for remediation or water treatment. Since IONP and (considerably more reactive) zerovalent iron nanoparticles are being used in huge quantities for environmental remediation purposes, it has to be ascertained that these particles pose no risk to either human health or to the environment.
NASA Astrophysics Data System (ADS)
Choudhury, Sayantan; Panda, Sudhakar; Singh, Rajeev
2017-02-01
In this work, we have studied the possibility of setting up Bell's inequality violating experiment in the context of cosmology, based on the basic principles of quantum mechanics. First we start with the physical motivation of implementing the Bell inequality violation in the context of cosmology. Then to set up the cosmological Bell violating test experiment we introduce a model independent theoretical framework using which we have studied the creation of new massive particles by implementing the WKB approximation method for the scalar fluctuations in the presence of additional time-dependent mass contribution in the cosmological perturbation theory. Here for completeness we compute the total number density and the energy density of the newly created particles in terms of the Bogoliubov coefficients using the WKB approximation method. Next using the background scalar fluctuation in the presence of a new time-dependent mass contribution, we explicitly compute the expression for the one point and two point correlation functions. Furthermore, using the results for a one point function we introduce a new theoretical cosmological parameter which can be expressed in terms of the other known inflationary observables and can also be treated as a future theoretical probe to break the degeneracy amongst various models of inflation. Additionally, we also fix the scale of inflation in a model-independent way without any prior knowledge of primordial gravitational waves. Also using the input from a newly introduced cosmological parameter, we finally give a theoretical estimate for the tensor-to-scalar ratio in a model-independent way. Next, we also comment on the technicalities of measurements from isospin breaking interactions and the future prospects of newly introduced massive particles in a cosmological Bell violating test experiment. Further, we cite a precise example of this setup applicable in the context of string theory motivated axion monodromy model. Then we comment on the explicit role of the decoherence effect and high spin on cosmological Bell violating test experiment. Finally, we provide a theoretical bound on the heavy particle mass parameter for scalar fields, gravitons and other high spin fields from our proposed setup.
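For reference, once the Bogoliubov coefficients beta_k of the WKB mode functions are known, the number density and energy density of the created quanta take the standard schematic form below (normalisation and scale-factor conventions vary between references; this is not a formula copied from the paper):

n \;\simeq\; \frac{1}{(2\pi a)^{3}} \int d^{3}k\, \left|\beta_{k}\right|^{2},
\qquad
\rho \;\simeq\; \frac{1}{(2\pi a)^{3}} \int d^{3}k\, \omega_{k} \left|\beta_{k}\right|^{2},

with a the scale factor and omega_k the mode frequency that includes the additional time-dependent mass contribution.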
High Pressure Quick Disconnect Particle Impact Tests
NASA Technical Reports Server (NTRS)
Peralta, Stephen; Rosales, Keisa; Smith, Sarah R.; Stoltzfus, Joel M.
2007-01-01
To determine whether there is a particle impact ignition hazard in the quick disconnects (QDs) in the Environmental Control and Life Support System (ECLSS) on the International Space Station (ISS), NASA Johnson Space Center requested White Sands Test Facility (WSTF) to perform particle impact testing. Testing was performed from November 2006 through May 2007 and included standard supersonic and subsonic particle impact tests on 15-5 PH stainless steel, as well as tests performed on a QD simulator. This report summarizes the particle impact tests completed at WSTF. Although there was an ignition in Test Series 4, it was determined that the ignition was caused by the presence of a machining imperfection. The sum of all the test results indicates that there is no particle impact ignition hazard in the ISS ECLSS QDs.
Tao, Hui; Steel, John; Lowen, Anice C.
2015-01-01
A high particle to infectivity ratio is a feature common to many RNA viruses, with ~90–99% of particles unable to initiate a productive infection under low multiplicity conditions. A recent publication by Brooke et al. revealed that, for influenza A virus (IAV), a proportion of these seemingly non-infectious particles are in fact semi-infectious. Semi-infectious (SI) particles deliver an incomplete set of viral genes to the cell, and therefore cannot support a full cycle of replication unless complemented through co-infection. In addition to SI particles, IAV populations often contain defective-interfering (DI) particles, which actively interfere with production of infectious progeny. With the aim of understanding the significance to viral evolution of these incomplete particles, we tested the hypothesis that SI and DI particles promote diversification through reassortment. Our approach combined computational simulations with experimental determination of infection, co-infection and reassortment levels following co-inoculation of cultured cells with two distinct influenza A/Panama/2007/99 (H3N2)-based viruses. Computational results predicted enhanced reassortment at a given % infection or multiplicity of infection with increasing semi-infectious particle content. Comparison of experimental data to the model indicated that the likelihood that a given segment is missing varies among the segments and that most particles fail to deliver ≥1 segment. To verify the prediction that SI particles augment reassortment, we performed co-infections using viruses exposed to low dose UV. As expected, the introduction of semi-infectious particles with UV-induced lesions enhanced reassortment. In contrast to SI particles, inclusion of DI particles in modeled virus populations could not account for observed reassortment outcomes. DI particles were furthermore found experimentally to suppress detectable reassortment, relative to that seen with standard virus stocks, most likely by interfering with production of infectious progeny from co-infected cells. These data indicate that semi-infectious particles increase the rate of reassortment and may therefore accelerate adaptive evolution of IAV. PMID:26440404
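The role of semi-infectious particles in reassortment can be illustrated with a toy Monte Carlo in the spirit of the computational approach described above: each particle delivers each of the eight IAV segments independently with some probability, and a cell is productive only if co-infection supplies a complete genome. All probabilities and the simplified complementation rule are assumptions for illustration, not the published model.

import numpy as np

rng = np.random.default_rng(3)
N_SEG = 8   # influenza A virus genome segments

def reassorting_fraction(n_cells=20000, moi=0.5, p_deliver=0.7):
    """Fraction of productive cells in which both parent viruses contribute segments.

    p_deliver < 1 makes most particles semi-infectious (missing >= 1 segment).
    """
    reassorting = productive = 0
    for _ in range(n_cells):
        n_a, n_b = rng.poisson(moi), rng.poisson(moi)    # particles of each parent
        has_a = (rng.random((n_a, N_SEG)) < p_deliver).any(axis=0)
        has_b = (rng.random((n_b, N_SEG)) < p_deliver).any(axis=0)
        if (has_a | has_b).all():                        # complete genome -> productive
            productive += 1
            if has_a.any() and has_b.any():              # both parents contributed
                reassorting += 1
    return reassorting / max(productive, 1)

for p in (1.0, 0.9, 0.7):
    print(f"p_deliver = {p}: reassorting fraction ~ {reassorting_fraction(p_deliver=p):.3f}")

Lowering the delivery probability at fixed multiplicity reduces the number of productive cells but raises the fraction of them that required multiple parental particles, which is the qualitative trend the study attributes to semi-infectious particles.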
Solid particle erosion mechanisms of protective coatings for aerospace applications
NASA Astrophysics Data System (ADS)
Bousser, Etienne
The main objective of this PhD project is to investigate the material loss mechanisms during Solid Particle Erosion (SPE) of hard protective coatings, including nanocomposite and nanostructured systems. In addition, because of the complex nature of SPE mechanisms, rigorous testing methodologies need to be employed and the effects of all testing parameters need to be fully understood. In this PhD project, the importance of testing methodology is addressed throughout in order to effectively study the SPE mechanisms of brittle materials and coatings. In the initial stage of this thesis, we studied the effect of the addition of silicon (Si) on the microstructure, mechanical properties and, more specifically, on the SPE resistance of thick CrN-based coatings. It was found that the addition of Si significantly improved the erosion resistance and that SPE correlated with the microhardness values, i.e. the coating with the highest microhardness also had the lowest erosion rate (ER). In fact, the ERs showed a much higher dependence on the surface hardness than what has been proposed for brittle erosion mechanisms. In the first article, we study the effects of the particle properties on the SPE behavior of six brittle bulk materials using glass and alumina powders. First, we apply a robust methodology to accurately characterize the elasto-plastic and fracture properties of the studied materials. We then correlate the measured ER to materials' parameters with the help of a morphological study and an analysis of the quasi-static elasto-plastic erosion models. Finally, in order to understand the effects of impact on the particles themselves and to support the energy dissipation-based model proposed here, we study the particle size distributions of the powders before and after erosion testing. It is shown that tests using both powders lead to a material loss mechanism related to lateral fracture, that the higher than predicted velocity exponents point towards a velocity-dependent damage accumulation mechanism correlated to target yield pressure, and that damage accumulation effects are more pronounced for the softer glass powder because of kinetic energy dissipation through different means. In the second article, we study the erosion mechanisms for several hard coatings deposited by pulsed DC magnetron sputtering. We first validate a new methodology for the accurate measurement of volume loss, and we show the importance of optimizing the testing parameters in order to obtain results free from experimental artefacts. We then correlate the measured ERs to the material parameters measured by depth-sensing indentation. In order to understand the material loss mechanisms, we study three of the coating systems in greater detail with the help of fracture characterization and a morphological study of the eroded surfaces. Finally, we study the particle size distributions of the powders before and after erosion testing in an effort to understand the role of particle fracture. We demonstrate that the measured ERs of the coatings are strongly dependent on the target hardness and do not correlate with coating toughness. In fact, the material removal mechanism is found to occur through repeated ductile indentation and cutting of the surface by the impacting particles and that particle breakup is not sufficiently large to influence the results significantly. 
Studying SPE mechanisms of hard protective coating systems in detail has proven to be quite challenging in the past, given that conventional SPE testing is notoriously inaccurate due to its aggressive nature and its many methodological uncertainties. In the third article, we present a novel in situ real-time erosion testing methodology using a quartz crystal microbalance, developed in order to study the SPE process of hard protective coating systems. Using conventional mass loss SPE testing, we validate and discuss the advantages and challenges related to such a method. In addition, this time-resolved technique enables us to discuss some transient events present during SPE testing of hard coating systems leading to new insights into the erosion process. (Abstract shortened by UMI.)
Heavy particle transport in sputtering systems
NASA Astrophysics Data System (ADS)
Trieschmann, Jan
2015-09-01
This contribution aims to discuss the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi-frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for the description of the transport of film-forming particles sputtered off a target material has been devised within the frame of the OpenFOAM software (specifically dsmcFoam). The three-dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, and their deposition onto the surrounding surfaces (i.e. substrates, walls). An angular-dependent Thompson energy distribution fitted to results from Monte-Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of sputtered and background gas species can be resolved self-consistently following the direct simulation Monte-Carlo (DSMC) approach or, whenever possible, simplified based on the test particle method (TPM) with the assumption of a constant, non-stationary background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For the peculiar configuration and under typical process conditions with argon as process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one-dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
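As a small illustration of the initial conditions mentioned above, the snippet below draws ejection energies from a basic Thompson distribution, f(E) proportional to E/(E + U)^3 with surface binding energy U, using inverse-transform sampling. The binding energy value and the high-energy cutoff are placeholders, and the angular dependence fitted in the paper is not included.

import numpy as np

rng = np.random.default_rng(4)

U_B = 3.36      # surface binding energy of Al in eV -- treat as a placeholder value
E_MAX = 100.0   # ad hoc high-energy cutoff (eV); real spectra are truncated near the ion energy

def sample_thompson(n, u=U_B, e_max=E_MAX):
    """Draw sputtered-atom energies from f(E) ~ E/(E+U)^3 by inverse transform.

    The CDF is F(E) = [E/(E+U)]^2, so E = U*sqrt(r)/(1-sqrt(r)) for r ~ Uniform(0,1).
    Samples above e_max are redrawn to mimic the usual truncation of the tail.
    """
    out = np.empty(0)
    while out.size < n:
        r = np.sqrt(rng.random(n))
        e = u * r / (1.0 - r)
        out = np.concatenate([out, e[e < e_max]])
    return out[:n]

energies = sample_thompson(100000)
print(f"mean {energies.mean():.2f} eV, median {np.median(energies):.2f} eV "
      f"(peak of f(E) lies at U/2 = {U_B / 2:.2f} eV)")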
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50×2.1mm, packed with sub-2μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE based predictions. We have demonstrated that the reliability of predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the retention time relative errors were <1.0%, while the predicted critical resolution errors were comprised between 6.9 and 17.2%. Because the simulated robustness testing requires significantly less experimental work than the DoE based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, the column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2μm particles. Again, thanks to modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min) by proper adjustment of the variables. Copyright © 2013 Elsevier B.V. All rights reserved.
Rouhi Youssefi, Mehrnaz; Diez, Francisco Javier
2016-03-01
The influence of a high applied electric field on both fluid flow and particle velocities is quantified at large Peclet numbers. The experiments involved simultaneous particle image velocimetry and flow rate measurements. These were conducted in polydimethylsiloxane channels with spherical nonconducting polystyrene particles and DI water as the background flow. The high electric field tests produced electrokinetic velocities up to three orders of magnitude higher than any previous reports. The maximum electroosmotic velocity and electrophoretic velocity measured were 3.55 and 2.3 m/s, respectively. Electrophoretic velocities are measured over the range of 100 V/cm < E < 250 000 V/cm. The results are separated according to the different nonlinear theoretical models, including low and high Peclet numbers, and weak and strong concentration polarization. They show good agreement with the models. Such fast velocities could be used for flow separation, mixing, transport, control, and manipulation of suspended particles as well as microthrust generation, among other applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Testing a spin-2 mediator by angular observables in b → s μ+μ-
NASA Astrophysics Data System (ADS)
Fajfer, Svjetlana; Melić, Blaženka; Patra, Monalisa
2018-05-01
We consider the effects of a spin-2 particle in the b → s μ+μ- transition, assuming that the spin-2 particle couples in a flavor-nonuniversal way to b and s quarks and in the leptonic sector couples only to the muons, thereby contributing only to the process b → s μ+μ-. The Bs-B̄s transition gives a strong constraint on the coupling of the spin-2 mediator to b and s quarks, while the observed discrepancy from the standard model prediction for the muon anomalous magnetic moment (g − 2)μ serves to constrain the μ coupling to a spin-2 particle. We find that the spin-2 particle can modify the angular observables in the B → K μ+μ- and B → K* μ+μ- decays and produce effects that do not exist in the standard model. The generated forward-backward asymmetries in these processes can reach 15%, while other observables for these decays receive tiny effects.
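For context, the forward-backward asymmetry referred to above is conventionally defined from the lepton angular distribution as follows (a standard definition, not one specific to this paper):

A_{\mathrm{FB}}(q^{2}) \;=\;
\left[\int_{0}^{1} - \int_{-1}^{0}\right] d\cos\theta_{\ell}\;
\frac{d^{2}\Gamma}{dq^{2}\,d\cos\theta_{\ell}}
\;\Big/\; \frac{d\Gamma}{dq^{2}},

where theta_ell is the lepton helicity angle in the dilepton rest frame (sign conventions vary between analyses); new spin structures in the amplitude shift this observable away from its standard-model shape.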
A Study of the Effects of Relative Humidity on Small Particle Adhesion to Surfaces
NASA Technical Reports Server (NTRS)
Whitfield, W. J.; David, T.
1971-01-01
Ambient dust ranging in size from less than one micron up to 140 microns was used as test particles. Relative humidities of 33% to 100% were used to condition test surfaces after loading with the test particles. A 20 psi nitrogen blowoff was used as the removal mechanism to test for particle adhesion. Particles were counted before and after blowoff to determine retention characteristics. Particle adhesion increased drastically as relative humidity increased above 50%. The greatest adhesion changes occurred within the first hour of conditioning time. Data are presented for total particle adhesion, for particles 10 microns and larger, and 50 microns and larger.
Plasmapause Location: Model Compared to Van Allen Probes Observations
NASA Astrophysics Data System (ADS)
Goldstein, J.; Baker, D. N.; Blake, J. B.; Funsten, H. O.; Jaynes, A. N.; Malaspina, D.; Reeves, G. D.; Spence, H. E.; Thaller, S. A.; Wygant, J. R.
2017-12-01
We study the evolution of the plasmapause for a multi-year period (January 2013 to January 2017) spanning much of the Van Allen Probes mission, by comparing the output of a plasmapause test particle simulation with the spacecraft potential measured by the Electric Field and Waves (EFW) suite. Consistent with previous results, we quantify the accuracy of the model by measuring the radial difference between real and virtual satellite encounters with the plasmapause boundary. We find that model performance is better on the nightside and during active periods, and worse on the duskside/dayside and during extended quiet intervals. For two case studies, we compare the plasmapause with the locations of relativistic electron flux peaks. For global context we use the test particle plasmaspheric index Fp [Goldstein et al., 2016], the fraction of a circular drift orbit inside the plasmapause, as a proxy for the globally integrated opportunity for losses in cold plasma. We find an inverse relationship between relativistic flux and the Fp index, consistent with increased likelihood of losses in cold plasma.
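Given a plasmapause radius sampled in magnetic local time, an index of the kind described above (the fraction of a circular drift orbit lying inside the plasmapause) can be computed in a few lines. The synthetic plasmapause shape below is purely illustrative and is not the model output used in the study.

import numpy as np

def fp_index(L_drift, plasmapause_of_mlt):
    """Fraction of a circular drift orbit at L = L_drift lying inside the plasmapause."""
    mlt = np.linspace(0.0, 24.0, 1441)                     # sample the drift orbit in MLT
    return np.mean(L_drift <= plasmapause_of_mlt(mlt))

# Illustrative plasmapause: mean radius 4.5 RE with a bulge near dusk (MLT ~ 18).
synthetic_pp = lambda mlt: 4.5 + 1.0 * np.exp(-0.5 * ((mlt - 18.0) / 3.0) ** 2)

for L in (4.0, 5.0, 6.0):
    print(f"L = {L}: Fp = {fp_index(L, synthetic_pp):.2f}")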
An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas
NASA Astrophysics Data System (ADS)
Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio
2008-07-01
Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is reduced here thanks to the use of a limited number of non-equally spaced particles. The particle starting positions are determined by coupling forward particle tracking from the stagnation point with backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, under steady-state flow conditions, and with single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, even in complex scenarios.
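For the classical case of a single fully penetrating well in a uniform regional flow, the stagnation point used to seed the forward tracking sits a known distance downgradient of the well. The sketch below evaluates the textbook relations x_s = Q / (2πbq) and capture-zone width W = Q / (bq) with entirely hypothetical aquifer values; it is only an illustration of the geometry the algorithm exploits, not part of the APA implementation.

import math

# Hypothetical aquifer and well parameters (single well, uniform regional flow).
Q = 500.0          # pumping rate, m^3/day
b = 20.0           # aquifer thickness, m
K = 10.0           # hydraulic conductivity, m/day
i = 0.002          # regional hydraulic gradient (dimensionless)

q = K * i                          # Darcy flux (specific discharge), m/day
x_s = Q / (2.0 * math.pi * b * q)  # stagnation point distance downgradient of the well
W = Q / (b * q)                    # asymptotic width of the capture zone

print(f"Darcy flux q = {q:.4f} m/day")
print(f"stagnation point {x_s:.1f} m downgradient; capture-zone width {W:.0f} m")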
Genus Topology of Structure in the Sloan Digital Sky Survey: Model Testing
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III; Hambrick, D. Clay; Vogeley, Michael S.; Kim, Juhan; Park, Changbom; Choi, Yun-Young; Cen, Renyue; Ostriker, Jeremiah P.; Nagamine, Kentaro
2008-03-01
We measure the three-dimensional topology of large-scale structure in the Sloan Digital Sky Survey (SDSS). This allows the genus statistic to be measured with unprecedented statistical accuracy. The sample size is now sufficiently large to allow the topology to be an important tool for testing galaxy formation models. For comparison, we make mock SDSS samples using several state-of-the-art N-body simulations: the Millennium run of Springel et al. (10 billion particles), the Kim & Park CDM models (1.1 billion particles), and the Cen & Ostriker hydrodynamic code models (8.6 billion cell hydro mesh). Each of these simulations uses a different method for modeling galaxy formation. The SDSS data show a genus curve that is broadly characteristic of that produced by Gaussian random-phase initial conditions. Thus, the data strongly support the standard model of inflation, where Gaussian random-phase initial conditions are produced by random quantum fluctuations in the early universe. But on top of this general shape there are measurable differences produced by nonlinear gravitational effects and biasing connected with galaxy formation. The N-body simulations have been tuned to reproduce the power spectrum and multiplicity function but not topology, so topology is an acid test for these models. The data show a "meatball" shift (only partly due to the Sloan Great Wall of galaxies) that differs at the 2.5 σ level from the results of the Millennium run and the Kim & Park dark halo models, even including the effects of cosmic variance.
NASA Technical Reports Server (NTRS)
Palaszewski, Bryan
2005-01-01
This report presents particle formation observations and detailed analyses of the images from experiments that were conducted on the formation of solid hydrogen particles in liquid helium. Hydrogen was frozen into particles in liquid helium and observed with a video camera. The solid hydrogen particle sizes and the total mass of hydrogen particles were estimated. These newly analyzed data are from the test series held on February 28, 2001. Particle sizes from the previous testing in 1999 and the testing in 2001 were similar. Though the 2001 testing created similar particle sizes, many new particle formation phenomena were observed: microparticles and delayed particle formation. These image analyses are some of the first steps toward visually characterizing these particles, and they allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.
NASA Astrophysics Data System (ADS)
Yin, Yan; Chen, Qian; Jin, Lianji; Chen, Baojun; Zhu, Shichao; Zhang, Xiaopei
2012-11-01
A cloud resolving model coupled with a spectral bin microphysical scheme was used to investigate the effects of deep convection on the concentration and size distribution of aerosol particles within the upper troposphere. A deep convective storm that occurred on 1 December, 2005 in Darwin, Australia was simulated, and was compared with available radar observations. The results showed that the radar echo of the storm in the developing stage was well reproduced by the model. Sensitivity tests for aerosol layers at different altitudes were conducted in order to understand how the concentration and size distribution of aerosol particles within the upper troposphere can be influenced by the vertical transport of aerosols as a result of deep convection. The results indicated that aerosols originating from the boundary layer can be more efficiently transported upward, as compared to those from the mid-troposphere, due to significantly increased vertical velocity through the reinforced homogeneous freezing of droplets. Precipitation increased when aerosol layers were lofted at different altitudes, except for the case where an aerosol layer appeared at 5.4-8.0 km, in which relatively more efficient heterogeneous ice nucleation and subsequent Wegener-Bergeron-Findeisen process resulted in more pronounced production of ice crystals, and prohibited the formation of graupel particles via accretion. Sensitivity tests revealed, at least for the cases considered, that the concentration of aerosol particles within the upper troposphere increased by a factor of 7.71, 5.36, and 5.16, respectively, when enhanced aerosol layers existed at 0-2.2 km, 2.2-5.4 km, and 5.4-8.0 km, with Aitken mode and a portion of accumulation mode (0.1-0.2μm) particles being the most susceptible to upward transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohno, Kazumasa; Okuzumi, Satoshi
A number of transiting exoplanets have featureless transmission spectra that might suggest the presence of clouds at high altitudes. A realistic cloud model is necessary to understand the atmospheric conditions under which such high-altitude clouds can form. In this study, we present a new cloud model that takes into account the microphysics of both condensation and coalescence. Our model provides the vertical profiles of the size and density of cloud and rain particles in an updraft for a given set of physical parameters, including the updraft velocity and the number density of cloud condensation nuclei (CCNs). We test our model by comparing with observations of trade-wind cumuli on Earth and ammonia ice clouds in Jupiter. For trade-wind cumuli, the model including both condensation and coalescence gives predictions that are consistent with observations, while the model including only condensation overestimates the mass density of cloud droplets by up to an order of magnitude. For Jovian ammonia clouds, the condensation–coalescence model simultaneously reproduces the effective particle radius, cloud optical thickness, and cloud geometric thickness inferred from Voyager observations if the updraft velocity and CCN number density are taken to be consistent with the results of moist convection simulations and Galileo probe measurements, respectively. These results suggest that the coalescence of condensate particles is important not only in terrestrial water clouds but also in Jovian ice clouds. Our model will be useful to understand how the dynamics, compositions, and nucleation processes in exoplanetary atmospheres affect the vertical extent and optical thickness of exoplanetary clouds via cloud microphysics.
Informing Selection of Nanomaterial Concentrations for ...
Little justification is generally provided for the selection of in vitro assay testing concentrations for engineered nanomaterials (ENMs). Selection of concentration levels for hazard evaluation based on real-world exposure scenarios is desirable. We reviewed published ENM concentrations measured in air in manufacturing and R&D labs to identify input levels for estimating ENM mass retained in the human lung using the Multiple-Path Particle Dosimetry (MPPD) model. Model input parameters were individually varied to estimate alveolar mass retained for different particle sizes (5-1000 nm), aerosol concentrations (0.1, 1 mg/m3), aspect ratios (2, 4, 10, 167), and exposure durations (24 hours and a working lifetime). The calculated lung surface concentrations were then converted to in vitro solution concentrations. Modeled alveolar mass retained after 24 hours is most affected by activity level and aerosol concentration. Alveolar retention for Ag and TiO2 nanoparticles and CNTs for a working lifetime (45 years) exposure duration is similar to high-end concentrations (~ 30-400 μg/mL) typical of in vitro testing reported in the literature. The analyses performed are generally applicable for providing ENM testing concentrations for in vitro hazard screening studies, though further research is needed to improve the approach. Understanding the relationship between potential real-world exposures and in vitro test concentrations will facilitate interpretation of toxicological results.
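The final conversion step described above, turning a modeled retained lung mass into an equivalent in vitro solution concentration, amounts to matching mass per unit surface area between the alveolar region and a culture well. The sketch below shows that arithmetic with entirely hypothetical numbers; none of the values are taken from the MPPD analysis.

# Convert a modeled alveolar retained mass to an equivalent in vitro concentration
# by matching mass per unit surface area (all numbers below are hypothetical).
retained_mass_ug = 500.0        # ENM mass retained in the alveolar region, ug
alveolar_area_cm2 = 1.0e6       # assumed alveolar surface area, cm^2
well_area_cm2 = 0.32            # growth area of one 96-well plate well, assumed
media_volume_ml = 0.1           # culture medium volume per well, assumed

surface_dose = retained_mass_ug / alveolar_area_cm2             # ug/cm^2 in the lung
in_vitro_conc = surface_dose * well_area_cm2 / media_volume_ml  # ug/mL in the well

print(f"surface dose {surface_dose:.2e} ug/cm^2 -> ~{in_vitro_conc:.2e} ug/mL in vitro")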
The Use of Tooth Particles as a Biomaterial in Post-Extraction Sockets. Experimental Study in Dogs.
Calvo-Guirado, José Luis; Maté-Sánchez de Val, José Eduardo; Ramos-Oltra, María Luisa; Pérez-Albacete Martínez, Carlos; Ramírez-Fernández, María Piedad; Maiquez-Gosálvez, Manuel; Gehrke, Sergio A; Fernández-Domínguez, Manuel; Romanos, Georgios E; Delgado-Ruiz, Rafael Arcesio
2018-05-06
Objectives: The objective of this study was to evaluate new bone formation derived from freshly crushed extracted teeth, grafted immediately into post-extraction sites in an animal model, compared with sites left without graft filling, evaluated at 30 and 90 days. Material and Methods: The bilateral premolars P2, P3, P4 and the first mandibular molar were extracted atraumatically from six Beagle dogs. The clean, dry teeth were ground immediately using the Smart Dentin Grinder. The tooth particles obtained were subsequently sieved through a special sorting filter into two compartments: the upper compartment retaining particles over 1200 μm and the lower compartment retaining particles over 300 μm. The crushed teeth were grafted into the post-extraction sockets at P3, P4 and M1 (test group; larger and smaller post-extraction alveoli), while P2 sites were left unfilled and acted as the control group. Tissue healing and bone formation were evaluated by histological and histomorphometric analysis after 30 and 90 days. Results: At 30 days, bone formation was greater in the test group than in the control group (p < 0.05), and less immature bone was observed in the test group (25.71%) than in the control group (55.98%). At 90 days, bone formation remained significantly greater in the test group than in the control group. No significant differences in new bone formation were found when comparing the small and large post-extraction alveoli. Conclusions: Tooth particles obtained from dogs' teeth and grafted immediately after extraction can be considered a suitable biomaterial for socket preservation.
NASA Technical Reports Server (NTRS)
Smith, J. L.
1983-01-01
In an effort to develop a low-cost, simplified optical technique for measuring particle size distributions and velocities in fluidized bed combustors and gasifiers, existing techniques were surveyed, an experimental procedure was developed, a laboratory test model was fabricated, limited proof-of-principle data were recovered, and the relationship between particle size distribution and amplitude measurements was illustrated. A He-Ne laser illuminated Ronchi rulings (10 to 500 lines per inch). Various samples of known particle size distributions were passed through the fringe pattern produced by the rulings. A photomultiplier tube converted light from the fringe volume to an electrical signal, which was recorded using an oscilloscope and camera. The signal amplitudes were correlated against the known particle size distributions, and the correlation held for the various samples tested.
Innermost stable circular orbit of spinning particle in charged spinning black hole background
NASA Astrophysics Data System (ADS)
Zhang, Yu-Peng; Wei, Shao-Wen; Guo, Wen-Di; Sui, Tao-Tao; Liu, Yu-Xiao
2018-04-01
In this paper we investigate the innermost stable circular orbit (ISCO), for spin-aligned or anti-aligned orbits, of a classical spinning test particle in the pole-dipole approximation in the equatorial plane of a Kerr-Newman black hole. It is shown that the orbit of the spinning particle depends on the spin of the test particle. The motion of the spinning test particle becomes superluminal if its spin is too large, so we impose an additional condition on the ISCO from the superluminal constraint in these black hole backgrounds. We obtain numerically the relations between the ISCO and the properties of the black hole and the test particle. It is found that the radius of the ISCO for a spinning test particle is smaller than that of a nonspinning test particle in the same black hole background.
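For context, the familiar closed-form ISCO radius for a nonspinning test particle around a Kerr black hole (the Q = 0, zero-particle-spin limit of the configurations studied above) can be evaluated directly. The sketch below uses the standard Bardeen-Press-Teukolsky expression and serves only as a baseline; it is not the spinning-particle result of the paper.

```python
# Baseline: ISCO radius for a NONspinning test particle around a Kerr black hole
# (Bardeen, Press & Teukolsky 1972), in units of G = c = M = 1.
import math

def kerr_isco_radius(a, prograde=True):
    """ISCO radius r/M for dimensionless black-hole spin a in [-1, 1]."""
    z1 = 1.0 + (1.0 - a*a) ** (1.0/3.0) * ((1.0 + a) ** (1.0/3.0) + (1.0 - a) ** (1.0/3.0))
    z2 = math.sqrt(3.0 * a*a + z1*z1)
    sign = -1.0 if prograde else +1.0
    return 3.0 + z2 + sign * math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

print(kerr_isco_radius(0.0))          # 6.0  (Schwarzschild limit)
print(kerr_isco_radius(0.9))          # ~2.32 (prograde)
print(kerr_isco_radius(0.9, False))   # ~8.72 (retrograde)
```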
Particle-based solid for nonsmooth multidomain dynamics
NASA Astrophysics Data System (ADS)
Nordberg, John; Servin, Martin
2018-04-01
A method for simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. Each particle's strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits into a unified framework for nonsmooth multidomain dynamics simulations, including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative, perfectly plastic, modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on deformable terrain is demonstrated.
Smoothed particle hydrodynamics method from a large eddy simulation perspective
NASA Astrophysics Data System (ADS)
Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.
2017-03-01
The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
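To make the "smoothing as filtering" analogy concrete, the sketch below shows the basic SPH kernel-summation step with a standard cubic-spline kernel. It is a generic SPH illustration, not the authors' LES-SPH closure, and the particle spacing and smoothing length are arbitrary choices.

```python
# Minimal illustration of the SPH smoothing (kernel interpolation) step that the
# paper reinterprets as a Lagrangian LES filter: a 1D cubic-spline kernel and a
# plain density summation over neighbours.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline SPH kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalisation
    return sigma * np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
                    np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))

def sph_density(x, m, h):
    """Density at each particle via summation over all pairs."""
    dx = x[:, None] - x[None, :]                 # pairwise separations
    return (m[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)

# Uniformly spaced unit-density particles should recover a nearly constant density.
x = np.linspace(0.0, 1.0, 101)
m = np.full_like(x, 1.0 / 100.0)
rho = sph_density(x, m, h=2.0 * (x[1] - x[0]))
print(rho[40:60].round(4))   # ~1.0 in the interior
```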
Testing the Muon g-2 Anomaly at the LHC
Freitas, Ayres; Lykken, Joseph; Kell, Stefan; ...
2014-05-29
The long-standing difference between the experimental measurement and the standard-model prediction for the muon's anomalous magnetic moment, $a_\mu = (g_\mu - 2)/2$, may be explained by the presence of new weakly interacting particles with masses of a few hundred GeV. Particles of this kind can generally be directly produced at the LHC, and thus they may already be constrained by existing data. In this work, we investigate this connection between $a_\mu$ and the LHC in a model-independent approach, by introducing one or two new fields beyond the standard model with spin and weak isospin up to one. For each case, we identify the preferred parameter space for explaining the discrepancy in $a_\mu$ and derive bounds using data from LEP and the 8-TeV LHC run. Furthermore, we estimate how these limits could be improved with the 14-TeV LHC. We find that the 8-TeV results already rule out a subset of our simplified models, while almost all viable scenarios can be tested conclusively with 14-TeV data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letaw, J.R.; Adams, J.H.
The galactic cosmic radiation (GCR) component of space radiation is the dominant cause of single-event phenomena in microelectronic circuits when Earth's magnetic shielding is low. Spaceflights outside the magnetosphere and in high-inclination orbits are examples of such circumstances. In high-inclination orbits, low-energy (high LET) particles are transmitted through the field only at extreme latitudes, but can dominate the orbit-averaged dose. GCR is an important part of the radiation dose to astronauts under the same conditions. As a test of the CREME environmental model and particle transport codes used to estimate single event upsets, we have compiled existing measurements of HZE doses from missions where GCR is expected to be important: Apollo 16 and 17, Skylab, the Apollo-Soyuz Test Project, and Kosmos 782. The LET spectra due to direct ionization from GCR have been estimated for each of these missions. The resulting comparisons with data validate the CREME model predictions of high-LET galactic cosmic-ray fluxes to within a factor of two. Some systematic differences between the model and data are identified.
Stability of volcanic ash aggregates and break-up processes.
Mueller, Sebastian B; Kueppers, Ulrich; Ametsbichler, Jonathan; Cimarelli, Corrado; Merrison, Jonathan P; Poret, Matthieu; Wadsworth, Fabian B; Dingwell, Donald B
2017-08-07
Numerical modeling of ash plume dispersal is an important tool for forecasting and mitigating potential hazards from volcanic ash erupted during explosive volcanism. Recent tephra dispersal models have been expanded to account for dynamic ash aggregation processes. However, there are very few studies on rates of disaggregation during transport. It follows that current models regard ash aggregation as irrevocable and may therefore overestimate aggregation-enhanced sedimentation. In this experimental study, we use industrial granulation techniques to artificially produce aggregates. We subject these to impact tests and evaluate their resistance to break-up processes. We find a dependence of aggregate stability on primary particle size distribution and solid particle binder concentration. We posit that our findings could be combined with eruption source parameters and implemented in future tephra dispersal models.
Model of Fluidized Bed Containing Reacting Solids and Gases
NASA Technical Reports Server (NTRS)
Bellan, Josette; Lathouwers, Danny
2003-01-01
A mathematical model has been developed for describing the thermofluid dynamics of a dense, chemically reacting mixture of solid particles and gases. As used here, "dense" signifies having a large volume fraction of particles, as for example in a bubbling fluidized bed. The model is intended especially for application to fluidized beds that contain mixtures of carrier gases, biomass undergoing pyrolysis, and sand. So far, the design of fluidized beds and other gas/solid industrial processing equipment has been based on empirical correlations derived from laboratory- and pilot-scale units. The present mathematical model is a product of continuing efforts to develop a computational capability for optimizing the designs of fluidized beds and related equipment on the basis of first principles. Such a capability could eliminate the need for expensive, time-consuming predesign testing.
Recent Advances in the LEWICE Icing Model
NASA Technical Reports Server (NTRS)
Wright, William B.; Addy, Gene; Struk, Peter; Bartkus, Tadas
2015-01-01
This paper will describe two recent modifications to the Glenn ICE software. First, a capability for modeling ice crystals and mixed-phase icing has been modified based on recent experimental data, including changes to the ice particle bouncing and erosion model. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to ice-crystal ice accretion tests performed in the NRC Research Altitude Test Facility (RATFac). Second, modifications were made to the runback model based on data and observations from thermal scaling tests performed in the NRC Altitude Icing Tunnel.
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs, and particle count by a count of imaged particles. The number of particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test whether image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy of particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software for the analysis, it was found that a median illuminated pixel area of 4 pixels² results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count.
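A minimal sketch of the image attribute in question, the median connected illuminated pixel area of particle spots in one frame, is given below. The threshold, synthetic frame, and spot sizes are illustrative assumptions, not the authors' processing pipeline.

```python
# Compute the median connected-component area of bright spots in a single frame,
# the kind of image attribute used above as a quality check for NTA counting.
import numpy as np
from scipy import ndimage

def median_spot_area(frame, threshold):
    """Median connected-component area (in pixels) of above-threshold spots."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0.0
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return float(np.median(areas))

# Synthetic frame: dim background noise plus a few ~4-pixel particle images.
rng = np.random.default_rng(0)
frame = rng.normal(10.0, 2.0, size=(256, 256))
for y, x in [(40, 50), (120, 200), (200, 80)]:
    frame[y:y+2, x:x+2] += 100.0
print(median_spot_area(frame, threshold=50.0))   # expected ~4
```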
2013 R&D 100 Award: DNATrax could revolutionize air quality detection and tracking
Farquar, George
2018-01-16
A team of LLNL scientists and engineers has developed a safe and versatile material, known as DNA Tagged Reagents for Aerosol Experiments (DNATrax), that can be used to reliably and rapidly diagnose airflow patterns and problems in both indoor and outdoor venues. Until DNATrax particles were developed, no rapid or safe way existed to validate air transport models with realistic particles in the range of 1-10 microns. Successful DNATrax testing was conducted at the Pentagon in November 2012 in conjunction with the Pentagon Force Protection Agency. This study enhanced the team's understanding of indoor ventilation environments created by heating, ventilation and air conditioning (HVAC) systems. DNATrax are particles comprised of sugar and synthetic DNA that serve as a bar code for the particle. The potential for creating unique bar-coded particles is virtually unlimited, thus allowing for simultaneous and repeated releases, which dramatically reduces the costs associated with conducting tests for contaminants. Among the applications for the new material are indoor air quality detection, for homes, offices, ships and airplanes; urban particulate tracking, for subway stations, train stations, and convention centers; environmental release tracking; and oil and gas uses, including fracking, to better track fluid flow.
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on the GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously were of the same type and close in energy, alleviating GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. A passing rate of over 98.5% was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrated the efficacy of our model in terms of accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.
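The thread-divergence argument can be illustrated with a toy version of ring-by-ring sampling: all particles drawn in one batch share a type and a narrow energy range, so simultaneously transported particles follow similar code paths. The PSR table and Gaussian parameters below are invented placeholders, not values from the paper's commissioned model.

```python
# Sketch of the "sample one phase-space ring at a time" idea. The ring table and
# direction spreads are made-up placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PSR table: (particle type, mean energy [MeV], direction sigma [rad])
psr_table = [
    ("photon_primary", 6.0, 0.01),
    ("photon_scatter", 1.5, 0.12),
    ("electron",       0.8, 0.25),
]

def sample_ring(ring_id, n):
    """Sample n particles from one phase-space ring (same type, similar energy)."""
    ptype, e_mean, sigma = psr_table[ring_id]
    energy = rng.normal(e_mean, 0.05 * e_mean, n)       # narrow energy spread
    theta_x, theta_y = rng.normal(0.0, sigma, (2, n))   # 2D Gaussian directions
    return dict(ring=ring_id, type=ptype, E=energy, tx=theta_x, ty=theta_y)

# Draw ring-by-ring (pre-grouped batches) instead of mixing types within a batch.
batches = [sample_ring(i, 4096) for i in range(len(psr_table))]
for b in batches:
    print(b["type"], b["E"].mean().round(2), b["tx"].std().round(3))
```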
PIV Measurements of the CEV Hot Abort Motor Plume for CFD Validation
NASA Technical Reports Server (NTRS)
Wernet, Mark; Wolter, John D.; Locke, Randy; Wroblewski, Adam; Childs, Robert; Nelson, Andrea
2010-01-01
NASA's next manned launch platforms for missions to the Moon and Mars are the Orion and Ares systems. Many critical aspects of the launch system performance are being verified using computational fluid dynamics (CFD) predictions. The Orion Launch Abort Vehicle (LAV) consists of a tower-mounted tractor rocket tasked with carrying the Crew Module (CM) safely away from the launch vehicle in the event of a catastrophic failure during the vehicle's ascent. Some of the predictions involving the launch abort system flow fields produced conflicting results, which required further investigation through ground test experiments. Ground tests were performed to acquire data from a hot supersonic jet in cross-flow for the purpose of validating CFD turbulence modeling relevant to the LAV. Both 2-component axial-plane Particle Image Velocimetry (PIV) and 3-component cross-stream Stereo Particle Image Velocimetry (SPIV) measurements were obtained on a model of an Abort Motor (AM). Actual flight conditions could not be simulated on the ground, so the highest temperature and pressure conditions that could be safely used in the test facility (nozzle pressure ratio 28.5 and a nozzle temperature ratio of 3) were used for the validation tests. These conditions are significantly different from those of the flight vehicle, but were sufficiently high to begin addressing the turbulence modeling issues that motivated the validation tests.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques with a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month, and 1-year Treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
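A compact sketch of the weight-initialization step, particle swarm optimization searching for good starting weights of a small feedforward network on toy data, is shown below. The data, network size, and PSO hyperparameters are illustrative assumptions rather than the configuration used in the study.

```python
# PSO selecting initial weights of a tiny feedforward network on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # toy predictors
y = np.tanh(X @ np.array([0.5, -0.3, 0.2, 0.1]))    # toy next-day variation target

def unpack(w, n_in=4, n_hid=6):
    W1 = w[:n_in*n_hid].reshape(n_in, n_hid)
    b1 = w[n_in*n_hid:n_in*n_hid + n_hid]
    W2 = w[n_in*n_hid + n_hid:n_in*n_hid + 2*n_hid]
    b2 = w[-1]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

dim = 4*6 + 6 + 6 + 1
n_particles, iters = 30, 200
pos = rng.normal(scale=0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("PSO-selected initial-weight MSE:", mse(gbest).round(4))
```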
Monte Carlo simulations of ionization potential depression in dense plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stransky, M., E-mail: stransky@fzu.cz
A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated to be independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers, as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
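For orientation, the two limiting analytic estimates referred to above can be evaluated in a few lines. The expressions below are the commonly quoted Debye-Hückel and Ecker-Kröll forms, stated as a baseline only; the plasma conditions are illustrative, not the simulated cases from the paper.

```python
# Rough comparison of Debye-Hueckel (low-density) and Ecker-Kroell (high-density)
# ionization potential depression estimates for an ion of charge z.
import numpy as np
from scipy.constants import e, epsilon_0, k, pi

def debye_huckel_ipd(z, n_e_m3, n_i_m3, z_bar, T_K):
    """IPD (eV) from the Debye-Hueckel screening length."""
    lam_d = np.sqrt(epsilon_0 * k * T_K / (e**2 * (n_e_m3 + z_bar**2 * n_i_m3)))
    return (z + 1) * e / (4 * pi * epsilon_0 * lam_d)   # in eV (one factor of e cancelled)

def ecker_kroell_ipd(z, n_e_m3, n_i_m3):
    """IPD (eV) from the Ecker-Kroell critical radius."""
    r_ek = (3.0 / (4.0 * pi * (n_e_m3 + n_i_m3))) ** (1.0 / 3.0)
    return (z + 1) * e / (4 * pi * epsilon_0 * r_ek)

# Illustrative solid-density aluminium-like conditions: n_i ~ 6e28 m^-3, Zbar ~ 3.
n_i, z_bar, T = 6.0e28, 3.0, 1.0e5
n_e = z_bar * n_i
for z in range(4):
    print(z, round(debye_huckel_ipd(z, n_e, n_i, z_bar, T), 1),
             round(ecker_kroell_ipd(z, n_e, n_i), 1))
```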
Non-linear modeling of RF in fusion grade plasmas
NASA Astrophysics Data System (ADS)
Austin, Travis; Smithe, David; Hakim, Ammar; Jenkins, Thomas
2011-10-01
We are seeking to model nonlinear effects, particularly parametric decay instability (PDI) in the vicinity of the edge plasma and RF launchers, which is thought to be a potential parasitic loss mechanism. We will use time-domain approaches which treat the full spectrum of modes. Two approaches are being tested for feasibility: a non-linear delta-f particle approach and a higher-order many-fluid closure approach. Our particle approach builds on extensive previous work demonstrating the ability to model IBW waves (one of the PDI daughter waves) with a linear delta-f particle model. Here we report on the performance of such simulations when the linear constraint is relaxed, and in particular on the ability of the low-noise loading scheme, specially developed for RF and ion-time-scale physics, to operate and maintain low noise in the non-linear regime. Similarly, a novel high-order closure of the fluid equations is necessary to model the IBW and higher harmonics. We will report on the benchmarking of the fluid closure and its ability to model the anticipated pump and daughter waves in a PDI scenario. This research is supported by US DOE Grant # DE-SC0006242.
Neural Networks for Modeling and Control of Particle Accelerators
NASA Astrophysics Data System (ADS)
Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.
2016-04-01
Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Oftentimes, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
2017-01-01
Conductive polymer composites are manufactured by randomly dispersing conductive particles throughout an insulating polymer matrix. Several authors have attempted to model the piezoresistive response of conductive polymer composites. However, all the proposed models rely upon experimental measurements of the electrical resistance at the rest state. Similarly, the models available in the literature assume a voltage-independent resistance and a stress-independent area for tunneling conduction. With the aim of developing and validating a more comprehensive model, a test bench capable of exerting controlled forces has been developed. Commercially available sensors, which are manufactured from conductive polymer composites, have been tested at different voltages and stresses, and a model has been derived on the basis of equations for quantum tunneling conduction through thin insulating film layers. The contribution of the contact resistance has been included in the model together with the resistance of the conductive particles. The proposed model embraces a voltage-dependent behavior for the composite resistance and a stress-dependent behavior for the tunneling conduction area. The proposed model is capable of predicting sensor current based upon information from the sourcing voltage and the applied stress. This study uses a physical (non-phenomenological) approach for all the phenomena discussed here. PMID:28906467
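The structure of such a model (contact resistance plus particle resistance plus a voltage- and stress-dependent tunneling term) can be sketched as follows. The functional forms and coefficients are hypothetical placeholders for illustration and are not the fitted model reported in the study.

```python
# Illustrative (hypothetical) piezoresistive sensor model: total resistance is a
# contact term, a particle term, and a tunneling term whose conduction area grows
# with stress and whose conductance grows with voltage. All values are placeholders.
def sensor_current(voltage, stress,
                   r_contact=50.0,         # ohm, assumed contact resistance
                   r_particles=20.0,       # ohm, assumed bulk particle resistance
                   a0=1e-9, k_area=5e-10,  # assumed tunneling area vs stress
                   g0=2e3, alpha=0.05):    # assumed tunneling conductance terms
    """Current through the composite for a given sourcing voltage and stress."""
    area = a0 + k_area * stress                       # stress-dependent tunneling area
    g_tunnel = g0 * area * (1.0 + alpha * voltage)    # crude voltage dependence
    r_total = r_contact + r_particles + 1.0 / g_tunnel
    return voltage / r_total

for v in (1.0, 5.0, 10.0):
    print(v, [round(sensor_current(v, s), 4) for s in (0.0, 1e5, 5e5)])
```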
NASA Astrophysics Data System (ADS)
Wang, Peitao; Cai, Meifeng; Ren, Fenhua; Li, Changhong; Yang, Tianhong
2017-07-01
This paper develops a numerical approach to determine the mechanical behavior of discrete fracture network (DFN) models based on a digital image processing technique and the particle flow code PFC2D. A series of direct shear tests of jointed rocks was numerically performed to study the effect of normal stress, friction coefficient, and joint bond strength on the mechanical behavior of jointed rock, and to evaluate the influence of micro-parameters on the shear properties of jointed rocks using the proposed approach. The complete shear stress-displacement curve of the DFN model under direct shear was presented to evaluate the failure processes of jointed rock. The results show that the peak and residual strengths are sensitive to normal stress. A higher normal stress has a greater effect on the initiation and propagation of cracks. Additionally, an increase in the bond strength ratio results in an increase in the number of both shear and normal cracks. The friction coefficient was also found to have a significant influence on the shear strength and shear cracks: increasing the friction coefficient reduced the initiation of normal cracks. The unique contribution of this paper is the proposed modeling technique for simulating the mechanical behavior of a jointed rock mass based on particle mechanics approaches.
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background: Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results: In this paper, we consider Illumina's sequencing-by-synthesis platform, which relies on reversible terminator chemistry, and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina's Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions: The proposed ParticleCall provides more accurate calls than Illumina's base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
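The sequential Monte Carlo machinery underlying ParticleCall can be illustrated with a generic bootstrap particle filter on a toy state-space model. The linear-Gaussian model below is a stand-in for demonstration, not the Illumina signal model from the paper.

```python
# Generic bootstrap particle filter (sequential Monte Carlo) on a toy
# linear-Gaussian state-space model.
import numpy as np

rng = np.random.default_rng(42)

# Toy model: x_t = 0.9 x_{t-1} + process noise, y_t = x_t + observation noise.
T, n_particles = 50, 500
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t-1] + rng.normal(0, 0.5)
    y[t] = x_true[t] + rng.normal(0, 0.8)

particles = rng.normal(0, 1, n_particles)
estimates = []
for t in range(T):
    # Propagate through the state transition (proposal = prior).
    particles = 0.9 * particles + rng.normal(0, 0.5, n_particles)
    # Weight by the observation likelihood.
    w = np.exp(-0.5 * ((y[t] - particles) / 0.8) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Multinomial resampling to avoid weight degeneracy.
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

rms = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print("RMS tracking error:", rms.round(3))
```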
Observations and modeling of wave-induced microburst electron precipitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, T.J.; Wei, R.; Detrick, D.L.
1990-05-01
Energy-time features of X ray microbursts are examined and compared with the predictions of a test particle simulation model of wave-induced electron precipitation resulting from gyroresonant wave-particle interactions in the magnetosphere. An algorithm designed to search the E > 25 keV counting rate data for single isolated microbursts identified 651 events in a 3-hr interval. The distribution of burst durations ranged from 0.2 to 1.2 s. Approximately two-thirds of the distribution were narrow bursts (0.2-0.6 s), the rest wide (0.6-1.2 s), with average burst durations of approximately 0.4 s and 0.7 s, respectively, for the two classes. The precipitation was characterized by exponential electron spectra with e-folding energies Eo of 25-50 keV. Individual and superposed microburst profiles show that the X ray energy spectrum is softest near the peak of the energy influx. Computer simulations of the flux- and energy-time profiles of direct and mirrored electron precipitation induced by a whistler-mode wave pulse of 0.2-s duration and linear frequency increase from 2 to 4 kHz were performed for plasma, energetic particle, and wave parameters appropriate for the location and geophysical conditions of the observations. In general, the results provide further support for the gyroresonant test particle simulation model, and for the belief that the observed type of microbursts originates in the vicinity of the magnetic equator in a gyroresonant process involving discrete chorus emissions.
NASA Astrophysics Data System (ADS)
Shizgal, Bernie D.
2018-05-01
This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as empirical fitting functions that describe well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
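The target functional form, the isotropic Kappa distribution that these steady states reduce to, can be written down and normalized directly. The sketch below uses the standard three-dimensional form with gamma-function normalization and illustrative parameters; theta is a characteristic speed, not tied here to a specific temperature definition.

```python
# Isotropic 3D Kappa velocity distribution f(v) ~ [1 + v^2/(kappa*theta^2)]^(-(kappa+1)),
# normalised so that its integral over velocity space is 1.
import numpy as np
from scipy.special import gamma

def kappa_distribution(v, theta, kappa):
    """Normalised 3D isotropic Kappa distribution evaluated at speed(s) v."""
    norm = (1.0 / (np.pi * kappa * theta**2) ** 1.5
            * gamma(kappa + 1.0) / gamma(kappa - 0.5))
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

v = np.linspace(0.0, 5.0, 6)
print(kappa_distribution(v, theta=1.0, kappa=3.0).round(4))
# As kappa -> infinity the distribution approaches a Maxwellian, with theta
# playing the role of the thermal speed.
```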