Rengasamy, Samy; Miller, Adam; Eimer, Benjamin C
2011-01-01
N95 particulate filtering facepiece respirators are certified by measuring penetration levels photometrically, using a presumed severe-case test method with charge-neutralized NaCl aerosols at 85 L/min. However, penetration values obtained by photometric methods have not been compared with count-based methods using contemporary respirators composed of electrostatic filter media and challenged with both generated and ambient aerosols. To better understand the effects of key test parameters (e.g., particle charge, detection method), initial penetration levels for five N95 filtering facepiece respirator models were measured using NaCl aerosols with the aerosol challenge and test equipment employed in the NIOSH respirator certification method (photometric) and compared with an ultrafine condensation particle counter method (count-based) for the same NaCl aerosols as well as for ambient room air particles. Penetrations using the NIOSH test method were several-fold lower than those obtained by the ultrafine condensation particle counter for NaCl aerosols as well as for room particles, indicating that penetration measurement based on particle counting offers a more difficult challenge than the photometric method, which lacks sensitivity for particles < 100 nm. All five N95 models showed the most penetrating particle size at around 50 nm for room air particles with or without charge neutralization, and at around 200 nm for singly charged monodisperse NaCl particles. Room air, with fewer charged particles and an overwhelming number of neutral particles, contributed to the most penetrating particle size in the 50 nm range, indicating that the charge state of the majority of test particles determines the MPPS. The data suggest that the NIOSH respirator certification protocol employing the photometric method may not be the more challenging aerosol test method.
Filter penetrations can vary among workplaces with different particle size distributions, which suggests the need for the development of new or revised "more challenging" aerosol test methods for NIOSH certification of respirators.
NASA Technical Reports Server (NTRS)
Jordan, F. L., Jr.
1980-01-01
As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially-applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically-scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results that indicate improved control of chemical drift by use of winglets are presented to demonstrate test methods.
Symplectic test particle encounters: a comparison of methods
NASA Astrophysics Data System (ADS)
Wisdom, Jack
2017-01-01
A new symplectic method for handling encounters of test particles with massive bodies is presented. The new method is compared with several popular methods (RMVS3, SYMBA, and MERCURY). The new method compares favourably.
Wootton, Roy E.
1979-01-01
A method of testing a gas insulated system for the presence of conducting particles. The method includes inserting a gaseous mixture comprising about 98 volume percent nitrogen and about 2 volume percent sulfur hexafluoride into the gas insulated system at a pressure greater than 60 lb./sq. in. gauge, and then applying a test voltage to the system. If particles are present within the system, the gaseous mixture will break down, providing an indicator of the presence of the particles.
Rengasamy, Samy; Eimer, Benjamin C
2012-01-01
National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm size. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. 
Results suggest that to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number of particles <100 nm and a count (particle number)-based detector.
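The count-based penetration the two studies above compare is, at its core, a ratio of downstream to upstream particle counts. As a minimal illustration (the function name and interface are our own, not part of the NIOSH protocol):

```python
def penetration_percent(upstream_count, downstream_count):
    """Count-based filter penetration: the fraction of challenge
    particles that pass through the filter, as a percentage."""
    if upstream_count <= 0:
        raise ValueError("upstream count must be positive")
    return 100.0 * downstream_count / upstream_count
```

By this measure, an N95 filter must show no more than 5% penetration under the certification challenge; the photometric method instead compares mass-weighted light-scattering signals, which is why the two detectors can disagree for sub-100 nm particles.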
Billi, Fabrizio; Benya, Paul; Kavanaugh, Aaron; Adams, John; Ebramzadeh, Edward; McKellop, Harry
2012-02-01
Numerous studies indicate highly crosslinked polyethylenes reduce the wear debris volume generated by hip arthroplasty acetabular liners. This, in turn, requires new methods to isolate and characterize them. We describe a method for extracting polyethylene wear particles from bovine serum typically used in wear tests and for characterizing their size, distribution, and morphology. Serum proteins were completely digested using an optimized enzymatic digestion method that prevented the loss of the smallest particles and minimized their clumping. Density-gradient ultracentrifugation was designed to remove contaminants and recover the particles without filtration, depositing them directly onto a silicon wafer. This provided uniform distribution of the particles and high contrast against the background, facilitating accurate, automated, morphometric image analysis. The accuracy and precision of the new protocol were assessed by recovering and characterizing particles from wear tests of three types of polyethylene acetabular cups (no crosslinking and 5 Mrads and 7.5 Mrads of gamma irradiation crosslinking). The new method demonstrated important differences in the particle size distributions and morphologic parameters among the three types of polyethylene that could not be detected using prior isolation methods. The new protocol overcomes a number of limitations, such as loss of nanometer-sized particles and artifactual clumping, among others. The analysis of polyethylene wear particles produced in joint simulator wear tests of prosthetic joints is a key tool to identify the wear mechanisms that produce the particles and to predict and evaluate their effects on periprosthetic tissues.
Real-time detection method and system for identifying individual aerosol particles
Gard, Eric E [San Francisco, CA; Coffee, Keith R [Patterson, CA; Frank, Matthias [Oakland, CA; Tobias, Herbert J [Kensington, CA; Fergenson, David P [Alamo, CA; Madden, Norm [Livermore, CA; Riot, Vincent J [Berkeley, CA; Steele, Paul T [Livermore, CA; Woods, Bruce W [Livermore, CA
2007-08-21
An improved method and system of identifying individual aerosol particles in real time. Sample aerosol particles are collimated, tracked, and screened to determine which ones qualify for mass spectrometric analysis based on predetermined qualification or selection criteria. Screening techniques include one or more of determining particle size, shape, symmetry, and fluorescence. Only qualifying particles passing all screening criteria are subject to desorption/ionization and single particle mass spectrometry to produce corresponding test spectra, which is used to determine the identities of each of the qualifying aerosol particles by comparing the test spectra against predetermined spectra for known particle types. In this manner, activation cycling of a particle ablation laser of a single particle mass spectrometer is reduced.
Lyophilic matrix method for dissolution and release studies of nanoscale particles.
Pessi, Jenni; Svanbäck, Sami; Lassila, Ilkka; Hæggström, Edward; Yliruusi, Jouko
2017-10-25
We introduce a system with a lyophilic matrix to aid dissolution studies of powders and particulate systems. This lyophilic matrix method (LM method) is based on the ability to discriminate between non-dissolved particles and the dissolved species. In the LM method the test substance is embedded in a thin lyophilic core-shell matrix. This permits rapid contact with the dissolution medium while minimizing dispersion of non-dissolved particles without presenting a substantial diffusion barrier. The method produces realistic dissolution and release results for particulate systems, especially those featuring nanoscale particles. By minimizing method-induced effects on the dissolution profile of nanopowders, the LM method overcomes shortcomings associated with current dissolution tests.
Preparation of calibrated test packages for particle impact noise detection
NASA Technical Reports Server (NTRS)
1977-01-01
A standard calibration method for any particle impact noise detection (PIND) test system used to detect loose particles responsible for failures in hybrid circuits was developed along with a procedure for preparing PIND standard test devices. Hybrid packages were seeded with a single gold ball, hermetically sealed, leak tested, and PIND tested. Conclusions are presented.
A Comprehensive Comparison of Relativistic Particle Integrators
NASA Astrophysics Data System (ADS)
Ripperda, B.; Bacchini, F.; Teunissen, J.; Xia, C.; Porth, O.; Sironi, L.; Lapenta, G.; Keppens, R.
2018-03-01
We compare relativistic particle integrators commonly used in plasma physics, showing several test cases relevant for astrophysics. Three explicit particle pushers are considered, namely, the Boris, Vay, and Higuera–Cary schemes. We also present a new relativistic fully implicit particle integrator that is energy conserving. Furthermore, a method based on the relativistic guiding center approximation is included. The algorithms are described such that they can be readily implemented in magnetohydrodynamics codes or Particle-in-Cell codes. Our comparison focuses on the strengths and key features of the particle integrators. We test the conservation of invariants of motion and the accuracy of particle drift dynamics in highly relativistic, mildly relativistic, and non-relativistic settings. The methods are compared in idealized test cases, i.e., without considering feedback onto the electrodynamic fields, collisions, pair creation, or radiation. The test cases include uniform electric and magnetic fields, E × B fields, force-free fields, and setups relevant for high-energy astrophysics, e.g., a magnetic mirror, a magnetic dipole, and a magnetic null. These tests have direct relevance for particle acceleration in shocks and in magnetic reconnection.
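Of the pushers compared above, the Boris scheme is the most widely used. A minimal relativistic Boris step can be sketched as follows (normalized units with the charge-to-mass ratio folded into one parameter; this is the generic textbook form, not the authors' implementation):

```python
import numpy as np

def boris_push(x, u, E, B, q_m, dt, c=1.0):
    """One relativistic Boris step. x: position, u: spatial part of
    the four-velocity (gamma * v), q_m: charge-to-mass ratio."""
    # first half of the electric-field kick
    u_minus = u + q_m * E * (dt / 2.0)
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    # magnetic rotation (exactly norm-preserving)
    t = q_m * B * dt / (2.0 * gamma)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)
    # second half of the electric-field kick
    u_new = u_plus + q_m * E * (dt / 2.0)
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / c**2)
    return x + u_new * (dt / gamma_new), u_new
```

Because the magnetic step is a pure rotation of u, the scheme conserves particle energy exactly in a static magnetic field, which is one of the invariants the comparison above tests.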
Method for testing the strength and structural integrity of nuclear fuel particles
Lessing, P.A.
1995-10-17
An accurate method for testing the strength of nuclear fuel particles is disclosed. Each particle includes an upper and lower portion, and is placed within a testing apparatus having upper and lower compression members. The upper compression member includes a depression therein which is circular and sized to receive only part of the upper portion of the particle. The lower compression member also includes a similar depression. The compression members are parallel to each other with the depressions therein being axially aligned. The fuel particle is then placed between the compression members and engaged within the depressions. The particle is then compressed between the compression members until it fractures. The amount of force needed to fracture the particle is thereafter recorded. This technique allows a broader distribution of forces and provides more accurate results compared with systems which distribute forces at singular points on the particle. 13 figs.
Method for testing the strength and structural integrity of nuclear fuel particles
Lessing, Paul A.
1995-01-01
An accurate method for testing the strength of nuclear fuel particles. Each particle includes an upper and lower portion, and is placed within a testing apparatus having upper and lower compression members. The upper compression member includes a depression therein which is circular and sized to receive only part of the upper portion of the particle. The lower compression member also includes a similar depression. The compression members are parallel to each other with the depressions therein being axially aligned. The fuel particle is then placed between the compression members and engaged within the depressions. The particle is then compressed between the compression members until it fractures. The amount of force needed to fracture the particle is thereafter recorded. This technique allows a broader distribution of forces and provides more accurate results compared with systems which distribute forces at singular points on the particle.
Real-Time Detection Method And System For Identifying Individual Aerosol Particles
Gard, Eric Evan; Fergenson, David Philip
2005-10-25
A method and system of identifying individual aerosol particles in real time. Sample aerosol particles are compared against and identified with substantially matching known particle types by producing positive and negative test spectra of an individual aerosol particle using a bipolar single particle mass spectrometer. Each test spectrum is compared to spectra of the same respective polarity in a database of predetermined positive and negative spectra for known particle types and a set of substantially matching spectra is obtained. Finally the identity of the individual aerosol particle is determined from the set of substantially matching spectra by determining a best matching one of the known particle types having both a substantially matching positive spectrum and a substantially matching negative spectrum associated with the best matching known particle type.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with particle swarm optimization. The method uses particle filters to detect and track the local optimum of the lane model in the input image, and then seeks the globally optimal lane model with a particle swarm optimization step. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments; however, the result obtained is usually a locally rather than globally optimal system state. Thus, the particle swarm optimization method is used to refine the result toward the globally optimal system state. Since particle swarm optimization is a global optimization algorithm based on iterative computing, it can find the globally optimal lane model by simulating the food-finding behavior of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
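The global-refinement stage described above can be sketched generically: given candidate states (e.g., from a particle filter), a standard particle swarm optimizer polishes them toward the global optimum of a cost function. This is a generic PSO with conventional inertia and acceleration constants, not the authors' lane-model implementation:

```python
import numpy as np

def pso_refine(cost, seeds, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Refine candidate solutions (`seeds`, shape (n, dim)) with a
    standard particle swarm optimizer; returns the best vector found."""
    rng = np.random.default_rng(0) if rng is None else rng
    pos = np.array(seeds, dtype=float)
    vel = np.zeros_like(pos)
    pbest = pos.copy()                             # per-particle best position
    pbest_cost = np.array([cost(p) for p in pos])  # per-particle best cost
    gbest = pbest[np.argmin(pbest_cost)].copy()    # swarm-wide best position
    for _ in range(iters):
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        # inertia + pull toward personal best + pull toward global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved] = pos[improved]
        pbest_cost[improved] = costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest
```

In the paper's setting, `cost` would score a parametric lane model against image evidence and `seeds` would be the particle-filter hypotheses.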
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
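The sensitivity figure used above is simply the fraction of reference-method failures that the candidate method also flags. A sketch (our own helper, not code from the ANSI standard):

```python
def fit_test_sensitivity(candidate_failed, reference_failed):
    """Sensitivity of a candidate fit-test method: the fraction of fits
    failed by the reference method that the candidate also fails.
    Arguments are parallel sequences of booleans (True = failed fit)."""
    ref_failures = sum(1 for r in reference_failed if r)
    agreed = sum(1 for c, r in zip(candidate_failed, reference_failed) if c and r)
    return agreed / ref_failures if ref_failures else float("nan")
```

Against the ANSI Z88.10 criterion, a candidate method passes only if this value is 0.95 or greater, which is why the choice of reference method changes the apparent performance.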
40 CFR 53.64 - Test procedure: Static fractionator test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... particles of a given size reaching the sampler filter to the mass concentration of particles of the same.... Methods for generating aerosols shall be identical to those prescribed in § 53.62(c)(2). (2) Particle... (with or without an in-line mixing chamber). Validation particle size and quality shall be conducted at...
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Nixon, Robert H. (Inventor); Soli, George A. (Inventor); Blaes, Brent R. (Inventor)
1995-01-01
A method for predicting the SEU susceptibility of a standard-cell D-latch using an alpha-particle sensitive SRAM, SPICE critical charge simulation results, and alpha-particle interaction physics. A technique utilizing test structures to quickly and inexpensively characterize the SEU sensitivity of standard cell latches intended for use in a space environment. This bench-level approach utilizes alpha particles to induce upsets in a low LET sensitive 4-k bit test SRAM. This SRAM consists of cells that employ an offset voltage to adjust their upset sensitivity and an enlarged sensitive drain junction to enhance the cell's upset rate.
Patel, J; Lal, S; Nuss, K; Wilshaw, S P; von Rechenberg, B; Hall, R M; Tipper, J L
2018-04-15
Less-than-optimal particle isolation techniques have impeded analysis of orthopaedic wear debris in vivo. The purpose of this research was to develop and test an improved method for particle isolation from tissue. A volume of 0.018 mm³ of clinically relevant CoCrMo, Ti-6Al-4V or Si₃N₄ particles was injected into rat stifle joints for seven days of in vivo exposure. Following sacrifice, particles were located within tissues using histology. The particles were recovered by enzymatic digestion of periarticular tissue with papain and proteinase K, followed by ultracentrifugation using a sodium polytungstate density gradient. Particles were recovered from all samples, observed using SEM, and the particle composition was verified using EDX, which demonstrated that all isolated particles were free from contamination. Particle size, aspect ratio and circularity were measured using image analysis software. There were no significant changes to the measured parameters of CoCrMo or Si₃N₄ particles before and after the recovery process (KS tests, p > 0.05). Titanium particles were too few before and after isolation to analyse statistically, though sizes and morphologies were similar. Overall the method demonstrated a significant improvement over current methods for isolating particles from tissue, in terms of sensitivity and efficacy of protein removal, and has the potential to be used for the isolation of ultra-low-wearing total joint replacement materials from periprosthetic tissues. This research presents a novel method for the isolation of wear particles from tissue. The methodology outlined in this work would be a valuable resource for future researchers wishing to isolate particles from tissues, either as part of preclinical testing or from explants from patients for diagnostic purposes. It is increasingly recognised that analysis of wear particles is critical to evaluating the safety of an orthopaedic device.
Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri
2018-07-01
This paper deals with the implementation of high-order semi-Lagrangian particle methods to handle high-dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high-dimensional kinetic equations by particle methods.
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle-size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10⁻⁶ m to 70 × 10⁻⁶ m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar-flow interactions of buoyant, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data; but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
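The hydrometer and pipette analyses that these simulations reproduce rest on Stokes' law for the terminal settling velocity of a small sphere in laminar flow. A quick sketch with illustrative values (the quartz-like grain density is an assumption for the example):

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal (Stokes) settling velocity in m/s of a sphere of
    diameter d (m) and density rho_p (kg/m^3) in a fluid of density
    rho_f (kg/m^3) and dynamic viscosity mu (Pa*s)."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

# A 70-micron quartz-like grain (2650 kg/m^3) settling in water:
v = stokes_settling_velocity(70e-6, 2650.0, 1000.0, 1.0e-3)
```

The quadratic dependence on diameter is what lets a timed sedimentation test back out the particle-size distribution from measured concentration profiles.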
Forces on a segregating particle
NASA Astrophysics Data System (ADS)
Lueptow, Richard M.; Shankar, Adithya; Fry, Alexander M.; Ottino, Julio M.; Umbanhowar, Paul B.
2017-11-01
Size segregation in flowing granular materials is not well understood at the particle level. In this study, we perform a series of 3D Discrete Element Method (DEM) simulations to measure the segregation force on a single spherical test particle tethered to a spring in the vertical direction in a shearing bed of particles with gravity acting perpendicular to the shear. The test particle is the same size or larger than the bed particles. At equilibrium, the downward spring force and test particle weight are offset by the upward buoyancy-like force and a size ratio dependent force. We find that the buoyancy-like force depends on the bed particle density and the Voronoi volume occupied by the test particle. By changing the density of the test particle with the particle size ratio such that the buoyancy force matches the test particle weight, we show that the upward size segregation force is a quadratic function of the particle size ratio. Based on this, we report an expression for the net force on a single particle as the sum of a size ratio dependent force, a buoyancy-like force, and the weight of the particle. Supported by NSF Grant CBET-1511450 and the Procter and Gamble Company.
Suzuki, Sara; Aoyama, Yusuke; Umezu, Mitsuo
2017-01-01
Background: The mechanical interaction between blood vessels and medical devices can induce strains in these vessels. Measuring and understanding these strains is necessary to identify the causes of vascular complications. This study develops a method to measure the three-dimensional (3D) distribution of strain using tomographic particle image velocimetry (Tomo-PIV) and compares the measurement accuracy with the gauge strain in tensile tests.

Methods and findings: The test system for measuring 3D strain distribution consists of two cameras, a laser, a universal testing machine, an acrylic chamber with a glycerol-water solution for adjusting the refractive index with the silicone, and dumbbell-shaped specimens mixed with fluorescent tracer particles. 3D images of the particles were reconstructed from 2D images using a multiplicative algebraic reconstruction technique (MART) and motion tracking enhancement. Distributions of the 3D displacements were calculated using a digital volume correlation. To evaluate the accuracy of the measurement method in terms of particle density and interrogation voxel size, the gauge strain and one of the two cameras for Tomo-PIV were used as a video-extensometer in the tensile test. The results show that the optimal particle density and interrogation voxel size are 0.014 particles per pixel and 40 × 40 × 40 voxels with a 75% overlap. The maximum measurement error was maintained at less than 2.5% in the 4-mm-wide region of the specimen.

Conclusions: We successfully developed a method to experimentally measure 3D strain distribution in an elastic silicone material using Tomo-PIV and fluorescent particles. To the best of our knowledge, this is the first report that applies Tomo-PIV to investigate 3D strain measurements in elastic materials with large deformation and validates the measurement accuracy. PMID:28910397
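The final step of such a pipeline, turning a measured 3D displacement field into strain, reduces to differentiating the displacements on the measurement grid. A small-strain sketch (our own illustration, not the authors' digital-volume-correlation code, and valid only for deformations small enough that the linearized strain applies):

```python
import numpy as np

def strain_tensor(u, spacing):
    """Small-strain tensor field from a displacement field u of shape
    (3, nx, ny, nz) sampled on a regular grid with spacing (dx, dy, dz).
    Returns an array of shape (3, 3, nx, ny, nz)."""
    # grads[i, j] = d u_i / d x_j, via central differences
    grads = np.array([np.gradient(u[i], *spacing) for i in range(3)])
    # symmetrize: eps_ij = (du_i/dx_j + du_j/dx_i) / 2
    return 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))
```

For a uniaxial tensile test, the epsilon_xx component of this field is what gets compared against the gauge (video-extensometer) strain.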
Noiseless Vlasov-Poisson simulations with linearly transformed particles
Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...
2014-06-25
We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first-order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. The benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhlig, W. Casey; Heine, Andreas, E-mail: andreas.heine@emi.fraunhofer.de
2015-11-14
A new measurement technique is suggested to augment the characterization and understanding of hypervelocity projectiles before impact. The electromagnetic technique utilizes magnetic diffusion principles to detect particles, measure velocity, and indicate relative particle dimensions. It is particularly suited for detection of small particles that may be difficult to track with current characterization methods, such as high-speed video or flash radiography, but can be readily used for large-particle detection where particle spacing or location is not practical for other measurement systems. In this work, particles down to 2 mm in diameter have been characterized while focusing on confining the detection signal to enable multi-particle characterization with limited particle-to-particle spacing. The focus of the paper is on the theoretical concept and the analysis of its applicability based on analytical and numerical calculation. First proof-of-principle experimental tests serve to further validate the method. Potential applications include the characterization of particles from a shaped-charge jet after its break-up and investigating debris in impact experiments to test theoretical models for the distribution of particle size, number, and velocity.
Eddy Current, Magnetic Particle and Hardness Testing, Aviation Quality Control (Advanced): 9227.04.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
This unit of instruction includes the principles of eddy current, magnetic particle and hardness testing; standards used for analyzing test results; techniques of operating equipment; interpretation of indications; advantages and limitations of these methods of testing; care and calibration of equipment; and safety and work precautions. Motion…
Particle damping applied research on mining dump truck vibration control
NASA Astrophysics Data System (ADS)
Song, Liming; Xiao, Wangqiang; Guo, Haiquan; Yang, Zhe; Li, Zeguang
2018-05-01
Vehicle vibration characteristics have become an important evaluation index for mining dump trucks. In this paper, mining dump truck vibration control based on particle damping technology was studied by combining theoretical simulation with actual testing, and particle damping technology was successfully applied to cab vibration control. Analysis of the test results showed that with a particle damper the cab vibration was reduced markedly, providing methods and a basis for vehicle vibration control research and for applications of particle damping technology.
Experimental determination of the oral bioavailability and bioaccessibility of lead particles
2012-01-01
In vivo estimations of Pb particle bioavailability are costly and variable because of the nature of animal assays. The most feasible alternative for increasing the number of investigations carried out on Pb particle bioavailability is in vitro testing. This testing method requires calibration using in vivo data on an adapted animal model, so that the results will be valid for childhood exposure assessment. Also, the test results must be reproducible within and between laboratories. The Relative Bioaccessibility Leaching Procedure, which is calibrated with in vivo data on soils, presents the highest degree of validation and simplicity. This method could be applied to Pb particles, including those in paint and dust and those in drinking water systems, which, although relevant, have been poorly investigated until now for childhood exposure assessment. PMID:23173867
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Wei; DeCroix, David; Sun, Xin
The attrition of particles is a major industrial concern in many fluidization systems, as it can have undesired effects on product quality and on the reliable operation of process equipment. Therefore, to accommodate the screening and selection of catalysts for a specific process in fluidized beds, risers, or cyclone applications, their attrition propensity is usually estimated through jet cup attrition testing, in which the test material is subjected to high gas velocities in a jet cup. However, this method is far from perfect despite its popularity, largely because of its inconsistency across different testing set-ups. In order to better understand jet cup testing results as well as their sensitivity to different operating conditions, a coupled computational fluid dynamics (CFD) - discrete element method (DEM) model has been developed in the current study to investigate particle attrition in a jet cup and its dependence on various factors, e.g., jet velocity, initial particle size, particle density, and apparatus geometry.
Kim, Jae-Hoon; Chae, Soyeon; Lee, Yunhee; Han, Geum-Jun; Cho, Byeong-Hoon
2014-11-01
This study compared the sensitivity of three shear test methods for measuring the shear bond strength (SBS) of resin cement to zirconia ceramic and evaluated the effects of surface treatment methods on the bonding. Polished zirconia ceramic (Cercon base, DeguDent) discs were randomly divided into four surface treatment groups: no treatment (C), airborne-particle abrasion (A), conditioning with Alloy primer (Kuraray Medical Co.) (P) and conditioning with Alloy primer after airborne-particle abrasion (AP). The bond strengths of the resin cement (Multilink N, Ivoclar Vivadent) to the zirconia specimens of each surface treatment group were determined by three SBS test methods: the conventional SBS test with direct filling of the mold (Ø 4 mm × 3 mm) with resin cement (Method 1), the conventional SBS test with cementation of composite cylinders (Ø 4 mm × 3 mm) using resin cement (Method 2) and the microshear bond strength (μSBS) test with cementation of composite cylinders (Ø 0.8 mm × 1 mm) using resin cement (Method 3). Both the test method and the surface treatment significantly influenced the SBS values. In Method 3, as the SBS values increased, the coefficients of variation decreased and the Weibull parameters increased. The AP groups showed the highest SBS in all of the test methods. Only in Method 3 did the P group show a higher SBS than the A group. The μSBS test was more sensitive to differentiating the effects of surface treatment methods than the conventional SBS tests. Primer conditioning was a stronger contributing factor for the resin bond to zirconia ceramic than was airborne-particle abrasion.
ESTIMATION OF THE NUMBER OF INFECTIOUS BACTERIAL OR VIRAL PARTICLES BY THE DILUTION METHOD
Seligman, Stephen J.; Mickey, M. Ray
1964-01-01
Seligman, Stephen J. (University of California, Los Angeles), and M. Ray Mickey. Estimation of the number of infectious bacterial or viral particles by the dilution method. J. Bacteriol. 88:31–36. 1964.—For viral or bacterial systems in which discrete foci of infection are not obtainable, it is possible to obtain an estimate of the number of infectious particles by use of the quantal response if the assay system is such that one infectious particle can elicit the response. Unfortunately, the maximum likelihood estimate is difficult to calculate, but, by the use of a modification of Haldane's approximation, it is possible to construct a table which facilitates calculation of both the average number of infectious particles and its relative error. Additional advantages of the method are that the number of test units per dilution can be varied, the dilutions need not bear any fixed relation to each other, and the one-particle hypothesis can be readily tested. PMID:14197902
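The one-particle (Poisson) model underlying this quantal dilution estimate can be sketched numerically. The routine below is a minimal illustration of that model using a dense grid search for the maximum-likelihood value, not the Haldane-approximation table the authors construct; the example dilutions and counts are hypothetical.

```python
import math

def estimate_particles(dilutions, n_positive, n_total):
    """MLE of the mean number of infectious particles per undiluted test
    unit, assuming one particle suffices to elicit a quantal response:
    P(positive at dilution d) = 1 - exp(-m*d).
    A dense grid search stands in for Haldane-style approximations."""
    def log_lik(m):
        ll = 0.0
        for d, pos, tot in zip(dilutions, n_positive, n_total):
            p = 1.0 - math.exp(-m * d)
            if pos > 0:
                ll += pos * math.log(p)
            ll += (tot - pos) * (-m * d)   # negatives: log(exp(-m*d))
        return ll
    grid = [10 ** (i / 200.0) for i in range(-400, 401)]  # 0.01 .. 100
    return max(grid, key=log_lik)

# hypothetical assay: 9/10 positive undiluted, 2/10 positive at 1:10
m_hat = estimate_particles([1.0, 0.1], [9, 2], [10, 10])
```

As the abstract notes, the number of test units per dilution can vary and the dilutions need not bear any fixed relation to each other; both are free inputs here.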
Jaques, Peter A; Portnoff, Lee
2017-12-01
The risk of workers' exposure to aerosolized particles has increased with the upsurge in the production of engineered nanomaterials. Currently, a whole-body standard test method for measuring particle penetration through protective clothing ensembles is not available. Those available for respirators neglect the most common challenges to ensembles, because they use active vacuum-based filtration, designed to simulate breathing, rather than the positive forces of wind experienced by workers. Thus, a passive method that measures wind-driven particle penetration through ensemble fabric has been developed and evaluated. The apparatus includes a multidomain magnetic passive aerosol sampler housed in a shrouded penetration cell. Performance evaluation was conducted in a recirculation aerosol wind tunnel using paramagnetic Fe 3 O 4 (i.e., iron (II, III) oxide) particles for the challenge aerosol. The particles were collected on a PVC substrate and quantified using a computer-controlled scanning electron microscope. Particle penetration levels were determined by taking the ratio of the particle number collected on the substrate with a fabric (sample) to that without a fabric (control). Results for each fabric obtained by this passive method were compared to previous results from an automated vacuum-based active fractional efficiency tester (TSI 3160), which used sodium chloride particles as the challenge aerosol. Four nonwoven fabrics with a range of thicknesses, porosities, and air permeabilities were evaluated. Smoke tests and flow modeling showed the passive sampler shroud provided smooth (non-turbulent) air flow along the exterior of the sampler, such that disturbance of flow stream lines and distortion of the particle size distribution were reduced. 
Differences between the active and passive approaches were as high as 5.5-fold for the fabric with the lowest air permeability (0.00067 m/sec-Pa), suggesting the active method overestimated penetration in dense fabrics because the active method draws air at a constant flow rate regardless of the resistance of the test fabric. The passive method indicated greater sensitivity since penetration decreased in response to the increase in permeability.
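The penetration metric described above is a simple count ratio; a minimal sketch follows, with the example counts being hypothetical values rather than data from the study.

```python
def penetration(count_with_fabric, count_without_fabric):
    # particle number collected on the substrate with the test fabric
    # (sample), divided by the count with no fabric (control)
    return count_with_fabric / count_without_fabric

def fold_difference(p_active, p_passive):
    # how far the active (vacuum-based) result departs from the passive one
    return p_active / p_passive

p = penetration(12, 400)            # hypothetical counts -> 0.03
fold = fold_difference(0.11, 0.02)  # hypothetical penetrations -> 5.5-fold
```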
NASA Technical Reports Server (NTRS)
Colver, Gerald M.; Greene, Nathanael; Shoemaker, David; Xu, Hua
2003-01-01
The Electric Particulate Suspension (EPS) is a combustion ignition system being developed at Iowa State University for evaluating quenching effects of powders in microgravity (quenching distance, ignition energy, flammability limits). Because of the high cloud uniformity possible and its simplicity, the EPS method has potential for "benchmark" design of quenching flames that would provide NASA and the scientific community with a new fire standard. Microgravity is expected to increase suspension uniformity even further and extend combustion testing to higher concentrations (rich fuel limit) than is possible at normal gravity. Two new combustion parameters are being investigated with this new method: (1) the particle velocity distribution and (2) the particle-oxidant slip velocity. Both walls and (inert) particles can be tested as quenching media. The EPS method supports combustion modeling by providing accurate measurement of flame-quenching distance as a parameter in laminar flame theory, as it closely relates to characteristic flame thickness and flame structure. Because of its design simplicity, EPS is suitable for testing on the International Space Station (ISS). Laser scans showing stratification effects at 1-g have been studied for different materials (aluminum, glass, and copper). PTV/PIV and a leak-hole sampling rig give the particle velocity distribution, with the particle slip velocity evaluated using LDA. Sample quenching and ignition energy curves are given for aluminum powder. Testing is planned for the KC-135 and NASA's two-second drop tower. Only 1-g ground-based data have been reported to date.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.
2017-10-12
Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.
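The adaptive-step idea can be illustrated on a deliberately simplified collision model, a scalar drag-plus-diffusion SDE integrated by Euler-Maruyama with step doubling. This is a sketch of the time-stepping control only; it is not the Beliaev–Budker operator, and the coefficients `nu` and `D` are illustrative placeholders.

```python
import math
import random

def adaptive_collision_trajectory(v0, t_end, nu=1.0, D=0.1,
                                  tol=1e-3, dt=1e-2, seed=1):
    """Adaptive-step Monte Carlo integration of the toy collision model
    dv = -nu*v dt + sqrt(2*D) dW.  Step doubling: one full step is
    compared against two half steps driven by the SAME Brownian
    increments; the step is accepted and grown when the discrepancy is
    within tol, otherwise halved and retried."""
    rng = random.Random(seed)
    t, v, accepted = 0.0, v0, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        h = 0.5 * dt
        dW1 = math.sqrt(h) * rng.gauss(0.0, 1.0)
        dW2 = math.sqrt(h) * rng.gauss(0.0, 1.0)
        v_full = v - nu * v * dt + math.sqrt(2.0 * D) * (dW1 + dW2)
        v_half = v - nu * v * h + math.sqrt(2.0 * D) * dW1
        v_half = v_half - nu * v_half * h + math.sqrt(2.0 * D) * dW2
        if abs(v_full - v_half) <= tol:
            t += dt
            v = v_half
            accepted += 1
            dt *= 1.5   # grow the step after an accepted move
        else:
            dt *= 0.5   # reject and retry with a smaller step
    return v, accepted
```

The shared-noise comparison is what makes the error estimate meaningful for a stochastic step; comparing steps driven by independent noise would conflate discretization error with statistical fluctuation.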
40 CFR 798.4350 - Inhalation developmental toxicity study.
Code of Federal Regulations, 2011 CFR
2011-07-01
... particles of the test substance. It is used to compare particles of different sizes, shapes, and densities... substance given daily per unit volume of air. (c) Principle of the test method. The test substance is...) The temperature at which the test is performed should be maintained at 22 °C (±2°) for rodents or 20...
Far Field Modeling Methods For Characterizing Surface Detonations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, A.
2015-10-08
Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket-propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations, which indicate that in many terrains and atmospheric conditions the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.
NASA Technical Reports Server (NTRS)
Colver, Gerald M.; Goroshin, Samuel; Lee, John H. S.
2001-01-01
A cooperative study is being carried out between Iowa State University and McGill University. The new study concerns wall and particle quenching effects in particle-gas mixtures. The primary objective is to measure and interpret flame quenching distances, flammability limits, and burning velocities in particulate suspensions. A secondary objective is to measure particle slip velocities and the particle velocity distribution, as these influence flame propagation. Two suspension techniques will be utilized and compared: (1) electric particle suspension (EPS) and (2) flow dispersion. Microgravity tests will permit testing of larger particles and higher and more uniform dust concentrations than are possible in normal gravity.
Measuring particle charge in an rf dusty plasma
NASA Astrophysics Data System (ADS)
Fung, Jerome; Liu, Bin; Goree, John; Nosenko, Vladimir
2004-11-01
A dusty plasma is an ionized gas containing micron-size particles of solid matter. A particle gains a large negative charge by collecting electrons and ions from the plasma. In a gas discharge, particles can be levitated by the sheath electric field above a horizontal planar electrode. Most dusty plasma experiments require a knowledge of the particle charge, which is a key parameter for all interactions with other particles and the plasma electric field. Several methods have been developed in the literature to measure the charge. The vertical resonance method uses Langmuir probe measurements of the ion density and video camera measurements of the amplitude of vertical particle oscillations, which are excited by modulating the rf voltage. Here, we report a new method that is a variation of the vertical resonance method. It uses the plasma potential and particle height, which can be measured more accurately than the ion density. We tested this method and compared the resulting charge to values obtained using the original resonance method as well as sound speed methods. Work supported by an NSF REU grant, NASA and DOE.
ERIC Educational Resources Information Center
Groseclose, Richard
This third in a series of six modules for a course titled Nondestructive Examination (NDE) Techniques II explains the principles of magnets and magnetic fields and how they are applied in magnetic particle testing, describes the theory and methods of magnetizing test specimens, describes the test equipment used, discusses the principles and…
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
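For reference, a baseline PSO loop (without the Nelder-Mead hybridization this article proposes) can be sketched as follows; the inertia and acceleration coefficients are conventional textbook values, not the article's settings.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for an unconstrained minimization
    of f over a box.  Each particle tracks its personal best; the swarm
    tracks a global best that steers all velocities."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

The hybridization the article describes would replace parts of this velocity/position update with Nelder-Mead simplex moves inside the same loop, rather than running the two algorithms in sequence.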
Extension of moment projection method to the fragmentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro
2017-04-15
The method of moments is a simple but efficient method of solving the population balance equation, which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested on 13 different test cases with different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), the hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA), and the advantages of MPM are highlighted.
Hybrid finite element and Brownian dynamics method for charged particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Gary A., E-mail: ghuber@ucsd.edu; Miao, Yinglong; Zhou, Shenggao
2016-04-28
Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but it was limited by the lack of interactions among the particles; the force on each particle had to come from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.
Chemical test for mammalian feces in grain products: collaborative study.
Gerber, H R
1989-01-01
A collaborative study was conducted to validate the use of the AOAC alkaline phosphatase method for mammalian feces in corn meal, 44.B01-44.B06, for 7 additional products: brown rice cream, oat bran, grits, semolina, pasta flour, farina, and barley plus (a mixture of barley, oat bran, and brown rice). The proposed method determines the presence of alkaline phosphatase, an enzyme contained in mammalian feces, by using phenolphthalein diphosphate as the enzyme substrate in a test agar medium. Fecal matter is separated from the grain products by specific gravity differences in 1% test agar. As the product is distributed on liquid test agar, fecal fragments float while the grain products sink. The alkaline phosphatase cleaves phosphate radicals from phenolphthalein diphosphate, generating free phenolphthalein, which produces a pink to red-purple color around the fecal particles in the previously colorless medium. Collaborators' recovery averages ranged from 21.7 particles (72.3%) for oat bran to 25.3 particles (84.3%) for semolina at the 30 particle spike level. Overall average background was 0.4 positive reactions per food type. The collaborators reported that the method was quick, simple, and easy to use. The method has been approved interim official first action for all 7 grain products.
IMPLEMENTATION OF SINK PARTICLES IN THE ATHENA CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong Hao; Ostriker, Eve C., E-mail: hgong@astro.umd.edu, E-mail: eco@astro.princeton.edu
2013-01-15
We describe the implementation and tests of sink particle algorithms in the Eulerian grid-based code Athena. The introduction of sink particles enables the long-term evolution of systems in which localized collapse occurs and in which it is impractical (or unnecessary) to resolve the accretion shocks at the centers of collapsing regions. We discuss the similarities and differences of our methods compared to other implementations of sink particles. Our criteria for sink creation are motivated by the properties of the Larson-Penston collapse solution. We use standard particle-mesh methods to compute particle and gas gravity together. Accretion of mass and momenta onto sinks is computed using fluxes returned by the Riemann solver. A series of tests based on previous analytic and numerical collapse solutions is used to validate our method and implementation. We demonstrate use of our code for applications with a simulation of planar converging supersonic turbulent flow, in which multiple cores form and collapse to create sinks; these sinks continue to interact and accrete from their surroundings over several Myr.
Approach to magnetic neutron capture therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Anatoly A.; Podoynitsyn, Sergey N.; Filippov, Victor I.
2005-11-01
Purpose: The method of magnetic neutron capture therapy can be described as a combination of two methods: magnetic localization of drugs using magnetically targeted carriers, and neutron capture therapy itself. Methods and Materials: In this work, we produced and tested two types of particles for such therapy. Composite ultradispersed ferro-carbon (Fe-C) and iron-boron (Fe-B) particles were formed from vapors of the respective materials. Results: Two-component ultradispersed particles containing Fe and C were tested as magnetic adsorbents of L-boronophenylalanine and borax, and it was shown that borax sorption could be effective for creating a high concentration of boron atoms in the area of the tumor. The kinetics of boron release into physiologic solution demonstrate that ultradispersed Fe-B (10%) could be applied for effective magnetic neutron capture therapy. Conclusion: Both types of particles have high magnetization and magnetic homogeneity, allow stable magnetic suspensions to be formed, and have low toxicity.
Tracking and people counting using Particle Filter Method
NASA Astrophysics Data System (ADS)
Sulistyaningrum, D. R.; Setiyono, B.; Rizky, M. S.
2018-03-01
In recent years, technology has developed quite rapidly, especially in the field of object tracking. Tracking becomes more difficult when the objects under study are people and their number is large. The purpose of this research is to apply the Particle Filter method for tracking and counting people in a certain area. Tracking people is harder still in the presence of obstacles, one of which is occlusion. The stages of the tracking and people-counting scheme in this study include pre-processing, segmentation using a Gaussian Mixture Model (GMM), tracking using a particle filter, and counting based on centroids. The Particle Filter method uses the estimated motion included in the model used. The test results show that tracking and people counting can be done well, with average accuracies of 89.33% and 77.33%, respectively, over six test videos. In the process of tracking people, the results are good when there is partial occlusion or no occlusion.
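The tracking step of a particle filter can be sketched in one dimension. This is a generic bootstrap filter with a random-walk motion model and stratified resampling, shown for illustration only; the paper's pipeline additionally involves GMM segmentation and centroid counting, which are out of scope here, and the noise levels `q` and `r` are assumed values.

```python
import math
import random

def bootstrap_particle_filter(observations, n=500, q=0.5, r=1.0, seed=0):
    """Minimal 1-D bootstrap particle filter: predict with random-walk
    process noise q, weight by a Gaussian observation likelihood with
    noise r, estimate by the weighted mean, then resample (stratified)."""
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    estimates = []
    for y in observations:
        particles = [x + rng.gauss(0.0, q) for x in particles]       # predict
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]                        # normalize
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        cumsum, c = [], 0.0                                           # resample
        for w in weights:
            c += w
            cumsum.append(c)
        resampled, i = [], 0
        for k in range(n):
            u = (k + rng.random()) / n
            while i < n - 1 and cumsum[i] < u:
                i += 1
            resampled.append(particles[i])
        particles = resampled
    return estimates
```

In a video-tracking setting the observation `y` would be a detected centroid from the segmentation stage, and the state would typically be a 2-D position (plus velocity) rather than a scalar.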
Evaluation of filter media for particle number, surface area and mass penetrations.
Li, Lin; Zuo, Zhili; Japuntich, Daniel A; Pui, David Y H
2012-07-01
The National Institute for Occupational Safety and Health (NIOSH) developed a standard for respirator certification under 42 CFR Part 84, using a TSI 8130 automated filter tester with photometers. A recent study showed that photometric detection methods may not be sensitive for measuring engineered nanoparticles. Present NIOSH standards for penetration measurement are mass-based; however, the threshold limit value/permissible exposure limit for engineered nanoparticle worker exposure is not yet clear. There is a lack of standardized filter testing for engineered nanoparticles, and the development of a simple nanoparticle filter test is indicated. To better understand filter performance against engineered nanoparticles and the correlations among different tests, initial penetration levels of one fiberglass and two electret filter media were measured using a series of polydisperse and monodisperse aerosol test methods at two different laboratories (University of Minnesota Particle Technology Laboratory and 3M Company). Monodisperse aerosol penetrations were measured by a TSI 8160 using NaCl particles from 20 to 300 nm. Particle penetration curves and overall penetrations were measured by scanning mobility particle sizer (SMPS), condensation particle counter (CPC), nanoparticle surface area monitor (NSAM), and TSI 8130 at two face velocities and three layer thicknesses. Results showed that reproducible, comparable filtration data were achieved between the two laboratories, with proper control of test conditions and calibration procedures. For particle penetration curves, the experimental results of monodisperse testing agreed well with polydisperse SMPS measurements. The most penetrating particle sizes (MPPSs) of the electret and fiberglass filter media were ~50 and 160 nm, respectively. For overall penetrations, the CPC and NSAM results for polydisperse aerosols were close to the penetration at the corresponding median particle sizes.
For each filter type, power-law correlations between the penetrations measured by different instruments show that the NIOSH TSI 8130 test may be used to predict penetrations at the MPPS as well as the CPC and NSAM results with polydisperse aerosols. It is recommended to use dry air (<20% RH) as makeup air in the test system, to prevent sodium chloride particles from deliquescing and to minimize the challenge particle dielectric constant, and to use an adequate neutralizer to fully neutralize the polydisperse challenge aerosol. For a simple nanoparticle penetration test, it is recommended to use a polydisperse aerosol challenge with a geometric mean of ~50 nm, with the CPC or the NSAM as the detector.
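Power-law correlations of the kind reported here are conventionally fitted by ordinary least squares in log-log space, since P = a·x^b becomes linear after taking logarithms. The sketch below shows that standard procedure with synthetic data; it does not use the study's measurements.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b, done as a straight line in
    log-log space: log y = log a + b * log x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# synthetic check: data generated from y = 2 * x**1.5 is recovered exactly
a, b = fit_power_law([1.0, 2.0, 4.0, 8.0],
                     [2.0 * x ** 1.5 for x in [1.0, 2.0, 4.0, 8.0]])
```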
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2), while full hierarchy searches can be even more expensive, making analysis highly inefficient for multiple large data sets. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
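The pairwise test at the heart of any binary search is a two-body energy check: a pair is mutually bound when its relative orbital energy is negative. A sketch in normalized units (G = 1 by default, an arbitrary choice; this is the physical criterion, not the companion code's O(N log N) search structure):

```python
import math

def is_bound(m1, m2, r1, v1, r2, v2, G=1.0):
    """Two bodies are mutually bound when their relative orbital energy
    E = 0.5 * mu * |dv|**2 - G*m1*m2/|dr| is negative (mu: reduced mass)."""
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    dist = math.sqrt(sum(c * c for c in dr))
    mu = m1 * m2 / (m1 + m2)
    energy = 0.5 * mu * sum(c * c for c in dv) - G * m1 * m2 / dist
    return energy < 0.0
```

A brute-force search applies this check to all N(N-1)/2 pairs, which is the O(N^2) cost the abstract contrasts with the tree-based O(N log N) approach.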
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
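For context, the classical Boris push is the standard reference scheme for the magnetized Lorentz-force system that such variational integrators are measured against; it is not itself one of the new gauge-invariant integrators described above. A minimal sketch:

```python
def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(x, v, E, B, qm, dt):
    """One Boris step for dv/dt = qm*(E + v x B): half electric kick,
    exact-magnitude magnetic rotation, half electric kick, then drift.
    The rotation preserves |v| exactly, so energy is conserved in a
    pure magnetic field."""
    v_minus = [v[i] + 0.5 * qm * dt * E[i] for i in range(3)]
    t = [0.5 * qm * dt * B[i] for i in range(3)]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    cvt = _cross(v_minus, t)
    v_prime = [v_minus[i] + cvt[i] for i in range(3)]
    cvs = _cross(v_prime, s)
    v_plus = [v_minus[i] + cvs[i] for i in range(3)]
    v_new = [v_plus[i] + 0.5 * qm * dt * E[i] for i in range(3)]
    x_new = [x[i] + dt * v_new[i] for i in range(3)]
    return x_new, v_new
```

With E = 0 and a uniform B, the particle gyrates and its speed stays constant to machine precision over arbitrarily many steps, which is the kind of long-time qualitative fidelity the abstract emphasizes.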
NASA Astrophysics Data System (ADS)
Mansouri, Amir
The surface degradation of equipment due to consecutive impacts of abrasive particles carried by fluid flow is called solid particle erosion. Solid particle erosion occurs in many industries including oil and gas. In order to prevent abrupt failures and costly repairs, it is essential to predict the erosion rate and identify the locations of the equipment that are mostly at risk. Computational Fluid Dynamics (CFD) is a powerful tool for predicting the erosion rate. Erosion prediction using CFD analysis includes three steps: (1) obtaining flow solution, (2) particle tracking and calculating the particle impact speed and angle, and (3) relating the particle impact information to mass loss of material through an erosion equation. Erosion equations are commonly generated using dry impingement jet tests (sand-air), since the particle impact speed and angle are assumed not to deviate from conditions in the jet. However, in slurry flows, a wide range of particle impact speeds and angles are produced in a single slurry jet test with liquid and sand particles. In this study, a novel and combined CFD/experimental method for developing an erosion equation in slurry flows is presented. In this method, a CFD analysis is used to characterize the particle impact speed, angle, and impact rate at specific locations on the test sample. Then, the particle impact data are related to the measured erosion depth to achieve an erosion equation from submerged testing. Traditionally, it was assumed that the erosion equation developed based on gas testing can be used for both gas-sand and liquid-sand flows. The erosion equations developed in this work were implemented in a CFD code, and CFD predictions were validated for various test conditions. It was shown that the erosion equation developed based on slurry tests can significantly improve the local thickness loss prediction in slurry flows. 
Finally, a generalized erosion equation is proposed that can be used to predict the erosion rate in gas-sand, water-sand, and viscous liquid-sand flows with high accuracy. In order to gain a better understanding of the erosion mechanism, a comprehensive experimental study was also conducted to investigate the important factors influencing the erosion rate in gas-sand and slurry flows. The wear pattern and total erosion ratio were measured in a direct impingement jet geometry (for both dry and submerged impingement jets). The effects of fluid viscosity, abrasive particle size, particle impact speed, jet inclination angle, standoff distance, sand concentration, and exposure time were investigated, and the eroded samples were studied with Scanning Electron Microscopy (SEM) to examine the erosion micro-structure. In addition, sand particle impact speeds and angles were measured with a Particle Image Velocimetry (PIV) system in two types of erosion testers (gas-solid and liquid-solid impinging jets), using the Particle Tracking Velocimetry (PTV) technique, which is capable of tracking individual small particles. CFD modeling was performed to predict the particle impact data, and very good agreement between the CFD results and the PTV measurements was observed.
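Erosion equations of the kind developed in this work typically take a speed-power-law form, ER = K * V^n * F(theta). The sketch below uses illustrative coefficients and a generic angle function, not the fitted values from this study.

```python
import math

def erosion_ratio(v, theta_deg, K=1.5e-8, n=2.4):
    """Erosion ratio (mass of material lost per mass of impacting sand)
    modeled as ER = K * v**n * F(theta).  K, n and F here are
    illustrative placeholders; real values come from fitting
    impingement-test data as described above."""
    theta = math.radians(theta_deg)
    f = math.sin(theta) * (2.0 - math.sin(theta))  # generic angle dependence
    return K * v ** n * f
```

One consequence of the power-law speed dependence: doubling the particle impact speed multiplies the predicted erosion ratio by 2**n, regardless of the angle function, which is why accurate impact-speed characterization (e.g., by PTV) matters so much.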
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system, including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested, for generating pressure differentials that create sufficient turbulence to uniformly disperse trace-agent particles within the airstream upstream and downstream of the filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage and compared to determine the extent of particle leakage past the filter stage.
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
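A minimal 1-D sketch of the residual-driven idea, assuming particle count is the proxy for per-process execution time. The actual method works on 2-D orthogonal decompositions with a multigrid-accelerated nonlinear solver; the Gauss-Seidel sweep and function names below are illustrative constructions:

```python
import bisect

def domain_counts(xs, b):
    """Particle count in each sub-domain delimited by sorted boundaries b."""
    edges = [float("-inf")] + list(b) + [float("inf")]
    return [bisect.bisect_left(xs, edges[i + 1]) - bisect.bisect_left(xs, edges[i])
            for i in range(len(edges) - 1)]

def balance_boundaries(xs, n_dom, sweeps=50):
    """Iteratively move sub-domain boundaries so each domain holds a similar
    particle count. Each Gauss-Seidel sweep drives the local residual (the
    load difference between a boundary's two neighbors) toward zero by
    placing the boundary at the median particle of its bracket."""
    xs = sorted(xs)
    lo, hi = xs[0], xs[-1]
    # start from equally spaced boundaries
    b = [lo + (hi - lo) * (i + 1) / n_dom for i in range(n_dom - 1)]
    for _ in range(sweeps):
        for i in range(len(b)):
            left = b[i - 1] if i > 0 else float("-inf")
            right = b[i + 1] if i + 1 < len(b) else float("inf")
            j0 = bisect.bisect_left(xs, left)
            j1 = bisect.bisect_left(xs, right)
            b[i] = xs[(j0 + j1) // 2]
    return b
```

For a strongly non-uniform particle distribution the converged boundaries equalize the per-domain counts, which is the 1-D analogue of equalizing per-process execution time.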
AN AUTOMATED SYSTEM FOR PRODUCING UNIFORM SURFACE DEPOSITS OF DRY PARTICLES
A laboratory system has been constructed that uniformly deposits dry particles onto any type of test surface. Devised as a quality assurance tool for the purpose of evaluating surface sampling methods for lead, it also may be used to generate test surfaces for any contaminant ...
Quasi-three-dimensional particle imaging with digital holography.
Kemppinen, Osku; Heinson, Yuli; Berg, Matthew
2017-05-01
In this work, approximate three-dimensional structures of microparticles are generated with digital holography using an automated focus method. This is done by stacking a collection of silhouette-like images of a particle reconstructed from a single in-line hologram. The method enables estimation of the particle size in the longitudinal and transverse dimensions. Using the discrete dipole approximation, the method is tested computationally by simulating holograms for a variety of particles and attempting to reconstruct the known three-dimensional structure. It is found that poor longitudinal resolution strongly perturbs the reconstructed structure, yet the method does provide an approximate sense for the structure's longitudinal dimension. The method is then applied to laboratory measurements of holograms of single microparticles and their scattering patterns.
A method for grindability testing using the Scirocco disperser.
Bonakdar, Tina; Ali, Muzammil; Dogbe, Selasi; Ghadiri, Mojtaba; Tinke, Arjen
2016-03-30
In the early stages of development of a new Active Pharmaceutical Ingredient (API), insufficient material quantity is available for addressing processing issues, and it is highly desirable to be able to assess processability issues using the smallest possible powder sample quantity. A good example is milling of new active pharmaceutical ingredients. For particle breakage that is sensitive to strain rate, impact testing is the most appropriate method. However, there is no commercially available single particle impact tester for fine particulate solids. In contrast, dry powder dispersers, such as the Scirocco disperser of the Malvern Mastersizer 2000, are widely available, and can be used for this purpose, provided particle impact velocity is known. However, the distance within which the particles can accelerate before impacting on the bend is very short and different particle sizes accelerate to different velocities before impact. As the breakage is proportional to the square of impact velocity, the interpretation of breakage data is not straightforward and requires an analysis of particle velocity as a function of size, density and shape. We report our work using an integrated experimental and CFD modelling approach to evaluate the suitability of this device as a grindability testing device, with the particle sizing being done immediately following dispersion by laser diffraction. Aspirin, sucrose and α-lactose monohydrate are tested using narrow sieve cuts in order to minimise variations in impact velocity. The tests are carried out at eight different air nozzle pressures. As intuitively expected, smaller particles accelerate faster and impact the wall at a higher velocity compared to the larger particles. However, for a given velocity the extent of breakage of larger particles is larger. 
Using a numerical simulation based on CFD, the relationship between impact velocity and particle size and density has been established assuming a spherical shape, and using one-way coupling, as the particle concentration is very low. Taking account of these dependencies, a clear unification of the change in the specific surface area as a function of particle size, density and impact velocity is observed, and the slope of the fitted line gives a measure of grindability for each material. The trend of data obtained here matches the one obtained by single particle impact testing. Hence aerodynamic dispersion of solids by the Scirocco disperser can be used to evaluate the ease of grindability of different materials. Copyright © 2016 Elsevier B.V. All rights reserved.
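The unification step can be sketched as a one-parameter fit: assuming (as in Ghadiri-style impact-attrition scaling) that the change in specific surface area varies linearly with the lumped variable ρ·d·v², the fitted slope serves as the grindability index. The scaling choice and the numbers used below are assumptions for illustration, not the paper's exact correlation:

```python
def grindability_slope(rho, d, v, delta_s):
    """Least-squares slope (through the origin) of the change in specific
    surface area versus the lumped variable rho * d * v**2. The slope acts
    as a relative grindability index for a material; inputs are parallel
    lists over test conditions."""
    x = [r * s * u ** 2 for r, s, u in zip(rho, d, v)]
    num = sum(xi * yi for xi, yi in zip(x, delta_s))
    den = sum(xi * xi for xi in x)
    return num / den
```

Comparing the slopes obtained for different materials (e.g. aspirin versus sucrose) then ranks their ease of grinding on a single scale.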
Qualification of oil-based tracer particles for heated Ludwieg tubes
NASA Astrophysics Data System (ADS)
Casper, Marcus; Stephan, Sören; Scholz, Peter; Radespiel, Rolf
2014-06-01
The generation, insertion, pressurization, and use of oil-based tracer particles are qualified for application in heated flow facilities, typically hypersonic facilities such as Ludwieg tubes. The operative challenges are to ensure a sub-critical amount of seeding material in the heated part, to qualify the methods that are used to generate the seeding and pressurize it to storage tube pressure, and to test specific oil types. The mass of the seeding material is held below the lower explosion limit so that operation is safe. The basis for the tracers is qualified in off-situ particle size measurements. In the main part, different methods and operational procedures are tested with respect to their ability to generate a suitable amount of seeding in the test section. For the best method, the relaxation time of the tracers is qualified by the oblique shock wave test. The results show that the use of a special temperature-resistant lubricant oil, "Plantfluid", is feasible under the conditions of a Mach-6 Ludwieg tube with heated storage tube. The method gives high-quality tracers with high seeding densities. Although the experimental results of the oblique shock wave test differ from theoretical predictions of relaxation time, the relaxation time of 3.2 μs under the denser tunnel conditions with 18 bar storage tube pressure is low enough to allow the use of the seeding for meaningful particle image velocimetry studies.
Improved Thermoplastic/Iron-Particle Transformer Cores
NASA Technical Reports Server (NTRS)
Wincheski, Russell A.; Bryant, Robert G.; Namkung, Min
2004-01-01
A method of fabricating improved transformer cores from composites of thermoplastic matrices and iron particles has been invented. Relative to commercially available laminated-iron-alloy transformer cores, the cores fabricated by this method weigh less and are less expensive. Relative to prior polymer-matrix/iron-particle composite-material transformer cores, the cores fabricated by this method can be made mechanically stronger and more magnetically permeable. In addition, whereas some prior cores have exhibited significant eddy-current losses, the cores fabricated by this method exhibit very small eddy-current losses. The cores made by this method can be expected to be attractive for use in diverse applications, including high-signal-to-noise transformers, stepping motors, and high-frequency ignition coils. The present method is a product of an experimental study of the relationships among fabrication conditions, final densities of iron particles, and mechanical and electromagnetic properties of fabricated cores. Among the fabrication conditions investigated were molding pressures (83, 104, and 131 MPa) and molding temperatures (250, 300, and 350 °C). Each block of core material was made by uniaxial-compression molding, at the applicable pressure/temperature combination, of a mixture of 2 weight percent of LaRC (or equivalent high-temperature soluble thermoplastic adhesive) with 98 weight percent of approximately spherical iron particles having diameters in the micron range. Each molded block was cut into square cross-section rods that were used as core specimens in mechanical and electromagnetic tests. Some of the core specimens were annealed at 900 °C and cooled slowly before testing. For comparison, a low-carbon-steel core was also tested. The results of the tests showed that density, hardness, and rupture strength generally increased with molding pressure and temperature, though the correlation was rather weak. 
The weakness of the correlation was attributed to the pores in the specimens. The maximum relative permeabilities of cores made without annealing ranged from 30 to 110, while those of cores made with annealing ranged from 900 to 1,400. However, the greater permeabilities of the annealed specimens were not associated with noticeably greater densities. The major practical result of the investigation was the discovery of an optimum distribution of iron-particle sizes: it was found that eddy-current losses in the molded cores were minimized by using 100 mesh (corresponding to particles with diameters less than or equal to 100 μm) iron particles. The effect of optimization of particle sizes on eddy-current losses is depicted in the figure.
A deformable particle-in-cell method for advective transport in geodynamic modeling
NASA Astrophysics Data System (ADS)
Samuel, Henri
2018-06-01
This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to the development of under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution and may require the use of a prohibitive number of particles in order to maintain the solution accuracy at an acceptable level. The new proposed approach relies on the use of deformable kernels that account for the strain history in the vicinity of particles. It results in a significant improvement of the spatial sampling by the particles, leading to a much higher accuracy of the numerical solution for a reasonable computational extra cost. Various 2D tests were conducted to compare the performance of the deformable particle-in-cell method with the standard particle-in-cell approach. These consistently show that, at comparable accuracy, the deformable particle-in-cell method is four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.
Mayhew, Terry M; Mühlfeld, Christian; Vanhecke, Dimitri; Ochs, Matthias
2009-04-01
Detecting, localising and counting ultrasmall particles and nanoparticles in sub- and supra-cellular compartments are of considerable current interest in basic and applied research in biomedicine, bioscience and environmental science. For particles with sufficient contrast (e.g. colloidal gold, ferritin, heavy metal-based nanoparticles), visualization requires the high resolutions achievable by transmission electron microscopy (TEM). Moreover, if particles can be counted, their spatial distributions can be subjected to statistical evaluation. Whatever the level of structural organisation, particle distributions can be compared between different compartments within a given structure (cell, tissue and organ) or between different sets of structures (in, say, control and experimental groups). Here, a portfolio of stereology-based methods for drawing such comparisons is presented. We recognise two main scenarios: (1) section surface localisation, in which particles, exemplified by antibody-conjugated colloidal gold particles or quantum dots, are distributed at the section surface during post-embedding immunolabelling, and (2) section volume localisation (or full section penetration), in which particles are contained within the cell or tissue prior to TEM fixation and embedding procedures. Whatever the study aim or hypothesis, the methods for quantifying particles rely on the same basic principles: (i) unbiased selection of specimens by multistage random sampling, (ii) unbiased estimation of particle number and compartment size using stereological test probes (points, lines, areas and volumes), and (iii) statistical testing of an appropriate null hypothesis. To compare different groups of cells or organs, a simple and efficient approach is to compare the observed distributions of raw particle counts by a combined contingency table and chi-squared analysis. 
Compartmental chi-squared values making substantial contributions to total chi-squared values help identify where the main differences between distributions reside. Distributions between compartments in, say, a given cell type, can be compared using a relative labelling index (RLI) or relative deposition index (RDI) combined with a chi-squared analysis to test whether or not particles preferentially locate in certain compartments. This approach is ideally suited to analysing particles located in volume-occupying compartments (organelles or tissue spaces) or surface-occupying compartments (membranes) and expected distributions can be generated by the stereological devices of point, intersection and particle counting. Labelling efficiencies (number of gold particles per antigen molecule) in immunocytochemical studies can be determined if suitable calibration methods (e.g. biochemical assays of golds per membrane surface or per cell) are available. In addition to relative quantification for between-group and between-compartment comparisons, stereological methods also permit absolute quantification, e.g. total volumes, surfaces and numbers of structures per cell. Here, the utility, limitations and recent applications of these methods are reviewed.
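The RLI/chi-squared comparison described above can be sketched as follows. Compartment sizes are assumed to be estimated by stereological point counts, and the gold counts in the example are invented for illustration:

```python
def relative_labelling_index(observed_golds, compartment_points):
    """Per-compartment relative labelling index (RLI): the observed gold
    fraction divided by the fraction expected if golds distributed in
    proportion to compartment size, with size estimated by stereological
    point counts. RLI > 1 suggests preferential labelling. Also returns
    per-compartment chi-squared contributions, which identify where the
    main differences between distributions reside."""
    n_obs = sum(observed_golds)
    n_pts = sum(compartment_points)
    expected = [n_obs * p / n_pts for p in compartment_points]
    rli = [o / e for o, e in zip(observed_golds, expected)]
    chi2 = [(o - e) ** 2 / e for o, e in zip(observed_golds, expected)]
    return rli, chi2
```

For instance, 30 golds observed over a compartment occupying half the point count gives RLI = 1.5, and its chi-squared contribution flags it as the main driver of any overall significance.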
40 CFR 53.61 - Test conditions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... part of the equivalent method application. (e) Particle concentration measurements. All measurements of particle concentration must be made such that the relative error in measurement is less than 5.0 percent ... particle concentration detector, X is the measured concentration, and the units of s and X are identical ...
40 CFR 53.61 - Test conditions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... part of the equivalent method application. (e) Particle concentration measurements. All measurements of particle concentration must be made such that the relative error in measurement is less than 5.0 percent... particle concentration detector, X is the measured concentration, and the units of s and X are identical...
40 CFR 53.61 - Test conditions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... part of the equivalent method application. (e) Particle concentration measurements. All measurements of particle concentration must be made such that the relative error in measurement is less than 5.0 percent... particle concentration detector, X is the measured concentration, and the units of s and X are identical...
Tribological Technology. Volume II.
1982-09-01
rolling bearings, gears, and sliding bearings produce distinctive particles. An atlas of such particles is available. Atlases of characteristic ... Gravitational methods cover both sedimentation and elutriation techniques. Inertial-type separators perform cyclonic classification. Ferrography is the ... generated after each size exposure of contaminant. This can be done today using Ferrography. Standard contaminant sensitivity tests require test
Zhao, H; Stephens, B
2017-01-01
Much of human exposure to particulate matter of outdoor origin occurs inside buildings, particularly in residences. The particle penetration factor through leaks in a building's exterior enclosure assembly is a key parameter that governs the infiltration of outdoor particles. However, experimental data for size-resolved particle penetration factors in real buildings, as well as penetration factors for fine particles less than 2.5 μm (PM2.5) and ultrafine particles less than 100 nm (UFPs), remain limited, in part because of previous limitations in instrumentation and experimental methods. Here, we report on the development and application of a modified test method that utilizes portable particle sizing instrumentation to measure size-resolved infiltration factors and envelope penetration factors for 0.01-2.5 μm particles, which are then used to estimate penetration factors for integral measures of UFPs and PM2.5. Eleven replicate measurements were made in an unoccupied apartment unit in Chicago, IL to evaluate the accuracy and repeatability of the test procedure and solution methods. Mean estimates of size-resolved penetration factors ranged from 0.41 ± 0.14 to 0.73 ± 0.05 across the range of measured particle sizes, while mean estimates of penetration factors for integral measures of UFPs and PM2.5 were 0.67 ± 0.05 and 0.73 ± 0.05, respectively. Average relative uncertainties for all particle sizes/classes were less than 20%. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
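A simplified steady-state sketch of how a penetration factor can be backed out of an infiltration factor. The study itself solves a transient indoor mass balance per particle size bin; here `ach` (air-exchange rate) and `k_dep` (size-dependent deposition loss rate), both in 1/h, are the assumed inputs:

```python
def infiltration_factor(c_in, c_out):
    """Steady-state infiltration factor: indoor/outdoor concentration
    ratio, assuming no indoor particle sources."""
    return c_in / c_out

def penetration_factor(f_inf, ach, k_dep):
    """Invert the steady-state mass balance F_inf = P * a / (a + k) for
    the envelope penetration factor P, where a is the air-exchange rate
    and k the deposition loss rate (both 1/h). A simplified sketch of the
    size-resolved solution method."""
    return f_inf * (ach + k_dep) / ach
```

Because deposition competes with air exchange, a measured infiltration factor of 0.5 at a = 0.5/h and k = 0.25/h already implies an envelope penetration factor of 0.75.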
NASA Technical Reports Server (NTRS)
Knox, R. J.
1978-01-01
Embryonic kidney cells were studied as a follow-up to the MA-011 Electrophoresis Technology Experiment which was conducted during the Apollo Soyuz Test Project (ASTP). The postflight analysis of the performance of the ASTP zone electrophoresis experiment involving embryonic kidney cells is reported. The feasibility of producing standard particles for electrophoresis was also studied. This work was undertaken in response to a need for standardization of methods for producing, calibrating, and storing electrophoretic particle standards which could be employed in performance tests of various types of electrophoresis equipment. Promising procedures were tested for their suitability in the production of standard test particles from red blood cells.
Lagrangian particle method for compressible fluid dynamics
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-06-01
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
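The weighted-least-squares derivative approximation at the heart of contribution (a) can be illustrated in 1-D with a linear fit; the method itself uses higher-order polynomial fits in two and three dimensions, and the Gaussian weight of width `h` below is an assumed choice:

```python
import math

def wls_derivative(x0, xs, us, h):
    """Estimate du/dx at x0 from scattered neighbor particles by a
    weighted least-squares linear fit u ~ a + b*(x - x0), with Gaussian
    weights of width h. A 1-D sketch of the polynomial-fit operator
    approximation used by the method."""
    sw = swx = swxx = swu = swxu = 0.0
    for x, u in zip(xs, us):
        dx = x - x0
        w = math.exp(-(dx / h) ** 2)
        sw += w
        swx += w * dx
        swxx += w * dx * dx
        swu += w * u
        swxu += w * dx * u
    # solve the 2x2 normal equations for the slope b
    det = sw * swxx - swx * swx
    return (sw * swxu - swx * swu) / det
```

Because the fit reproduces polynomials up to its own degree exactly, a linear field yields the exact derivative for any (irregular) particle arrangement, which is the sense in which the operator converges at a prescribed order.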
Second order upwind Lagrangian particle method for Euler equations
Samulyak, Roman; Chen, Hsin -Chiang; Yu, Kwangmin
2016-06-01
A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long-term stability, and (c) accurate resolution of states at free interfaces. Finally, numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
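A minimal sketch of the coarse-grained localization idea: each observation reweights and resamples only the state variables in its own cluster. The actual filter adds particle adjustments to prevent collapse under high-quality observations; the Gaussian likelihood, multinomial resampling, and fixed seed below are assumed simplifications:

```python
import math
import random

def clustered_update(particles, obs, obs_var, clusters):
    """One localized analysis step. For each cluster of state indices,
    particles are weighted using only the observations inside that
    cluster, and the cluster's variables are resampled independently of
    other clusters. particles: list of state vectors; obs: dict mapping
    state index -> observed value; obs_var: observation noise variance."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    n = len(particles)
    new = [list(p) for p in particles]
    for cluster in clusters:
        # log-likelihood per particle from this cluster's observations only
        logw = [sum(-(p[i] - obs[i]) ** 2 / (2.0 * obs_var)
                    for i in cluster if i in obs)
                for p in particles]
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        tot = sum(w)
        w = [x / tot for x in w]
        # multinomial resampling of this cluster's variables only
        for p_new in new:
            src = random.choices(range(n), weights=w)[0]
            for i in cluster:
                p_new[i] = particles[src][i]
    return new
```

Observed variables collapse onto the particles consistent with the data, while unobserved clusters keep their prior spread, which is the localization behavior the clustered filter exploits.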
Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty
NASA Astrophysics Data System (ADS)
Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team
2017-11-01
A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represented the particle drag history, CD(t) , using polynomials up to the third order. An analytical model for continuous particle position history was derived by integrating an equation relating CD(t) with particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the sense of least squares. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration history compared to conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.
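The kinematic quantities extracted by the fit enter a quasi-steady drag balance. A sketch of the inversion for CD at a single instant, assuming a spherical particle (the method in the abstract instead fits the coefficients of a polynomial CD(t) through an integrated position-history model):

```python
import math

def drag_coefficient(m_p, d_p, rho_f, u_f, u_p, a_p):
    """Invert the quasi-steady drag law
        m_p * a_p = 0.5 * rho_f * CD * A * |u_rel| * u_rel
    for CD, given particle mass m_p, diameter d_p, fluid density rho_f,
    fluid and particle velocities u_f and u_p, and measured particle
    acceleration a_p. Assumes a sphere, so A = pi * d_p**2 / 4."""
    area = math.pi * d_p ** 2 / 4.0
    u_rel = u_f - u_p
    return 2.0 * m_p * a_p / (rho_f * area * abs(u_rel) * u_rel)
```

Applied pointwise along a fitted trajectory, this relation converts the velocity and acceleration histories into a drag-coefficient history.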
NASA Astrophysics Data System (ADS)
Iliescu, Ciprian; Tresset, Guillaume; Xu, Guolin
2007-06-01
This letter presents a dielectrophoretic (DEP) separation method of particles under continuous flow. The method consists of flowing two particle populations through a microfluidic channel, in which the vertical walls are the electrodes of the DEP device. The irregular shape of the electrodes generates both electric field and fluid velocity gradients. As a result, the particles that exhibit negative DEP can be trapped in the fluidic dead zones, while the particles that experience positive DEP are concentrated in the regions with high velocity and collected at the outlet. The device was tested with dead and living yeast cells.
Statistical analysis of particle trajectories in living cells
NASA Astrophysics Data System (ADS)
Briane, Vincent; Kervrann, Charles; Vimond, Myriam
2018-06-01
Recent advances in molecular biology and fluorescence microscopy imaging have made possible the inference of the dynamics of molecules in living cells. Such inference allows us to understand and determine the organization and function of the cell. The trajectories of particles (e.g., biomolecules) in living cells, computed with the help of object tracking methods, can be modeled with diffusion processes. Three types of diffusion are considered: (i) free diffusion, (ii) subdiffusion, and (iii) superdiffusion. The mean-square displacement (MSD) is generally used to discriminate the three types of particle dynamics. We propose here a nonparametric three-decision test as an alternative to the MSD method. The rejection of the null hypothesis, i.e., free diffusion, is accompanied by claims of the direction of the alternative (subdiffusion or superdiffusion). We study the asymptotic behavior of the test statistic under the null hypothesis and under parametric alternatives which are currently considered in the biophysics literature. In addition, we adapt the multiple-testing procedure of Benjamini and Hochberg to fit with the three-decision-test setting, in order to apply the test procedure to a collection of independent trajectories. The performance of our procedure is much better than the MSD method as confirmed by Monte Carlo experiments. The method is demonstrated on real data sets corresponding to protein dynamics observed in fluorescence microscopy.
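For contrast, the conventional MSD baseline that the proposed three-decision test is compared against can be sketched as a log-log slope classifier. The unit sampling interval and the thresholds on the anomalous exponent α are illustrative assumptions:

```python
import math

def msd(track, max_lag=None):
    """Time-averaged mean-square displacement of a 1-D track sampled at
    unit time intervals, for lags 1..max_lag."""
    n = len(track)
    max_lag = max_lag or n // 2
    out = []
    for lag in range(1, max_lag + 1):
        disps = [(track[i + lag] - track[i]) ** 2 for i in range(n - lag)]
        out.append(sum(disps) / len(disps))
    return out

def classify(track, lo=0.9, hi=1.1):
    """Classify motion by the log-log MSD slope alpha (MSD ~ t^alpha):
    alpha < lo -> 'sub', alpha > hi -> 'super', otherwise 'free'. This is
    the conventional MSD baseline, not the proposed test; the thresholds
    are illustrative."""
    m = msd(track)
    xs = [math.log(l) for l in range(1, len(m) + 1)]
    ys = [math.log(v) for v in m]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    if alpha < lo:
        return "sub"
    if alpha > hi:
        return "super"
    return "free"
```

A directed (ballistic) track has MSD growing as t², so its slope estimate lands well above the superdiffusion threshold; noisy short tracks are exactly where this estimator becomes unreliable and the hypothesis-test alternative pays off.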
Research on mining truck vibration control based on particle damping
NASA Astrophysics Data System (ADS)
Liming, Song; Wangqiang, Xiao; Zeguang, Li; Haiquan, Guo; Zhe, Yang
2018-03-01
Mining truck ride comfort has attracted increasing research attention. As the terminal of the vibration transfer path, the cab is one of the important targets of mining truck vibration control. In this paper, based on particle damping technology and its application characteristics, and through discrete element modeling, coupled DEM-FEM simulation and analysis, laboratory test verification, and actual testing on the truck, particle damping technology was successfully applied to the driver's seat base of a mining truck. Cab vibration was reduced markedly, and an applied research approach and method for particle damping technology in mining truck vibration control are provided.
Velaga, Sitaram P; Djuris, Jelena; Cvijic, Sandra; Rozou, Stavroula; Russo, Paola; Colombo, Gaia; Rossi, Alessandra
2018-02-15
In vitro dissolution testing is routinely used in the development of pharmaceutical products. Whilst dissolution testing methods are well established and standardized for oral dosage forms, i.e. tablets and capsules, there are no pharmacopoeial methods or regulatory requirements for testing the dissolution of orally inhaled powders. Despite this, a wide variety of dissolution testing methods for orally inhaled powders has been developed and their bio-relevance has been evaluated. This review provides an overview of the in vitro dissolution methodologies for dry inhalation products, with particular emphasis on dry powder inhalers, where the dissolution behavior of the respirable particles can influence the duration and absorption of the drug. Dissolution mechanisms of respirable particles, as well as kinetic models, are presented. More recent biorelevant dissolution set-ups and media for studying inhalation biopharmaceutics are also reviewed. In addition, factors affecting the interplay between dissolution and absorption of deposited particles are examined in the context of the biopharmaceutical considerations of inhalation products. Copyright © 2017 Elsevier B.V. All rights reserved.
Point-particle method to compute diffusion-limited cellular uptake.
Sozza, A; Piazza, F; Cencini, M; De Lillo, F; Boffetta, G
2018-02-01
We present an efficient point-particle approach to simulate reaction-diffusion processes of spherical absorbing particles in the diffusion-limited regime, as simple models of cellular uptake. The exact solution for a single absorber is used to calibrate the method, linking the numerical parameters to the physical particle radius and uptake rate. We study configurations of multiple absorbers of increasing complexity to examine the performance of the method, comparing our simulations with available exact analytical or numerical results. We demonstrate the potential of the method to resolve complex diffusive interactions, quantified here by the Sherwood number, which measures the uptake rate in terms of that of isolated absorbers. We implement the method in a pseudospectral solver that can be generalized to include fluid motion and fluid-particle interactions. As a test case in the presence of a flow, we consider the uptake rate of a particle in a linear shear flow. Overall, our method represents a powerful and flexible computational tool that can be employed to investigate many complex situations in biology, chemistry, and related sciences.
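Since the Sherwood number here simply normalizes a measured uptake rate by that of an isolated absorber, the bookkeeping can be sketched in a few lines. This is an illustrative sketch with arbitrary parameter values, not the authors' pseudospectral solver:

```python
import math

def smoluchowski_rate(D, R, c_inf):
    """Diffusion-limited uptake rate of an isolated spherical absorber of
    radius R in a concentration field c_inf: Phi_0 = 4*pi*D*R*c_inf."""
    return 4.0 * math.pi * D * R * c_inf

def sherwood(measured_rate, D, R, c_inf):
    """Sherwood number: measured uptake normalized by the isolated-absorber rate."""
    return measured_rate / smoluchowski_rate(D, R, c_inf)

# Illustrative numbers: an absorber whose uptake is enhanced 20% by its environment
D, R, c = 1e-9, 1e-6, 1.0          # m^2/s, m, mol/m^3 (arbitrary choices)
phi0 = smoluchowski_rate(D, R, c)
print(round(sherwood(1.2 * phi0, D, R, c), 3))  # 1.2
```

A Sherwood number above 1 thus signals uptake enhanced relative to pure diffusion (e.g. by flow), below 1 uptake suppressed by competing absorbers.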
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Shivarama, Ravishankar
2004-01-01
The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.
Sumire Kawamoto; James H. Muehl; R. Sam Williams
2005-01-01
Properties of particleboard manufactured entirely from recycled particleboard were tested. The method for processing three-layer particleboard from all-recycled particles is described. Dynamic MOE (modulus of elasticity) before and after re-manufacturing was tested by a longitudinal stress wave technique, and several stress wave techniques were compared. Nondestructive AU (...
NASA Astrophysics Data System (ADS)
Morovvati, M. R.; Lalehpour, A.; Esmaeilzare, A.
2016-12-01
Reinforcing aluminum with SiC and B4C nano/micro particles can lead to a more efficient material in terms of strength and light weight. The influence of adding these particles to an aluminum 7075 matrix is investigated using the chevron-notch fracture toughness test method. The reinforcing factors are the type, size (micro/nano), and weight percent of the particles. The fracture parameters are the maximum load, the notch opening displacement, the work to fracture, and the chevron-notch plane-strain fracture toughness. The findings demonstrate that the addition of micro- and nano-sized particles improves the fracture properties; however, increasing the weight percent of the particles improves them only up to a certain level, beyond which, due to agglomeration of the particles, no further improvement occurs for either particle type or size category. Agglomeration at higher amounts of reinforcing particles results in improper distribution of the particles and a reduction in mechanical properties.
NASA Astrophysics Data System (ADS)
Bagheri, G.; Bonadonna, C.; Manzella, I.; Pontelandolfo, P.; Haas, P.
2012-12-01
A complete understanding and parameterization of both particle sedimentation and particle aggregation require systematic and detailed laboratory investigations performed under controlled conditions. For this purpose, a dedicated 4-meter-high vertical wind tunnel has been designed and constructed at the University of Geneva in collaboration with the Groupe de compétence en mécanique des fluides et procédés énergétiques (CMEFE). The final design is the result of computational fluid dynamics simulations combined with laboratory tests. With its diverging test section, the tunnel is designed to suspend particles of different shapes and sizes in order to study the aerodynamic behavior of volcanic particles and their collision and aggregation. In the current set-up, velocities between 5.0 and 27 m s^-1 can be obtained, which correspond to typical volcanic particles with diameters between 10 and 40 mm. A combination of Particle Tracking Velocimetry (PTV) and statistical methods is used to derive particle terminal velocity. The method is validated using smooth spherical particles with known drag coefficient. More than 120 particles of different shapes (i.e. spherical, regular and volcanic) and compositions have been 3D-scanned, and almost 1 million images of their suspension in the test section of the wind tunnel have been recorded by a high-speed camera and analyzed by a PTV code specially developed for the wind tunnel. Measured terminal velocities for the tested particles are between 3.6 and 24.9 m s^-1, which corresponds to Reynolds numbers between 8×10^3 and 1×10^5. In addition to the vertical wind tunnel, an apparatus with height varying between 0.5 and 3.5 m has been built to measure the terminal velocity of micrometric particles at Reynolds numbers between 4 and 100. In these experiments, particles are released individually in air at the top of the apparatus, and their terminal velocities are measured at the bottom by a combination of high-speed camera imaging and PTV post-processing. 
The effects of shape, porosity and orientation of the particles on their terminal velocity are studied. Various shape factors are measured by different methods, such as 3D scanning, 2D image processing, SEM image analysis, caliper measurements, and pycnometer and buoyancy tests. Our preliminary experiments on non-smooth spherical particles and irregular particles reveal some interesting aspects. First, the effect of surface roughness and porosity is more important for spherical particles than for regular non-spherical and irregular particles. Second, the results underline how the aerodynamic behavior of individual irregular particles is better characterized by a range of drag coefficients than by a single value. Finally, since all the shape factors are calculated precisely for each individual particle, the resulting database can provide important information to benchmark and improve existing terminal-velocity models. Modifications of the wind tunnel (very low air speed, 0.03-5.0 m s^-1, for suspension of micrometric particles) and of the PTV code (multiple-particle tracking and collision counting) have also been made, in combination with the installation of a particle charging device, a controlled humidifier and a high-power chiller (reaching values down to -20 °C), in order to investigate both wet and dry aggregation of volcanic particles.
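For orientation, the velocity and Reynolds-number ranges quoted above follow from the standard drag-weight balance. The sketch below uses assumed round numbers for air properties and drag coefficient; it is illustrative, not the tunnel's calibration:

```python
import math

def terminal_velocity(d, rho_p, rho_f=1.2, Cd=1.0, g=9.81):
    """Terminal fall velocity from the drag-weight balance
    v_t = sqrt(4 g d (rho_p - rho_f) / (3 rho_f Cd)).
    Air density 1.2 kg/m^3 and Cd = 1.0 are assumed values."""
    return math.sqrt(4.0 * g * d * (rho_p - rho_f) / (3.0 * rho_f * Cd))

def reynolds(v, d, rho_f=1.2, mu=1.8e-5):
    """Particle Reynolds number Re = rho_f * v * d / mu."""
    return rho_f * v * d / mu

# A 10 mm particle of assumed density 1000 kg/m^3 lands inside the quoted ranges
v = terminal_velocity(0.01, 1000.0)
print(round(v, 1), round(reynolds(v, 0.01)))  # roughly 10 m/s, Re of order 10^3-10^4
```

In practice Cd itself depends on Re and shape, which is exactly why the tunnel measures terminal velocity directly rather than assuming a drag law.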
Scaling effects in direct shear tests
Orlando, A.D.; Hanes, D.M.; Shen, H.H.
2009-01-01
Laboratory experiments of the direct shear test were performed on spherical particles of different materials and diameters. Results of the bulk friction vs. non-dimensional shear displacement are presented as a function of the non-dimensional particle diameter. Simulations of the direct shear test were performed using the Discrete Element Method (DEM). The simulation results show considerable differences from the physical experiments. Particle-level material properties, such as the coefficients of static friction, restitution and rolling friction, need to be known a priori in order to guarantee that the simulation results are an accurate representation of the physical phenomenon. Furthermore, the laboratory results show a clear size dependency, with smaller particles having a higher bulk friction than larger ones. © 2009 American Institute of Physics.
Ion and impurity transport in turbulent, anisotropic magnetic fields
NASA Astrophysics Data System (ADS)
Negrea, M.; Petrisor, I.; Isliker, H.; Vogiannou, A.; Vlahos, L.; Weyssow, B.
2011-08-01
We investigate ion and impurity transport in turbulent, possibly anisotropic, magnetic fields. The turbulent magnetic field is modeled as a correlated stochastic field, with Gaussian distribution function and prescribed spatial auto-correlation function, superimposed onto a strong background field. The (running) diffusion coefficients of ions are determined in the three-dimensional environment using two alternative methods: the semi-analytical decorrelation trajectory (DCT) method and test-particle simulations. In a first step, the results of the test-particle simulations are compared with, and used to validate, the results obtained from the DCT method. For this purpose, a drift approximation was made in slab geometry, and relatively good qualitative agreement between the DCT method and the test-particle simulations was found. In a second step, the ion species He, Be, Ne and W, all assumed to be fully ionized, are considered under ITER-like conditions, and the scaling of their diffusivities is determined with respect to varying levels of turbulence (varying Kubo number), varying degrees of anisotropy of the turbulent structures, and atomic number. In a third step, the test-particle simulations are repeated without the drift approximation, directly using the Lorentz force, first in slab geometry, in order to assess finite-Larmor-radius effects, and second in toroidal geometry, to account for geometric effects. It is found that both effects are important, most prominently those due to toroidal geometry, and that the diffusivities are overestimated in slab geometry by an order of magnitude.
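In the simplest test-particle view, the "running" diffusion coefficient mentioned above is the ensemble mean-squared displacement divided by 2t. A minimal 1-D sketch of that estimator, with synthetic Brownian trajectories standing in for the actual gyro-center orbits:

```python
import numpy as np

def running_diffusion(traj, dt):
    """Running diffusion coefficient D(t) = <(x(t) - x(0))^2> / (2 t)
    from an ensemble of 1-D test-particle trajectories.
    traj: array of shape (n_particles, n_steps)."""
    msd = ((traj - traj[:, :1]) ** 2).mean(axis=0)   # ensemble-averaged MSD
    t = dt * np.arange(traj.shape[1])
    with np.errstate(divide="ignore", invalid="ignore"):
        D = msd / (2.0 * t)
    D[0] = 0.0                                       # D(0) is undefined; set to 0
    return t, D

# Synthetic Brownian trajectories with known diffusivity D_true = 0.5:
# the running estimate should plateau near that value.
rng = np.random.default_rng(0)
D_true, dt = 0.5, 0.01
increments = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=(4000, 500))
traj = np.cumsum(increments, axis=1)
t, D = running_diffusion(traj, dt)
print(round(float(D[-1]), 2))
```

For turbulence-driven transport the interest is precisely in where and whether D(t) plateaus; anomalous (sub- or super-diffusive) regimes show up as a drifting rather than flat running coefficient.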
The local strength of individual alumina particles
NASA Astrophysics Data System (ADS)
Pejchal, Václav; Fornabaio, Marta; Žagar, Goran; Mortensen, Andreas
2017-12-01
We implement the C-shaped sample test method and micro-cantilever beam testing to measure the local strength of microscopic, low-aspect-ratio ceramic particles, namely high-purity vapor grown α-alumina Sumicorundum® particles 15-30 μm in diameter, known to be attractive reinforcing particles for aluminum. Individual particles are shaped by focused ion beam micromachining so as to probe in tension a portion of the particle surface that is left unaffected by ion-milling. Mechanical testing of C-shaped specimens is done ex-situ using a nanoindentation apparatus, and in the SEM using an in-situ nanomechanical testing system for micro-cantilever beams. The strength is evaluated for each individual specimen using bespoke finite element simulation. Results show that, provided the particle surface is free of readily observable defects such as pores, twins or grain boundaries and their associated grooves, the particles can achieve local strength values that approach those of high-perfection single-crystal alumina whiskers, on the order of 10 GPa, outperforming high-strength nanocrystalline alumina fibers and nano-thick alumina platelets used in bio-inspired composites. It is also shown that by far the most harmful defects are grain boundaries, leading to the general conclusion that alumina particles must be single-crystalline or alternatively nanocrystalline to fully develop their potential as a strong reinforcing phase in composite materials.
Testing approximate predictions of displacements of cosmological dark matter halos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munari, Emiliano; Monaco, Pierluigi; Borgani, Stefano
We present a test to quantify how well some approximate methods, designed to reproduce the mildly non-linear evolution of perturbations, are able to reproduce the clustering of DM halos once the grouping of particles into halos is defined and kept fixed. The following methods have been considered: Lagrangian Perturbation Theory (LPT) up to third order, Truncated LPT, Augmented LPT, MUSCLE and COLA. The test runs as follows: halos are defined by applying a friends-of-friends (FoF) halo finder to the output of an N-body simulation. The approximate methods are then applied to the same initial conditions of the simulation, producing, for all particles, displacements from their starting positions and velocities. The position and velocity of each halo are computed by averaging over the particles that belong to that halo, according to the FoF halo finder. This procedure allows us to perform a well-posed test of how clustering of the matter density and halo density fields is recovered, without asking the approximate method for an accurate reconstruction of halos. We have considered the results at z = 0, 0.5, 1, and we have analysed the power spectrum in real and redshift space, object-by-object differences in position and velocity, the density Probability Distribution Function (PDF) and its moments, and the phase difference of Fourier modes. We find that higher LPT orders are generally able to better reproduce the clustering of halos, while little or no improvement is found for the matter density field when going to 2LPT and 3LPT. Augmentation provides some improvement when coupled with 2LPT, while its effect is limited when coupled with 3LPT. Little improvement is brought by MUSCLE with respect to Augmentation. The more expensive particle-mesh code COLA outperforms all LPT methods, and this is true even for mesh sizes as large as the inter-particle distance. 
This test sets an upper limit on the ability of these methods to reproduce the clustering of halos, for the cases when these objects are reconstructed at the object-by-object level.
Testing approximate predictions of displacements of cosmological dark matter halos
NASA Astrophysics Data System (ADS)
Munari, Emiliano; Monaco, Pierluigi; Koda, Jun; Kitaura, Francisco-Shu; Sefusatti, Emiliano; Borgani, Stefano
2017-07-01
We present a test to quantify how well some approximate methods, designed to reproduce the mildly non-linear evolution of perturbations, are able to reproduce the clustering of DM halos once the grouping of particles into halos is defined and kept fixed. The following methods have been considered: Lagrangian Perturbation Theory (LPT) up to third order, Truncated LPT, Augmented LPT, MUSCLE and COLA. The test runs as follows: halos are defined by applying a friends-of-friends (FoF) halo finder to the output of an N-body simulation. The approximate methods are then applied to the same initial conditions of the simulation, producing, for all particles, displacements from their starting positions and velocities. The position and velocity of each halo are computed by averaging over the particles that belong to that halo, according to the FoF halo finder. This procedure allows us to perform a well-posed test of how clustering of the matter density and halo density fields is recovered, without asking the approximate method for an accurate reconstruction of halos. We have considered the results at z = 0, 0.5, 1, and we have analysed the power spectrum in real and redshift space, object-by-object differences in position and velocity, the density Probability Distribution Function (PDF) and its moments, and the phase difference of Fourier modes. We find that higher LPT orders are generally able to better reproduce the clustering of halos, while little or no improvement is found for the matter density field when going to 2LPT and 3LPT. Augmentation provides some improvement when coupled with 2LPT, while its effect is limited when coupled with 3LPT. Little improvement is brought by MUSCLE with respect to Augmentation. The more expensive particle-mesh code COLA outperforms all LPT methods, and this is true even for mesh sizes as large as the inter-particle distance. 
This test sets an upper limit on the ability of these methods to reproduce the clustering of halos, for the cases when these objects are reconstructed at the object-by-object level.
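The halo position/velocity averaging step described above is straightforward to express. A NumPy sketch with toy data (hypothetical particle positions and FoF labels, not simulation output):

```python
import numpy as np

def halo_bulk(pos, vel, halo_id):
    """Average particle positions and velocities over each FoF halo,
    keeping the particle-to-halo assignment fixed.
    pos, vel: (n_particles, 3); halo_id: (n_particles,) integer labels."""
    ids, inv, counts = np.unique(halo_id, return_inverse=True, return_counts=True)
    centers = np.zeros((ids.size, 3))
    bulk_v = np.zeros((ids.size, 3))
    np.add.at(centers, inv, pos)   # unbuffered scatter-add of members per halo
    np.add.at(bulk_v, inv, vel)
    return ids, centers / counts[:, None], bulk_v / counts[:, None]

# Toy data: particles 0 and 1 belong to halo 7, particle 2 to halo 9
pos = np.array([[0.0, 0, 0], [2.0, 0, 0], [10.0, 0, 0]])
vel = np.array([[1.0, 0, 0], [3.0, 0, 0], [5.0, 0, 0]])
ids, centers, bulk_v = halo_bulk(pos, vel, np.array([7, 7, 9]))
print(ids.tolist(), centers[:, 0].tolist())  # [7, 9] [1.0, 10.0]
```

Note that `np.add.at` is used rather than fancy-indexed `+=`, since the latter would silently drop repeated indices within a halo.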
Nanoparticle generation and interactions with surfaces in vacuum systems
NASA Astrophysics Data System (ADS)
Khopkar, Yashdeep
Extreme ultraviolet lithography (EUVL) is the most likely candidate as the next generation technology beyond immersion lithography to be used in high volume manufacturing in the semiconductor industry. One of the most problematic areas in the development process is the fabrication of mask blanks used in EUVL. As the masks are reflective, there is a chance that any surface aberrations in the form of bumps or pits could be printed on the silicon wafers. There is a strict tolerance to the number density of such defects on the mask that can be used in the final printing process. Bumps on the surface could be formed when particles land on the mask blank surface during the deposition of multiple bi-layers of molybdenum and silicon. To identify, and possibly mitigate the source of particles during mask fabrication, SEMATECH investigated particle generation in the VEECO Nexus deposition tool. They found several sources of particles inside the tool such as valves. To quantify the particle generation from vacuum components, a test bench suitable for evaluating particle generation in the sub-100 nm particle size range was needed. The Nanoparticle test bench at SUNY Polytechnic Institute was developed as a sub-set of the overall SEMATECH suite of metrology tools used to identify and quantify sources of particles inside process tools that utilize these components in the semiconductor industry. Vacuum valves were tested using the test bench to investigate the number, size and possible sources of particles inside the valves. Ideal parameters of valve operation were also investigated using a 300-mm slit valve with the end goal of finding optimized parameters for minimum particle generation. SEMATECH also pursued the development of theoretical models of particle transport replicating the expected conditions in an ion beam deposition chamber assuming that the particles were generated. 
In the case of the ion beam deposition tool used in the mask blank fabrication process, the ion beam in the tool could significantly accelerate particles. Assuming that these particles are transported to various surfaces inside the deposition tool, the next challenge is to enhance the adhesion of the particles on surfaces that are located in the non-critical areas inside the tool. However, for particles in the sub-100 nm size range, suitable methods do not exist that can compare the adhesion probability of particles upon impact for a wide range of impact velocities, surfaces and particle types. Traditional methods, which rely on optical measurement of particle velocities in the micron-size regime, cannot be used for sub-100 nm particles as the particles do not scatter sufficient light for the detectors to function. All the current methods rely on electrical measurements taken from impacting particles onto a surface. However, for sub-100 nm particles, the impact velocity varies in different regions of the same impaction spot. Therefore, electrical measurements are inadequate to quantify the exact adhesion characteristics at different impact velocities to enable a comparison of multiple particle-surface systems. Therefore, we propose a new method based on the use of scanning electron microscopy (SEM) imaging to study the adhesion of particles upon impact on surfaces. The use of SEM imaging allows for single particle detection across a single impaction spot and, therefore, enables the comparison of different regions with different impact velocities in a single impaction spot. The proposed method will provide comprehensive correlation between the adhesion probability of sub-100 nm particles and a wide range of impact velocities and angles. 
The location of each particle is compared with the impact velocity predicted by computational fluid dynamics methods to generate a comprehensive adhesion map for the impact of 70 nm particles on a polished surface across a large impact-velocity range. The final adhesion probability map shows higher adhesion at oblique impact angles compared to normal-incidence impacts. Theory and experiments with micron-sized particles have shown that the contact area between the particle and the surface decreases at lower incidence angles, which results in a decrease in the adhesion probability of the particle. The most likely cause of this result was the role of plastic deformation of particles and its effect on adhesion. Therefore, 70 nm sucrose particles were also impacted under similar impaction conditions to compare the role of plastic deformation in the adhesion characteristics of a particle. Sucrose particles have a modulus of elasticity approximately 10 times that of polystyrene latex (PSL) particles and were found to have almost no adhesion to the surface at the same impact velocities at which the highest adhesion of PSL particles was measured. Besides the role of plastic deformation, the influence of other possible errors in this process was investigated but not found to be significant. (Abstract shortened by UMI.).
Fast detection of air contaminants using immunobiological methods
NASA Astrophysics Data System (ADS)
Schmitt, Katrin; Bolwien, Carsten; Sulz, Gerd; Koch, Wolfgang; Dunkhorst, Wilhelm; Lödding, Hubert; Schwarz, Katharina; Holländer, Andreas; Klockenbring, Torsten; Barth, Stefan; Seidel, Björn; Hofbauer, Wolfgang; Rennebarth, Torsten; Renzl, Anna
2009-05-01
The fast and direct identification of possibly pathogenic microorganisms in air is gaining increasing interest due to their threat to public health, e.g. in clinical environments or in the clean rooms of the food or pharmaceutical industries. We present a new detection method allowing the direct recognition of relevant germs or bacteria via fluorescence-labeled antibodies within less than one hour. In detail, an air-sampling unit passes particles in the relevant size range to a substrate which contains fluorescence-labeled antibodies for the detection of a specific microorganism. After removal of the excess antibodies, the optical detection unit, comprising reflected-light and epifluorescence microscopy, can identify the microorganisms by fast image processing at the single-particle level. First measurements with the system to identify various test particles, as well as interfering influences, have been performed, in particular with respect to the autofluorescence of dust particles. Specific antibodies for the detection of Aspergillus fumigatus spores have been established. The biological test system consists of protein A-coated polymer particles which are detected by a fluorescence-labeled IgG. Furthermore, the influence of interfering particles such as dust or debris is discussed.
Budimir, Stjepan; Setälä, Outi; Lehtiniemi, Maiju
2018-02-01
Although the presence of microplastics in marine biota has been widely recorded, extraction methods, method validation and approaches to monitoring are not standardized. In this study a method for microplastic extraction from fish guts based on a chemical alkaline digestion is presented. The average particle retrieval rate from spiked fish guts, used for method validation, was 84%. The weight and shape of the test particles (PET, PC, HD-PE) were also analysed with no noticeable changes in any particle shapes and only minor weight change in PET (2.63%). Microplastics were found in 1.8% of herrings (n=164) and in 0.9% of sprat (n=154). None of the three-spined sticklebacks (n=355) contained microplastic particles. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ultrafine particle emission characteristics of diesel engine by on-board and test bench measurement.
Huang, Cheng; Lou, Diming; Hu, Zhiyuan; Tan, Piqiang; Yao, Di; Hu, Wei; Li, Peng; Ren, Jin; Chen, Changhong
2012-01-01
This study investigated the emission characteristics of ultrafine particles based on test bench and on-board measurements. The bench test results showed the ultrafine particle number concentration of the diesel engine to be in the range of (0.56-8.35) × 10^8 cm^-3. The on-board measurement results illustrated that the ultrafine particles were strongly correlated with changes in real-world driving cycles. The particle number concentration was down to 2.0 × 10^6 cm^-3 and 2.7 × 10^7 cm^-3 under decelerating and idling operations and as high as 5.0 × 10^8 cm^-3 under accelerating operation. It was also indicated that the particle number measured by the two methods increased with the growth of engine load at each engine speed in both cases. The particle number presented a "U" shaped distribution with changing speed at high engine load conditions, which implies that the particle number will reach its lowest level at medium engine speeds. The particle sizes of both measurements showed single mode distributions. The peak of particle size was located at about 50-80 nm in the accumulation mode particle range. Nucleation mode particles will significantly increase at low engine load operations like idling and decelerating caused by the high concentration of unburned organic compounds.
Comparison of different methods used in integral codes to model coagulation of aerosols
NASA Astrophysics Data System (ADS)
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
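A useful sanity check for any of the compared coagulation schemes is the constant-kernel case, where the total number density obeys N(t) = N0/(1 + K·N0·t/2) exactly. The following is a deliberately simple explicit-Euler discretization for illustration, far cruder than the sectional methods used in the cited codes:

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit-Euler step of the discrete Smoluchowski coagulation
    equation with a constant kernel K; n[k] is the number density of
    (k+1)-mer particles."""
    dn = np.zeros_like(n)
    for i in range(len(n)):
        for j in range(len(n)):
            rate = K * n[i] * n[j]
            dn[i] -= rate                     # an i-mer is consumed
            if i + j + 1 < len(n):
                dn[i + j + 1] += 0.5 * rate   # an (i+j)-merger is formed
    return n + dt * dn

# Monodisperse start; the total number density should follow
# N(t) = N0 / (1 + K*N0*t/2), i.e. N(1) = 2/3 for K = N0 = 1.
n = np.zeros(30)
n[0] = 1.0
K, dt, steps = 1.0, 0.0025, 400               # integrate to t = 1
for _ in range(steps):
    n = smoluchowski_step(n, K, dt)
print(round(float(n.sum()), 2))               # analytic value: 2/3
```

Truncating the size spectrum (here at 30-mers) is the same "coarse spectrum" issue discussed above: mass flowing past the largest bin is simply lost, so bin count and spacing control the error.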
Single scattering from nonspherical Chebyshev particles: A compendium of calculations
NASA Technical Reports Server (NTRS)
Wiscombe, W. J.; Mugnai, A.
1986-01-01
A large set of exact calculations of the scattering from a class of nonspherical particles known as 'Chebyshev particles' has been performed. Phase function and degree of polarization in random orientation, and parallel and perpendicular intensities in fixed orientations, are plotted for a variety of particle shapes and sizes. The intention is to furnish a database against which both experimental data and the predictions of approximate methods can be tested. The calculations are performed with the widely used Extended Boundary Condition Method. An extensive discussion of this method is given, including much material that is not easily available elsewhere (especially the analysis of its convergence properties). An extensive review is also given of all extant methods for nonspherical scattering calculations, as well as of the available pool of experimental data.
Attempt to form hydride and amorphous particles, and introduction of a new evaporation method
NASA Astrophysics Data System (ADS)
Yatsuya, S.; Yamauchi, K.; Kamakura, T.; Yanagida, A.; Wakayama, H.; Mihama, K.
1985-06-01
Al and TiH2 particles of fcc structure can be produced in an atmosphere of gaseous H2 at reduced pressure. Al particles with a definite habit are obtained, which has never been observed in the ordinary gas evaporation technique using an HV system. The habit of TiH2 particles grown in the intermediate zone of the smoke is determined to be a dodecahedron. The growth is considered to be the result of the martensite transformation from the initially formed bcc structure to the fcc structure, accompanied by a slight modification of the characteristic habit, as observed for Ti particles. For the preparation of amorphous particles, the quenching rate of a particle, dT/dt, was first estimated to be more than 10^4 °C/s. Ultrafine particles of Pd80Si20, chosen as a test sample, did not show an amorphous structure but a crystalline one. Application of the sputtering method as a new evaporation source in the gas evaporation technique is attempted. With the sputtering method, W particles with definite habits are produced.
2008-07-01
EPA emission standards, the EPA has also specified the measurement methods. According to EPA, the most accurate and precise method of determining ... function of particle size and refractive index. If particle size distributions and refractive indices in diesel exhaust strongly depend on the ... to correct the bias of the raw SFTM data and align the data with the values determined by the federal reference method. Thus, to use these methods a
Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.
Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C
2017-06-01
An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly, a single threshold defines the level at which a pixel in the image is included in the detection of particles, but depending on the threshold level, this results either in missing translucent particles or in oversizing less transparent particles, due to the halos and gradients in intensity near the particle boundaries. We have developed an image-analysis algorithm that sets the threshold for a particle based on the maximum gray value of the particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software. The method is tested on calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and the improved boundaries will allow more accurate sizing results and potentially improved particle-classification studies. Published by Elsevier Inc.
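The idea of a variable threshold, setting each particle's binarization level from its own peak gray value, can be illustrated on a toy image. This is a sketch of the concept only, not the published FIJI plugin; the seed level and fraction are arbitrary choices:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labeling via BFS (a stand-in for a library call)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

def variable_threshold(img, seed_level, frac=0.6):
    """Detect candidate particles with a loose seed threshold, then
    re-threshold each one at frac * its own peak gray value: tight
    boundaries for high-contrast particles, while faint translucent
    particles are still kept."""
    labels, n = label_regions(img > seed_level)
    out = np.zeros(img.shape, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        peak = img[region].max()
        out |= region & (img >= frac * peak)
    return out

# One bright particle (peak 200) and one faint particle (peak 40)
img = np.array([[  0, 120, 200,   0,   0],
                [  0,   0,   0,   0,  40],
                [  0,   0,   0,  35,  38]])
mask = variable_threshold(img, seed_level=20)
print(int(mask.sum()))  # 5: the faint particle keeps all 3 pixels, the bright one 2
```

With a single global threshold, either the faint particle would vanish (high threshold) or the bright one would keep its halo (low threshold); per-particle thresholds avoid both failure modes.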
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagan, Daniel; Nakar, Ehud; Piran, Tsvi, E-mail: daniel.kagan@mail.huji.ac.il
The maximum synchrotron burnoff limit of 160 MeV represents a fundamental limit to radiation resulting from electromagnetic particle acceleration in one-zone ideal plasmas. In magnetic reconnection, however, particle acceleration and radiation are decoupled because the electric field is larger than the magnetic field in the diffusion region. We carry out two-dimensional particle-in-cell simulations to determine the extent to which magnetic reconnection can produce synchrotron radiation above the burnoff limit. We use the test particle comparison (TPC) method to isolate the effects of cooling by comparing the trajectories and acceleration efficiencies of test particles incident on such a reconnection region with and without cooling them. We find that the cooled and uncooled particle trajectories are typically similar during acceleration in the reconnection region, and derive an effective limit on particle acceleration that is inversely proportional to the average magnetic field experienced by the particle during acceleration. Using the calculated distribution of this average magnetic field as a function of uncooled final particle energy, we find analytically that cooling does not affect power-law particle energy spectra except at energies far above the synchrotron burnoff limit. Finally, we compare fully cooled and uncooled simulations of reconnection, confirming that the synchrotron burnoff limit does not produce a cutoff in the particle energy spectrum. Our results indicate that the TPC method accurately predicts the effects of cooling on particle acceleration in relativistic reconnection, and that, even far above the burnoff limit, the synchrotron energy of radiation produced in reconnection is not limited by cooling.
NASA Astrophysics Data System (ADS)
Luo, D. M.; Xie, Y.; Su, X. R.; Zhou, Y. L.
2018-01-01
Based on the four classical models of Mooney-Rivlin (M-R), Yeoh, Ogden and Neo-Hookean (N-H), a strain energy constitutive equation with large deformation for rubber composites reinforced with random ceramic particles is proposed in this paper from the perspective of continuum mechanics. By decoupling the interaction between the matrix and the random particles, the strain energy of each phase is obtained to derive an explicit constitutive equation for the rubber composites. The results of uni-axial tensile, pure shear and equal bi-axial tensile tests are simulated by the non-linear finite element method on the ANSYS platform. The results from the finite element method are compared with those from experiment, the material parameters are determined by fitting the results from different test conditions, and the influence of the radius of the random ceramic particles on the effective mechanical properties is analyzed.
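For context, the uniaxial response of the two-parameter Mooney-Rivlin form W = C10(I1 - 3) + C01(I2 - 3) for an incompressible material reduces to the closed-form nominal stress P = 2(lambda - lambda^-2)(C10 + C01/lambda). A minimal sketch; the material constants used below are illustrative, not fitted values from this study:

```python
def mooney_rivlin_uniaxial(lam, c10, c01):
    """Nominal (engineering) uniaxial stress of an incompressible Mooney-Rivlin solid.

    From W = C10*(I1 - 3) + C01*(I2 - 3), the stress at stretch lam is
    P = 2*(lam - lam**-2)*(C10 + C01/lam).
    """
    return 2.0 * (lam - lam**-2) * (c10 + c01 / lam)
```

At lam = 1 (undeformed) the stress vanishes, as it must for any admissible strain energy function.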
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
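The core modification, replacing random acceleration coefficients with fitness-derived ones, can be sketched on a toy objective as below. The specific coefficient formula here is an illustrative assumption, not the exact AAPSO scheme:

```python
import numpy as np

def aapso_minimize(f, dim=2, n_particles=20, iters=300, seed=0):
    """PSO with fitness-adaptive acceleration coefficients (no random r1, r2).

    Each particle's cognitive (c1) and social (c2) coefficients are set from
    its relative fitness: well-performing particles trust their own memory,
    poorly performing ones are pulled harder toward the swarm best.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    fit = np.array([f(p) for p in x])
    pbest, pbest_f = x.copy(), fit.copy()
    g = pbest[pbest_f.argmin()].copy()
    w = 0.6  # inertia weight
    for _ in range(iters):
        rel = (fit - fit.min()) / (np.ptp(fit) + 1e-12)  # 0 = best, 1 = worst
        c1 = (2.0 - 1.5 * rel)[:, None]  # good particles favor their own memory
        c2 = (0.4 + 1.5 * rel)[:, None]  # poor particles follow the swarm best
        v = w * v + c1 * (pbest - x) + c2 * (g - x)
        x = x + v
        fit = np.array([f(p) for p in x])
        improved = fit < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fit[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# usage: minimize the 2-D sphere function
best_x, best_val = aapso_minimize(lambda p: float(np.sum(p * p)))
```

In a real AAPSO-SVM pipeline the objective would be cross-validated SVM accuracy as a function of the kernel parameters rather than this toy function.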
Reducing the anisotropy of a Brazilian disc generated in a bonded-particle model
NASA Astrophysics Data System (ADS)
Zhang, Q.; Zhang, X. P.; Ji, P. Q.
2018-03-01
The Brazilian test is a widely used method for determining the tensile strength of rocks and for calibrating parameters in bonded-particle models (BPMs). In previous studies, the Brazilian disc has typically been trimmed from a compacted rectangular specimen. The present study shows that different tensile strength values are obtained depending on the compressive loading direction. Several measures are proposed to reduce the anisotropy of the disc. The results reveal that the anisotropy of the disc is significantly influenced by the compactibility of the specimen from which it is trimmed. A new method is proposed in which the Brazilian disc is directly generated with a particle boundary, effectively reducing the anisotropy. The stiffness (particle and bond) and strength (bond) of the boundary are set at less than and greater than those of the disc assembly, respectively, which significantly decreases the stress concentration at the boundary contacts and prevents breakage of the boundary particle bonds. This leads to a significant reduction in the anisotropy of the disc and the discreteness of the tensile strength. This method is more suitable for carrying out a realistic Brazilian test for homogeneous rock-like material in the BPM.
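For reference, the tensile strength inferred from a Brazilian test follows the standard formula sigma_t = 2P/(pi*D*t), where P is the peak compressive load, D the disc diameter, and t its thickness (a textbook relation, not specific to this study):

```python
import math

def brazilian_tensile_strength(peak_load, diameter, thickness):
    """Indirect tensile strength sigma_t = 2P / (pi * D * t).

    peak_load in N, diameter and thickness in m; returns Pa.
    """
    return 2.0 * peak_load / (math.pi * diameter * thickness)

# usage: a 50 mm diameter, 25 mm thick disc failing at 10 kN
sigma_t = brazilian_tensile_strength(10e3, 0.050, 0.025)
```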
Gerber, H
1986-01-01
In the official method for rodent filth in corn meal, filth and corn meal are separated in organic solvents, and particles are identified by the presence of hair and a mucous coating. The solvents are toxic, poor separation yields low recoveries, and fecal characteristics are rarely present on all fragments, especially on small particles. The official AOAC alkaline phosphatase test for mammalian feces, 44.181-44.184, has therefore been adapted to determine the presence of mammalian feces in corn meal. The enzyme cleaves phosphate radicals from a test indicator/substrate, phenolphthalein diphosphate. As free phenolphthalein accumulates, a pink-to-red color develops in the gelled test agar medium. In a collaborative study conducted to compare the proposed method with the official method for corn meal, 44.049, the proposed method yielded 45.5% higher recoveries than the official method. Repeatability and reproducibility for the official method were roughly 1.8 times more variable than for the proposed method. The method has been adopted official first action.
Lagrangian particle method for compressible fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-02-09
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface / multi-phase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented, as well as examples of complex multiphase flows.
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using test benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
Coupled SPH-FV method with net vorticity and mass transfer
NASA Astrophysics Data System (ADS)
Chiron, L.; Marrone, S.; Di Mascio, A.; Le Touzé, D.
2018-07-01
Recently, an algorithm for coupling a Finite Volume (FV) method, which discretizes the Navier-Stokes equations on block-structured Eulerian grids, with the weakly-compressible Lagrangian Smoothed Particle Hydrodynamics (SPH) was presented in [16]. The algorithm takes advantage of the SPH method to discretize flow regions close to free surfaces and of the FV method to resolve the bulk flow and the wall regions. The continuity between the two solutions is guaranteed by overlapping zones. Here we extend the algorithm by adding the possibility to have: 1) net mass transfer between the SPH and FV sub-domains; 2) a free surface across the overlapping region. In this context, particle generation at common boundaries is required to prevent depletion or clustering of particles. This operation is not trivial, because consistency between the Lagrangian and Eulerian descriptions of the flow must be retained to ensure mass conservation. We propose here a new coupling paradigm that extends the algorithm developed in [16] and renders it suitable for test cases where vorticity and the free surface significantly pass from one domain to the other. On the SPH side, a novel technique for the creation/deletion of particles was developed. On the FV side, the information recovered from the SPH solver is exploited to improve free-surface prediction in a fashion that resembles Particle Level-Set algorithms. The combination of the two new features was tested and validated in a number of test cases where both vorticity and front evolution are important. Convergence and robustness of the algorithm are shown.
Zhao, Hong; Kang, Xu-liang; Chen, Xuan-li; Wang, Jie-xin; Le, Yuan; Shen, Zhi-gang; Chen, Jian-feng
2009-01-01
In vitro and in vivo antibacterial activities against Staphylococcus aureus and Escherichia coli of amorphous cefuroxime axetil (CFA) ultrafine particles prepared by the HGAP method were investigated in this paper. Conventional sprayed CFA particles were studied as the control group. XRD, SEM, and BET tests were performed to investigate the morphology changes of the samples before and after sterilization. The in vitro dissolution test, minimal inhibitory concentrations (MIC) and an in vivo experiment on mice were explored. The results demonstrated that: (i) the structure, morphology and amorphous form of the particles could be affected during the steam sterilization process; (ii) CFA particles with different morphologies showed varied antibacterial activities; and (iii) the in vitro and in vivo antibacterial activities of the ultrafine particles prepared by HGAP were markedly stronger than those of the conventional sprayed amorphous particles.
Dynamic radioactive particle source
Moore, Murray E; Gauss, Adam Benjamin; Justus, Alan Lawrence
2012-06-26
A method and apparatus for providing a timed, synchronized dynamic alpha or beta particle source for testing the response of continuous air monitors (CAMs) for airborne alpha or beta emitters is provided. The method includes providing a radioactive source; placing the radioactive source inside the detection volume of a CAM; and introducing an alpha or beta-emitting isotope while the CAM is in a normal functioning mode.
Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method
Lu, Zhaolin
2017-01-01
Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using mechanical sieving approach. Considering the great variation of particle size from micrometer to millimeter, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscopy (SEM) with high-image resolution. Actual imaging tests confirmed the excellent effect of backscattered electron (BSE) imaging mode of SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, the singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable to analyse the particle size and shape distributions of ground biomass materials and solve the size inconsistencies in sieving analysis. PMID:28298925
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
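The conversion underlying the method follows from Stokes' law: a particle settling at terminal velocity v_s has aerodynamic diameter d_ae = sqrt(18*mu*v_s / (rho0*g)). A minimal sketch; the Cunningham slip correction is neglected here, which is only an approximation for particles below a few microns:

```python
import math

MU_AIR = 1.81e-5   # Pa*s, dynamic viscosity of air at ~20 C
RHO0 = 1000.0      # kg/m^3, unit-density convention for aerodynamic diameter
G = 9.81           # m/s^2

def aerodynamic_diameter(settling_velocity):
    """Aerodynamic diameter (m) from a measured Stokes settling velocity (m/s).

    Inverts v_s = rho0 * g * d**2 / (18 * mu); slip correction is neglected,
    so the result is approximate for sub-micron particles.
    """
    return math.sqrt(18.0 * MU_AIR * settling_velocity / (RHO0 * G))
```

A 10 um unit-density sphere settles at roughly 3 mm/s in still air, which is readily resolvable by particle tracking at video frame rates.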
Investigation of methods to produce a uniform cloud of fuel particles in a flame tube
NASA Technical Reports Server (NTRS)
Siegert, Clifford E.; Pla, Frederic G.; Rubinstein, Robert; Niezgoda, Thomas F.; Burns, Robert J.; Johnson, Jerome A.
1990-01-01
The combustion of a uniform, quiescent cloud of 30-micron fuel particles in a flame tube was proposed as a space-based, low-gravity experiment. The subject is the normal- and low-gravity testing of several methods to produce such a cloud, including telescoping propeller fans, air pumps, axial and quadrature acoustical speakers, and combinations of these devices. When operated in steady state, none of the methods produced an acceptably uniform cloud (±5 percent of the mean concentration), and voids in the cloud were clearly visible. In some cases, severe particle agglomeration was observed; however, these clusters could be broken apart by a short acoustic burst from an axially in-line speaker. Analyses and experiments reported elsewhere suggest that transient, acoustic mixing methods can enhance cloud uniformity while minimizing particle agglomeration.
Particle dispersing system and method for testing semiconductor manufacturing equipment
Chandrachood, Madhavi; Ghanayem, Steve G.; Cantwell, Nancy; Rader, Daniel J.; Geller, Anthony S.
1998-01-01
The system and method prepare a gas stream comprising particles at a known concentration using a particle disperser for moving particles from a reservoir of particles into a stream of flowing carrier gas. The electrostatic charges on the particles entrained in the carrier gas are then neutralized or otherwise altered, and the resulting particle-laden gas stream is then diluted to provide an acceptable particle concentration. The diluted gas stream is then split into a calibration stream and the desired output stream. The particles in the calibration stream are detected to provide an indication of the actual size distribution and concentration of particles in the output stream that is supplied to a process chamber being analyzed. Particles flowing out of the process chamber within a vacuum pumping system are detected, and the output particle size distribution and concentration are compared with the particle size distribution and concentration of the calibration stream in order to determine the particle transport characteristics of a process chamber, or to determine the number of particles lodged in the process chamber as a function of manufacturing process parameters such as pressure, flowrate, temperature, process chamber geometry, particle size, particle charge, and gas composition.
Potthoff, Annegret; Oelschlägel, Kathrin; Schmitt-Jansen, Mechthild; Rummel, Christoph Daniel; Kühnel, Dana
2017-05-01
The presence of microplastic (MP) in the aquatic environment is recognized as a global-scale pollution issue. Secondary MP particles result from an ongoing fragmentation process governed by various biotic and abiotic factors. For a reliable risk assessment of these MP particles, knowledge about interactions with biota is needed. However, extensive testing with standard organisms under reproducible laboratory conditions with well-characterized MP suspensions is not available yet. As MP in the environment represents a mixture of particles differing in properties (e.g., size, color, polymer type, surface characteristics), it is likely that only specific particle fractions pose a threat towards organisms. In order to assign hazardous effects to specific particle properties, these characteristics need to be analyzed. As shown by the testing of particles (e.g. nanoparticles), characteristics other than chemical properties are important for the emergence of toxicity in organisms, and parameters such as surface area or size distribution need consideration. Therefore, the use of "well-defined" particles for ecotoxicological testing (i.e., standard particles) facilitates the establishment of causal links between physical-chemical properties of MP particles and toxic effects in organisms. However, the benefits of well-defined particles under laboratory conditions are offset by the disadvantage of the unknown comparability with MP in the environment. Therefore, weathering effects caused by biological, chemical, physical or mechanical processes have to be considered. To date, the characterization of the progression of MP weathering based on powder and suspension characterization methods is in its infancy. 
The aim of this commentary is to illustrate the prerequisites for testing MP in the laboratory from 3 perspectives: (i) knowledge of particle properties; (ii) behavior of MP in test setups involving ecotoxicological test organisms; and (iii) accordingly, test conditions that may need adjustment. Only under those prerequisites will reliable hazard assessment of MP be feasible. Integr Environ Assess Manag 2017;13:500-504. © 2017 SETAC.
Methods for roof-top mini-arrays
NASA Astrophysics Data System (ADS)
Hazen, W. E.; Hazen, E. S.
1985-08-01
To test the idea of the Linsley effect mini array for the study of giant air showers, it is desirable to have a trigger that exploits the effect itself. In addition to the trigger, it is necessary to have a method for measuring the relative arrival times of the particle swarm selected by the trigger. Since the idea of mini arrays is likely to appeal to small research groups, it is desirable to design relatively simple and inexpensive methods, and methods that utilize existing detectors. Clusters of small detectors have been designed for operation in the local particle density realm where the probability of ≥2 particles per detector is small. Consequently, this method can discriminate pulses from each detector and thenceforth deal mainly with logic pulses.
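The single-particle operating regime mentioned above can be checked with Poisson statistics: with a mean of lambda particles per detector (local density times detector area), P(≥2) = 1 - e^(-lambda)*(1 + lambda). A quick sketch:

```python
import math

def prob_two_or_more(mean_per_detector):
    """Poisson probability of >= 2 particles landing in one detector.

    P(>=2) = 1 - exp(-lam) * (1 + lam), with lam the mean count per detector.
    """
    lam = mean_per_detector
    return 1.0 - math.exp(-lam) * (1.0 + lam)
```

At a mean of 0.1 particles per detector, fewer than 0.5% of detectors see two or more particles, so pulse-height discrimination per detector remains unambiguous.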
Lundberg, Karin; Wu, Lindsey; Papia, Evaggelia
2017-01-01
Objective: The aim of the study was to make an inventory of the current literature on the bond strength between zirconia and veneering porcelain after surface treatment of zirconia by grinding with a diamond bur and/or by airborne-particle abrasion. Material and methods: The literature search for the present review was performed following recommended guidelines and acknowledged methodology for conducting a systematic review. The electronic databases PubMed, Cochrane Library, and Science Direct were used in the present study. Results: Twelve studies were selected. Test methods used in the original studies included the shear bond strength (SBS) test, the tensile bond strength test, and the micro-tensile bond strength test. The majority of studies used SBS. Results showed a large variation within each surface treatment of zirconia, using different grain sizes, blasting times, and pressures. Conclusions: Airborne-particle abrasion might improve the bond strength and can therefore be considered a feasible surface treatment for zirconia that is to be bonded. Grinding has been recommended as a surface treatment for zirconia to improve the bond strength; however, this recommendation cannot be verified. A standardized test method and surface treatment are required to be able to compare the results from different studies and draw further conclusions. PMID:28642927
The influence of particle size and curing conditions on testing mineral trioxide aggregate cement.
Ha, William Nguyen; Kahler, Bill; Walsh, Laurence James
2016-12-01
Objectives: To assess the effects of curing conditions (dry versus submerged curing) and particle size on the compressive strength (CS) and flexural strength (FS) of set MTA cement. Materials and methods: Two different Portland cements were created, P1 and P2, with P1 < P2 in particle size. These were then used to create two experimental MTA products, M1 and M2, with M1 < M2 in particle size. Particle size analysis was performed according to ISO 13320. The particle size at the 90th percentile (i.e. the larger particles) was P1: 15.2 μm, P2: 29.1 μm, M1: 16.5 μm, and M2: 37.1 μm. M2 was cured exposed to air, or submerged in fluids of pH 5.0, 7.2 (PBS), or 7.5 for 1 week. CS and FS of the set cement were determined using modified ISO 9917-1 and ISO 4049 methods, respectively. P1, P2, M1 and M2 were cured in PBS at physiological pH (7.2) and likewise tested for CS and FS. Results: Curing under dry conditions gave a significantly lower CS than curing in PBS. There was a trend toward lower FS for dry versus wet curing, but this did not reach statistical significance. Cements with smaller particle sizes showed greater CS and FS at 1 day than those with larger particle sizes; however, this advantage was lost over the following 1-3 weeks. Conclusions: Experiments that test the properties of MTA should cure the MTA under wet conditions and at physiological pH. PMID:28642923
Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence
NASA Astrophysics Data System (ADS)
Lynn, Jacob William
We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. 
Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-powerlaw tail.
Determination of silica coating efficiency on metal particles using multiple digestion methods.
Wang, Jun; Topham, Nathan; Wu, Chang-Yu
2011-10-15
Nano-sized metal particles, including both elemental and oxidized metals, have received significant interest due to their biotoxicity and presence in a wide range of industrial systems. A novel silica technology has recently been explored to minimize the biotoxicity of metal particles by encapsulating them with an amorphous silica shell. In this study, a method to determine the silica coating efficiency on metal particles was developed. Metal particles with silica coating were generated using a gas metal arc welding (GMAW) process with a silica precursor, tetramethylsilane (TMS), added to the shielding gas. Microwave digestion and Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES) were employed to solubilize the metal content in the particles and analyze the concentration, respectively. Three acid mixtures were tested to identify the appropriate digestion method targeting the metals and the silica coating. Metal recovery efficiencies of the different digestion methods were compared through analysis of spiked samples. The HNO3/HF mixture was found to be the more aggressive digestion method for metal particles with silica coating. Aqua regia was able to effectively dissolve metal particles not trapped in the silica shell. Silica coating efficiencies were thus calculated based on the measured concentrations following digestion by the HNO3/HF mixture and by aqua regia. The results showed 14-39% of welding fume particles were encapsulated in silica coating under various conditions. This newly developed method could also be used to examine the silica coverage on particles of silica-shell/metal-core structure in other nanotechnology areas. Copyright © 2011 Elsevier B.V. All rights reserved.
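One consistent way to combine the two digestion results into a coating efficiency, given that HNO3/HF dissolves all metal while aqua regia reaches only metal outside the silica shell, is the encapsulated fraction (C_total - C_uncoated)/C_total. This formula is an inference from the description above, not a quoted equation from the study:

```python
def silica_coating_efficiency(c_total_hno3_hf, c_uncoated_aqua_regia):
    """Fraction of metal encapsulated in silica, from two digestion results.

    c_total_hno3_hf: metal concentration after aggressive HNO3/HF digestion
        (dissolves all metal, coated or not)
    c_uncoated_aqua_regia: metal concentration after aqua regia digestion
        (dissolves only metal not trapped in the silica shell)
    """
    return (c_total_hno3_hf - c_uncoated_aqua_regia) / c_total_hno3_hf
```

For example, if HNO3/HF recovers 100 ppm of iron but aqua regia only 75 ppm, 25% of the metal is inferred to be silica-encapsulated.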
NASA Astrophysics Data System (ADS)
Ditscherlein, L.; Peuker, U. A.
2017-04-01
For the application of colloidal probe atomic force microscopy at high temperatures (>500 K), stable colloidal probe cantilevers are essential. In this study, two new methods for gluing alumina particles onto temperature-stable cantilevers are presented and compared with an existing method for borosilicate particles at elevated temperatures, as well as with cp-cantilevers prepared with epoxy resin at room temperature. The durability of the particle fixation is quantified with a test method applying high shear forces. The force is calculated with a mechanical model considering both the bending and the torsion of the colloidal probe.
Development of a particle method of characteristics (PMOC) for one-dimensional shock waves
NASA Astrophysics Data System (ADS)
Hwang, Y.-H.
2018-03-01
In the present study, a particle method of characteristics is put forward to simulate the evolution of one-dimensional shock waves in barotropic gaseous, closed-conduit, open-channel, and two-phase flows. All these flow phenomena can be described with the same set of governing equations. The proposed scheme is established based on the characteristic equations and formulated by assigning the computational particles to move along the characteristic curves. Both the right- and left-running characteristics are traced and represented by their associated computational particles. It inherits the computational merits of the conventional method of characteristics (MOC) and the moving particle method, but without their individual deficiencies. In addition, special particles with dual states, derived from enforcement of the Rankine-Hugoniot relation, are deliberately imposed to emulate the shock structure. Numerical tests are carried out by solving some benchmark problems, and the computational results are compared with available analytical solutions. From the derivation procedure and the obtained computational results, it is concluded that the proposed PMOC will be a useful tool for replicating one-dimensional shock waves.
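The characteristic-tracing idea can be illustrated on the simplest hyperbolic case, linear advection u_t + a*u_x = 0, where computational particles move along dx/dt = a and carry their state unchanged. This is a toy analogue of the PMOC, without the dual-state shock particles or nonlinear characteristics:

```python
import numpy as np

def advect_by_characteristics(x0, u0, a, t):
    """Trace particles along the characteristics dx/dt = a of u_t + a*u_x = 0.

    Each computational particle carries its state u unchanged, so the exact
    solution u(x, t) = u0(x - a*t) is recovered by pure particle motion.
    """
    return x0 + a * t, u0.copy()

# usage: a Gaussian profile advected at speed a = 1 for time t = 2
x0 = np.linspace(-5.0, 5.0, 201)
u0 = np.exp(-x0**2)
x_t, u_t = advect_by_characteristics(x0, u0, a=1.0, t=2.0)
```

In the full PMOC both characteristic families carry state, and the solution at a point is reconstructed from the intersection of a right- and a left-running particle.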
Better Than Counting: Density Profiles from Force Sampling
NASA Astrophysics Data System (ADS)
de las Heras, Daniel; Schmidt, Matthias
2018-05-01
Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting particle occurrences at (histogram-resolved) points in space. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal-gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, therefore reducing the computation time.
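A minimal sketch of the force-sampling idea, under assumed simplifications (a single overdamped Brownian particle in a 1D harmonic trap with kT = k = 1, Euler-Maruyama stepping); this is not the authors' code, but it shows how the density can be recovered by integrating the sampled force density via the sum rule kT dρ/dx = f(x), alongside the standard counting histogram:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, nsteps = 0.01, 500_000
lo_edge, hi_edge, nbins = -4.0, 4.0, 80
dx = (hi_edge - lo_edge) / nbins
count_hist = np.zeros(nbins)   # standard counting estimator
force_hist = np.zeros(nbins)   # accumulated force per bin

x = 0.0
noise = np.sqrt(2.0 * dt) * rng.standard_normal(nsteps)
for step in range(nsteps):
    x += -x * dt + noise[step]              # Euler-Maruyama, F(x) = -x
    i = int(np.floor((x - lo_edge) / dx))
    if 0 <= i < nbins:
        count_hist[i] += 1.0
        force_hist[i] += -x                 # sample the local force

rho_count = count_hist / (nsteps * dx)
f_density = force_hist / (nsteps * dx)      # mean force density f(x)
# Sum rule with kT = 1: d(rho)/dx = f(x), so integrate from the left.
rho_force = np.cumsum(f_density) * dx
rho_force /= rho_force.sum() * dx           # renormalize
```

For this trap the exact density is a standard Gaussian; the integrated force-density profile tracks it while smoothing over the bin-occupancy shot noise of the counting estimator.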
Shape classification of wear particles by image boundary analysis using machine learning algorithms
NASA Astrophysics Data System (ADS)
Yuan, Wei; Chin, K. S.; Hua, Meng; Dong, Guangneng; Wang, Chunhui
2016-05-01
The shape features of wear particles generated from a wear track usually contain plenty of information about the wear state of a machine's operational condition. Techniques to quickly identify the types of wear particles, in order to respond to the machine operation and prolong the machine's life, appear to be lacking and are yet to be established. To bridge rapid off-line feature recognition with on-line wear mode identification, this paper presents a new radial concave deviation (RCD) method that mainly involves the use of the particle boundary signal to analyze wear particle features. Signal output from the RCDs subsequently facilitates the determination of several other feature parameters, typically relevant to the shape and size of the wear particle. Debris feature and type are identified through the use of various classification methods, such as linear discriminant analysis, quadratic discriminant analysis, the naïve Bayesian method, and the classification and regression tree (CART) method. The average training and test errors from ten-fold cross-validation suggest that CART is a highly suitable approach for classifying and analyzing particle features. Furthermore, the results of the wear debris analysis enable the maintenance team to diagnose faults appropriately.
NASA Astrophysics Data System (ADS)
Ching, Eric; Lv, Yu; Ihme, Matthias
2017-11-01
Recent interest in human-scale missions to Mars has sparked active research into high-fidelity simulations of reentry flows. A key feature of the Mars atmosphere is the high level of suspended dust particles, which can not only enhance erosion of thermal protection systems but also transfer energy and momentum to the shock layer, increasing surface heat fluxes. Second-order finite-volume schemes are typically employed for hypersonic flow simulations, but such schemes suffer from a number of limitations. An attractive alternative is discontinuous Galerkin methods, which benefit from arbitrarily high spatial order of accuracy, geometric flexibility, and other advantages. As such, a Lagrangian particle method is developed in a discontinuous Galerkin framework to enable the computation of particle-laden hypersonic flows. Two-way coupling between the carrier and disperse phases is considered, and an efficient particle search algorithm compatible with unstructured curved meshes is proposed. In addition, variable thermodynamic properties are considered to accommodate high-temperature gases. The performance of the particle method is demonstrated in several test cases, with focus on the accurate prediction of particle trajectories and heating augmentation. Financial support from a Stanford Graduate Fellowship and the NASA Early Career Faculty program is gratefully acknowledged.
Electrical Resistivity Measurement of Petroleum Coke Powder by Means of Four-Probe Method
NASA Astrophysics Data System (ADS)
Rouget, G.; Majidi, B.; Picard, D.; Gauvin, G.; Ziegler, D.; Mashreghi, J.; Alamdari, H.
2017-10-01
Carbon anodes used in Hall-Héroult electrolysis cells are involved in both electrical and chemical processes of the cell. Electrical resistivity of anodes depends on the electrical properties of its constituents, of which carbon coke aggregates are the most prevalent. Electrical resistivity of coke aggregates is usually characterized according to the ISO 10143 standardized test method, which consists of measuring the voltage drop in the bed of particles between two electrically conducting plungers through which the current is also applied. Estimation of the electrical resistivity of coke particles from the resistivity of the particle bed is a challenging task and needs consideration of the contribution of the interparticle void fraction and the particle/particle contact resistances. In this work, the bed resistivity was normalized by subtracting the interparticle void fraction. Then, the contact size was obtained from discrete element method simulation and the contact resistance was calculated using Holm's theory. Finally, the resistivity of the coke particles was obtained from the bed resistivity.
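For the contact-resistance step, Holm's constriction resistance for a circular contact of radius a between two bodies of equal resistivity ρ is R = ρ/(2a), each side contributing ρ/(4a). A small helper sketch (the coke resistivity value in the usage line is illustrative, not taken from the paper):

```python
def holm_contact_resistance(resistivity, contact_radius):
    """Holm constriction resistance R = rho / (2 a) for a circular
    contact of radius a between two bodies of equal resistivity rho
    (each semi-infinite side contributes rho / (4 a))."""
    return resistivity / (2.0 * contact_radius)

# Illustrative values: resistivity 5e-4 ohm*m, 10-micron contact radius.
r = holm_contact_resistance(5e-4, 10e-6)  # 25 ohm
```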
Attrition of fluid cracking catalyst in fluidized beds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boerefijn, R.; Ghadiri, M.
1996-12-31
Particle attrition in fluid catalytic cracking units causes loss of catalyst, which can amount to a few tonnes per day. The dependence of attrition on the process conditions and catalyst properties is therefore of great industrial interest, but it is not well established at present. The process of attrition in the jetting area of fluidised beds is addressed and the attrition test method of Forsythe & Hertwig is analysed in this paper. This method is commonly used to assess the attrition propensity of FCC powder, whereby the attrition rate in a single jet at very high orifice velocity (300 m/s) is measured. There has been some concern about the relevance of this method to attrition in FCC units. Therefore, a previously developed model of attrition in the jetting region is employed in an attempt to establish a solid basis for interpretation of the Forsythe & Hertwig test and its application as an industrial standard test. The model consists of two parts. The first part predicts the solids flow patterns in the jet region, simulating numerically the Forsythe & Hertwig test. The second part models the breakage of single particles upon impact. Combining these two models, thus linking single-particle mechanical properties to macroscopic flow phenomena, results in the modelling of the attrition rate of particles entrained into a single high-speed jet. High-speed video recordings are made of a single jet in a two-dimensional fluidised bed, at up to 40,500 frames per second, in order to quantify some of the model parameters. Digital analysis of the video images yields values for particle velocities and entrainment rates in the jet, which can be compared to model predictions. 15 refs., 8 figs.
Preliminary investigation of a water-based method for fast integrating mobility spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, Steven R.; Hering, Susanne V.; Kuang, Chongai
2017-06-06
A water-based condensational growth channel was developed for imaging mobility-separated particles within a parallel plate separation channel of the Fast Integrated Mobility Spectrometer (FIMS). Reported are initial tests of that system, in which the alcohol condenser of the FIMS was replaced by a water-based condensational growth channel. Tests with monodispersed sodium chloride aerosol verify that the water-condensational growth maintained the laminar flow, while providing sufficient growth for particle imaging. Particle positions mapped onto particle mobility, in accordance with theoretical expectations. Particles ranging in size from 12 nm to 100 nm were counted with the same efficiency as with a butanol-based ultrafine particle counter, once inlet and line losses were taken into account.
NASA Astrophysics Data System (ADS)
Savin, Andrei V.; Smirnov, Petr G.
2018-05-01
Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. The finite-difference scheme is found to be non-conservative, depending on the time step, which is equivalent to the appearance of a purely numerical energy source during collisions. A compensation method for this source is proposed and tested.
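The time-step dependence of such a numerical energy error can be seen with a few lines of velocity Verlet applied to a harmonic oscillator (an illustrative stand-in for the collisional contact model, not the authors' setup): the energy error is bounded and shrinks roughly fourfold when the step is halved, consistent with an O(dt²) artificial source.

```python
def velocity_verlet_energy_drift(dt, steps=10_000):
    """Integrate a unit harmonic oscillator (m = k = 1) with velocity
    Verlet and return the maximum relative energy error observed."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + x * x)
    a = -x                      # force/mass for the harmonic oscillator
    worst = 0.0
    for _ in range(steps):
        v_half = v + 0.5 * dt * a
        x += dt * v_half
        a = -x
        v = v_half + 0.5 * dt * a
        e = 0.5 * (v * v + x * x)
        worst = max(worst, abs(e - e0) / e0)
    return worst

# Halving the time step should cut the bounded energy oscillation
# roughly fourfold.
err_coarse = velocity_verlet_energy_drift(0.1)
err_fine = velocity_verlet_energy_drift(0.05)
```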
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaglioni, S.; Beck, B. R.
The Monte Carlo All Particle Method generator and collision physics library features two models for allowing a particle to either up- or down-scatter due to collisions with material at finite temperature. The two models are presented and compared. Neutron interaction with matter through elastic collisions is used as a test case.
Cleaning of nanopillar templates for nanoparticle collection using PDMS
NASA Astrophysics Data System (ADS)
Merzsch, S.; Wasisto, H. S.; Waag, A.; Kirsch, I.; Uhde, E.; Salthammer, T.; Peiner, E.
2011-05-01
Nanoparticles are easily attracted by surfaces. This sticking behavior makes it difficult to clean contaminated samples. Some complex approaches have already shown efficiencies in the range of 90%. However, a simple and cost efficient method was still missing. A commonly used silicone for soft lithography, PDMS, is able to mold a given surface. This property was used to cover surface-bonded particles from all other sides. After hardening the PDMS, particles are still embedded. A separation of silicone and sample disjoins also the particles from the surface. After this procedure, samples are clean again. This method was first tested with carbon particles on Si surfaces and Si pillar samples with aspect ratios up to 10. Experiments were done using 2 inch wafers, which, however, is not a size limitation for this method.
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
Characterization of third-body media particles and their effect on in vitro composite wear
Lawson, Nathaniel C.; Cakir, Deniz; Beck, Preston; Litaker, Mark S.; Burgess, John O.
2012-01-01
Objectives: The purpose of this study was to compare four media particles currently used for in vitro composite wear testing (glass and PMMA beads and millet and poppy seeds). Methods: Particles were prepared as described in previous wear studies. Hardness of the media particles was measured with a nano-indenter, particle size was measured with a particle size analyzer, and particle form was determined with light microscopy and image analysis software. Composite wear was measured using each type of medium and water in the Alabama wear testing device. Four dental composites were compared: a hybrid (Z100), a flowable microhybrid (Estelite Flow Quick), a micromatrix (Esthet-X), and a nano-filled composite (Filtek Supreme Plus). The test ran for 100,000 cycles at 1.2 Hz with a 70 N force applied by a steel antagonist. Volumetric wear was measured by non-contact profilometry. A two-way analysis of variance (ANOVA) and Tukey's test were used to compare both materials and media. Results: Hardness values (GPa) of the particles are (glass, millet, PMMA, poppy, respectively): 1.310 (0.150), 0.279 (0.170), 0.279 (0.095), and 0.226 (0.146). Average particle sizes (μm) are (glass, millet, PMMA, poppy, respectively): 88.35 (8.24), 8.07 (4.05), 28.95 (8.74), and 14.08 (7.20). Glass and PMMA beads were considerably more round than the seeds. During composite wear testing, glass was the only medium that produced more wear than water alone. The rank ordering of the materials varied with each medium; however, the glass and PMMA bead media allowed better discrimination between materials. Significance: PMMA beads are a practical and relevant choice for composite wear testing because they demonstrate physical properties similar to the seeds but reduce the variability of wear measurements. PMID:22578990
In Situ Solid Particle Generator
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.
2013-01-01
Particle seeding is a key diagnostic component of filter testing and flow imaging techniques. Typical particle generators rely on pressurized air or gas sources to propel the particles into the flow field. Other techniques involve liquid droplet atomizers. These conventional techniques have drawbacks, including difficult access to the flow field and flow and pressure disturbances to the investigated flow, and they are prohibitive in high-temperature, non-standard, extreme, and closed-system flow conditions and environments. In this concept, the particles are supplied directly within a flow environment. A particle sample cartridge containing the particles is positioned somewhere inside the flow field. The particles are ejected into the flow by mechanical brush/wiper feeding and sieving that takes place within the cartridge chamber. Some aspects of this concept are based on established material handling techniques, but they have not been used previously in the current configuration, in combination with flow seeding concepts, and in the current operational mode. Unlike other particle generation methods, this concept has control over the particle size range ejected, breaks up agglomerates, and is gravity-independent. This makes this device useful for testing in microgravity environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H.T.; Bachalo, W.D.
1984-10-01
The feasibility of developing a particle-sizing instrument for in-situ measurements in industrial environments, based on the method of optical heterodyne or coherent detection, was investigated. The instrument, a coherent optical particle spectrometer, or COPS, is potentially capable of measuring several important particle parameters, such as particle size, number density, and speed, because of the versatility of the optical heterodyne method. Water droplets generated by an aerosol/particle generator were used to test the performance of the COPS. Study findings have shown that the optical setup of the COPS is extremely sensitive to even minute mechanical or acoustic vibrations. At the optimal setup, the COPS performs satisfactorily and has a more than adequate signal-to-noise ratio even with a 0.5 mW He-Ne laser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. A prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space, and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
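A simplified sketch of a t-test style stopping rule in the spirit of the abstract (not the authors' implementation): batch means of a toy Gaussian "dose deposition" are accumulated, and the simulation stops once the confidence half-width falls below a tolerance. Using the normal quantile z ≈ 1.96 in place of the exact Student t, and the function and parameter names, are assumptions of this sketch.

```python
import math
import random

def run_until_converged(sample, tol, z=1.96, min_batches=30, max_batches=100_000):
    """Accumulate i.i.d. batch means (Welford's algorithm) and stop once
    the approximate confidence half-width z * s / sqrt(n) drops below
    `tol`. Returns (mean, half_width, n)."""
    n, mean, m2 = 0, 0.0, 0.0
    half = float("inf")
    while n < max_batches:
        x = sample()
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
        if n >= min_batches:
            s = math.sqrt(m2 / (n - 1))
            half = z * s / math.sqrt(n)
            if half < tol:
                break
    return mean, half, n

random.seed(1)
# Toy stand-in for per-voxel dose deposition: mean 2.0, noise std 0.5.
mean, half, n = run_until_converged(lambda: random.gauss(2.0, 0.5), tol=0.01)
```

By the central limit theorem the batch mean is approximately normal, so the half-width z·s/√n estimates the dose error at the chosen confidence level, which is the quantity the stopping criterion bounds.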
Low-pressure membrane integrity tests for drinking water treatment: A review.
Guo, H; Wyart, Y; Perot, J; Nauleau, F; Moulin, P
2010-01-01
Low-pressure membrane systems, including microfiltration (MF) and ultrafiltration (UF) membranes, are increasingly used in drinking water treatment due to their high level of pathogen removal. However, pathogens will pass through the membrane and contaminate the product if the membrane integrity is compromised. Therefore, an effective on-line integrity monitoring method for MF and UF membrane systems is essential to guarantee the regulatory requirements for pathogen removal. Many studies of low-pressure membrane integrity tests have been conducted. This paper provides a literature review of different low-pressure membrane integrity monitoring methods for drinking water treatment, including direct methods (pressure-based tests, acoustic sensor test, liquid porosimetry, etc.) and indirect methods (particle counting, particle monitoring, turbidity monitoring, surrogate challenge tests). Additionally, some information about the operation of membrane integrity tests is presented. This review shows that it remains urgent to develop an alternative on-line detection technique for a quick, accurate, simple, continuous and relatively inexpensive evaluation of low-pressure membrane integrity. To better satisfy regulatory requirements for drinking water treatment, the characteristics of this ideal membrane integrity test are proposed at the end of this paper.
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
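The importance-sampling idea can be illustrated on a toy problem: estimating a Gaussian tail probability by sampling from a shifted proposal and reweighting each sample by the likelihood ratio. This is a generic sketch of the technique, not the paper's Fokker–Planck scheme; all names and values are illustrative.

```python
import math
import random

def tail_prob_naive(n, a, rng):
    """Plain Monte Carlo estimate of P(X > a) for X ~ N(0, 1)."""
    return sum(rng.gauss(0.0, 1.0) > a for _ in range(n)) / n

def tail_prob_importance(n, a, rng):
    """Importance-sampled estimate: draw from the shifted proposal
    N(a, 1) and reweight by the likelihood ratio p(y)/q(y)."""
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)
        if y > a:
            total += math.exp(-0.5 * y * y) / math.exp(-0.5 * (y - a) ** 2)
    return total / n

# For a = 3, the exact tail probability is about 1.35e-3; the weighted
# estimator reaches a given accuracy with far fewer samples.
est = tail_prob_importance(20_000, 3.0, random.Random(0))
```

Half the proposal samples land in the rare region, so the weighted estimator's variance is much smaller than that of the naive indicator average, mirroring how importance sampling tames noise in low-speed flow estimates.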
NASA Astrophysics Data System (ADS)
Capecelatro, Jesse
2018-03-01
It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods when applied to convection dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smoothed particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
Localization of a variational particle smoother
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Hodyss, D.; Poterjoy, J.
2017-12-01
Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight localization. We test varPS on the Lorenz'96 model of dimensions n=40, n=400, and n=2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms the EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.
Quantifying Particle Numbers and Mass Flux in Drifting Snow
NASA Astrophysics Data System (ADS)
Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael
2016-12-01
We compare two of the most common methods of quantifying mass flux, particle numbers, and particle-size distribution for drifting snow events: the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. For the SPC measurements, the device was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and before each new test the image pixel length is newly calibrated. A calibration study with artificially scattered sand particles and glass beads provides suitable settings for the shadowgraphic imaging as well as a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass fluxes obtained in the calibration studies (r ≥ 0.93) and good correlation for the drifting snow experiments (r ≥ 0.81). The impact of measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.
Ice Particle Impact on Cloud Water Content Instrumentation
NASA Technical Reports Server (NTRS)
Emery, Edward F.; Miller, Dean R.; Plaskon, Stephen R.; Strapp, Walter; Lillie, Lyle
2004-01-01
Determining the total amount of water contained in an icing cloud necessitates the measurement of both the liquid droplets and ice particles. One commonly accepted method for measuring cloud water content utilizes a hot-wire sensing element, which is maintained at a constant temperature. In this approach, the cloud water content is equated with the power required to keep the sensing element at a constant temperature. This method inherently assumes that impinging cloud particles remain on the sensing element surface long enough to be evaporated. In the case of ice particles, this assumption requires that the particles do not bounce off the surface after impact. Recent tests aimed at characterizing ice particle impact on a thermally heated wing section have raised questions about the validity of this assumption. Ice particles were observed to bounce off the heated wing section a very high percentage of the time. This result could have implications for Total Water Content sensors which are designed to capture ice particles, and thus do not account for bouncing or breakup of ice particles. Based on these results, a test was conducted to investigate ice particle impact on the sensing elements of the following hot-wire cloud water content probes: (1) the Nevzorov Total Water Content (TWC)/Liquid Water Content (LWC) probe, (2) the Science Engineering Associates TWC probe, and (3) the Particle Measuring Systems King probe. Close-up video imaging was used to study ice particle impact on the sensing element of each probe. The measured water content from each probe was also determined for each cloud condition. This paper will present results from this investigation and attempt to evaluate the significance of ice particle impact on hot-wire cloud water content measurements.
Bissonnette, Luc; Maheux, Andrée F; Bergeron, Michel G
2017-01-01
The microbial assessment of potable/drinking water is done to ensure that the resource is free of fecal contamination indicators and waterborne pathogens. Culture-based methods for verifying microbial safety are limited in the sense that a standard volume of water is generally tested for only one indicator (family) or pathogen. In this work, we describe a membrane filtration-based molecular microbiology method, CRENAME (Concentration Recovery Extraction of Nucleic Acids and Molecular Enrichment), exploiting molecular enrichment by whole genome amplification (WGA) to yield, in less than 4 h, a nucleic acid preparation which can be repetitively tested, by real-time PCR for example, to provide multiparametric presence/absence tests (1 colony-forming unit or microbial particle per standard volume of 100-1000 mL) for bacterial or protozoan parasite cells or particles susceptible to contaminating potable/drinking water.
An approach for automated analysis of particle holograms
NASA Technical Reports Server (NTRS)
Stanton, A. C.; Caulfield, H. J.; Stewart, G. W.
1984-01-01
A simple method for analyzing droplet holograms is proposed that is readily adaptable to automation using modern image digitizers and analyzers for determination of the number, location, and size distributions of spherical or nearly spherical droplets. The method determines these parameters by finding the spatial location of best focus of the droplet images. With this location known, the particle size may be determined by direct measurement of image area in the focal plane. Particle velocity and trajectory may be determined by comparison of image locations at different instants in time. The method is tested by analyzing digitized images from a reconstructed in-line hologram, and the results show that the method is more accurate than a time-consuming plane-by-plane search for sharpest focus.
Computational techniques for flows with finite-rate condensation
NASA Technical Reports Server (NTRS)
Candler, Graham V.
1993-01-01
A computational method to simulate the inviscid two-dimensional flow of a two-phase fluid was developed. This computational technique treats the gas phase and each of a prescribed number of particle sizes as separate fluids which are allowed to interact with one another. Thus, each particle-size class is allowed to move through the fluid at its own velocity at each point in the flow field. Mass, momentum, and energy are exchanged between each particle class and the gas phase. It is assumed that the particles do not collide with one another, so that there is no inter-particle exchange of momentum and energy. However, the particles are allowed to grow, and therefore, they may change from one size class to another. Appropriate rates of mass, momentum, and energy exchange between the gas and particle phases and between the different particle classes were developed. A numerical method was developed for use with this equation set. Several test cases were computed and show qualitative agreement with previous calculations.
Chen, Fanxiu; Zhuang, Qi; Zhang, Huixin
2016-06-20
The mechanical behaviors of granular materials are governed by the grain properties and microstructure of the materials. We conducted experiments to study force transmission in granular materials using plane strain tests. The large amount of nearly continuous displacement data provided by the advanced noncontact experimental technique of digital image correlation (DIC) provides a means to quantify local displacements and strains at the particle level. The average strain of each particle can be calculated based on the DIC method, and the average stress can be obtained using Hooke's law. The relationship between the stress and the particle force can be obtained from basic Newtonian mechanics and the balance of linear momentum at the particle level. This methodology is introduced and validated. In the testing procedure, the system is tested on real 2D particle assemblies, and the contact forces and force chains are obtained and analyzed. The system has great potential for analyzing a real granular system and measuring the contact forces and force chains.
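The Hooke's-law step from DIC-averaged strain to average particle stress can be sketched as follows, assuming isotropic linear elasticity in plane strain; the elastic constants E and nu here are illustrative placeholders, not the tested grains' properties.

```python
import numpy as np

def plane_strain_stress(eps, E=3.0e9, nu=0.35):
    """Average particle stress from a DIC-averaged strain via Hooke's
    law in plane strain. eps = (eps_xx, eps_yy, gamma_xy) with
    engineering shear strain; returns (sig_xx, sig_yy, tau_xy)."""
    f = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    C = f * np.array([[1.0 - nu, nu, 0.0],
                      [nu, 1.0 - nu, 0.0],
                      [0.0, 0.0, (1.0 - 2.0 * nu) / 2.0]])
    return C @ np.asarray(eps, dtype=float)
```

Integrating these per-particle stresses over a particle's boundary, together with linear-momentum balance, is what then yields the contact forces and the force chain.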
Zheng, Zhongqing; Durbin, Thomas D; Xue, Jian; Johnson, Kent C; Li, Yang; Hu, Shaohua; Huai, Tao; Ayala, Alberto; Kittelson, David B; Jung, Heejung S
2014-01-01
It is important to understand the differences between emissions from standard laboratory testing cycles and those from actual on-road driving conditions, especially for solid particle number (SPN) emissions now being regulated in Europe. This study compared particle mass and SPN emissions from a heavy-duty diesel vehicle operating over the urban dynamometer driving schedule (UDDS) and actual on-road driving conditions. Particle mass emissions were calculated using the integrated particle size distribution (IPSD) method and called MIPSD. The MIPSD emissions for the UDDS and on-road tests were more than 6 times lower than the U.S. 2007 heavy-duty particulate matter (PM) mass standard. The MIPSD emissions for the UDDS fell between those for the on-road uphill and downhill driving. SPN and MIPSD measurements were dominated by nucleation particles for the UDDS and uphill driving and by accumulation mode particles for cruise and downhill driving. The SPN emissions were ∼ 3 times lower than the Euro 6 heavy-duty SPN limit for the UDDS and downhill driving and ∼ 4-5 times higher than the Euro 6 SPN limit for the more aggressive uphill driving; however, it is likely that most of the "solid" particles measured under these conditions were associated with a combination release of stored sulfates and enhanced sulfate formation associated with high exhaust temperatures, leading to growth of volatile particles into the solid particle counting range above 23 nm. Except for these conditions, a linear relationship was found between SPN and accumulation mode MIPSD. The coefficient of variation (COV) of SPN emissions of particles >23 nm ranged from 8 to 26% for the UDDS and on-road tests.
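The IPSD mass calculation referenced above amounts to integrating the measured number size distribution with a spherical-particle mass per bin; a minimal sketch, with hypothetical bin values and an assumed effective density:

```python
import math

def ipsd_mass(diameters_nm, counts_per_cm3, density_g_cm3):
    """Integrated particle size distribution (IPSD) mass concentration:
    sum over size bins of number concentration x spherical particle mass.
    Returns ug/m^3."""
    total = 0.0
    for d_nm, n in zip(diameters_nm, counts_per_cm3):
        d_cm = d_nm * 1e-7                  # nm -> cm
        v_cm3 = math.pi / 6.0 * d_cm ** 3   # spherical particle volume
        total += n * v_cm3 * density_g_cm3  # g/cm^3
    return total * 1e12                     # g/cm^3 -> ug/m^3

# hypothetical SMPS bins: 50 and 100 nm, unit effective density
m_ipsd = ipsd_mass([50.0, 100.0], [1e5, 1e4], 1.0)
```

In practice the effective density is size- and mode-dependent (nucleation vs accumulation mode), which is part of what the IPSD method has to account for.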
Structural properties of iron and nickel mixed oxide nano particles.
NASA Astrophysics Data System (ADS)
Dehipawala, Sunil; Samarasekara, Pubudu; Gafney, Harry
Small-scale magnets have high technological importance today. Instead of traditional expensive methods, scientists are exploring new low-cost methods to produce micro magnets. We synthesized thin-film magnets containing iron and nickel oxides. The films were synthesized using the sol-gel method and spin-coating technique. Several different precursor concentrations were tested to find the ideal concentrations for stable thin films. Structural properties of the iron and nickel oxide particles were investigated using X-ray absorption and Mossbauer spectroscopy. PSC-CUNY.
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.
2017-04-01
A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.
Gao, Pengfei; Jaques, Peter A; Hsiao, Ta-Chih; Shepherd, Angie; Eimer, Benjamin C; Yang, Mengshi; Miller, Adam; Gupta, Bhupender; Shaffer, Ronald
2011-01-01
Existing face mask and respirator test methods draw particles through materials under vacuum to measure particle penetration. However, these filtration-based methods may not simulate conditions under which protective clothing operates in the workplace, where airborne particles are primarily driven by wind and other factors instead of being limited to a downstream vacuum. This study was focused on the design and characterization of a method simulating typical wind-driven conditions for evaluating the performance of materials used in the construction of protective clothing. Ten nonwoven fabrics were selected, and physical properties including fiber diameter, fabric thickness, air permeability, porosity, pore volume, and pore size were determined. Each fabric was sealed flat across the wide opening of a cone-shaped penetration cell that was then housed in a recirculation aerosol wind tunnel. The flow rate naturally driven by wind through the fabric was measured, and the sampling flow rate of the Scanning Mobility Particle Sizer used to measure the downstream particle size distribution and concentrations was then adjusted to minimize filtration effects. Particle penetration levels were measured under different face velocities by the wind-driven method and compared with a filtration-based method using the TSI 3160 automated filter tester. The experimental results show that particle penetration increased with increasing face velocity, and penetration also increased with increasing particle size up to about 300 to 500 nm. Penetrations measured by the wind-driven method were lower than those obtained with the filtration method for most of the fabrics selected, and the relative penetration performances of the fabrics were very different due to the vastly different pore structures.
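The penetration measurement underlying both the wind-driven and filtration-based comparisons reduces to the ratio of downstream to upstream concentration per size bin; a minimal sketch with hypothetical SMPS readings:

```python
def penetration(upstream, downstream):
    """Fractional particle penetration per size bin:
    downstream / upstream number concentration (matched SMPS bins)."""
    return [d / u if u > 0 else float('nan')
            for u, d in zip(upstream, downstream)]

# hypothetical concentrations (#/cm^3) in three size bins
p = penetration([1000.0, 2000.0, 500.0], [50.0, 400.0, 25.0])
```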
Multiparticle imaging technique for two-phase fluid flows using pulsed laser speckle velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, T.A.
1992-12-01
The practical use of Pulsed Laser Velocimetry (PLV) requires fast, reliable computer-based methods for tracking numerous particles suspended in a fluid flow. Two tracking methods are presented. One method tracks a particle through multiple sequential images (a minimum of four is required) by prediction and verification of particle displacement and direction. The other method, requiring only two sequential images, uses a dynamic, binary, spatial cross-correlation technique. The algorithms were tested on computer-generated synthetic data and on experimental data obtained with traditional PLV methods, allowing error analysis and testing of the algorithms on real engineering flows. A novel method is proposed that eliminates tedious, undesirable manual operator assistance in removing erroneous vectors. This method uses an iterative process involving an interpolated field produced from the most reliable vectors. Methods are developed to allow fast analysis and presentation of sets of PLV image data. An experimental investigation of a two-phase, horizontal, stratified flow regime was performed to determine the interface drag force and, correspondingly, the drag coefficient. A horizontal, stratified-flow test facility using water and air was constructed to allow interface shear measurements with PLV techniques. The experimentally obtained local drag measurements were compared with theoretical results given by conventional interfacial drag theory. Close agreement was shown when local conditions near the interface were similar to space-averaged conditions. However, theory based on macroscopic, space-averaged flow behavior was shown to give incorrect results when the local gas velocity near the interface was unstable, transient, and dissimilar from the average gas velocity through the test facility.
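The two-image tracking idea can be illustrated with a much-simplified stand-in: greedy nearest-neighbour pairing of particle centroids within a maximum displacement, rather than the dynamic binary spatial cross-correlation actually used in the work:

```python
def match_particles(frame1, frame2, max_disp):
    """Greedy nearest-neighbour pairing of particle centroids between two
    sequential images -- a simplified stand-in for two-frame PLV matching.
    Returns (index_in_frame1, index_in_frame2) pairs."""
    pairs, used = [], set()
    for i, (x1, y1) in enumerate(frame1):
        best, best_d2 = None, max_disp ** 2
        for j, (x2, y2) in enumerate(frame2):
            if j in used:
                continue
            d2 = (x2 - x1) ** 2 + (y2 - y1) ** 2
            if d2 <= best_d2:
                best, best_d2 = j, d2
        if best is not None:
            used.add(best)
            pairs.append((i, best))  # displacement = frame2[best] - frame1[i]
    return pairs

pairs = match_particles([(0, 0), (10, 10)], [(1, 0), (11, 10)], max_disp=2.0)
```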
Digestion of Crystalline Silicotitanate (CST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
DARREL, WALKER
2004-11-04
Researchers tested methods for chemically dissolving crystalline silicotitanate (CST) as a substitute for mechanical grinding to reduce particle size before vitrification. Testing used the commercially available form of CST, UOP IONSIV(R) IE-911. Reduction of the particle size to a range similar to that of the glass frit used by the Defense Waste Processing Facility (DWPF) could reduce problems with coupling cesium ion exchange to the vitrification process. This study found that IONSIV(R) IE-911 dissolves completely using a combination of acid, hydrogen peroxide, and fluoride ion. Neutralization of the resulting acidic solution precipitates components of the IONSIV(R) IE-911. Digestion requires extremely corrosive conditions. Also, large particles may reform during neutralization, and the initiation and rate of gas generation are unpredictable. Therefore, the method is not recommended as a substitute for mechanical grinding.
2015-04-01
monodisperse particles. ENPs in environmental samples will likely have much broader size distributions and thus FFF-ICP-MS was tested over a greater...Figure 6). Resolution is based on ICP-MS sensitivity, and will likely decrease as the difference in particle diameter decreases. Second, this...Alvarez. 2006. Antibacterial activity of fullerene water suspensions: Effects of preparation method and particle size. Environmental Science
Flow-controlled magnetic particle manipulation
Grate, Jay W [West Richland, WA]; Bruckner-Lea, Cynthia J [Richland, WA]; Holman, David A [Las Vegas, NV]
2011-02-22
Inventive methods and apparatus are useful for collecting magnetic materials in one or more magnetic fields and resuspending the particles into a dispersion medium, and optionally repeating collection/resuspension one or more times in the same or a different medium, by controlling the direction and rate of fluid flow through a fluid flow path. The methods provide for contacting derivatized particles with test samples and reagents, removal of excess reagent, washing of magnetic material, and resuspension for analysis, among other uses. The methods are applicable to a wide variety of chemical and biological materials that are susceptible to magnetic labeling, including, for example, cells, viruses, oligonucleotides, proteins, hormones, receptor-ligand complexes, environmental contaminants and the like.
NASA Technical Reports Server (NTRS)
Himmel, R. P.
1975-01-01
Various hybrid processing steps, handling procedures, and materials are examined in an attempt to identify sources of contamination and to propose methods for the control of these contaminants. It is found that package sealing, assembly, and rework are especially susceptible to contamination. Moisture and loose particles are identified as the worst contaminants. The points at which contaminants are most likely to enter the hybrid package are also identified, and both general and specific methods for their detection and control are developed. In general, the most effective controls for contaminants are: clean working areas, visual inspection at each step of the process, and effective cleaning at critical process steps. Specific methods suggested include the detection of loose particles by a precap visual inspection, by preseal and post-seal electrical testing, and by a particle impact noise test. Moisture is best controlled by sealing all packages in a clean, dry, inert atmosphere after a thorough bake-out of all parts.
NASA Astrophysics Data System (ADS)
Sun, Dan; Garmory, Andrew; Page, Gary J.
2017-02-01
For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
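The moment-inversion step at the heart of two-node QMOM can be illustrated in one dimension, where the first four moments determine two weights and two abscissas in closed form. This is only the 1-D building block, not the paper's 13-moment, two-dimensional algorithm with cross-moments:

```python
import math

def two_node_qmom(m0, m1, m2, m3):
    """Invert the first four moments into a two-node quadrature
    (weights w1, w2 at abscissas x1 < x2) via central moments."""
    mu = m1 / m0                                   # mean abscissa
    c2 = m2 / m0 - mu ** 2                         # 2nd central moment
    c3 = m3 / m0 - 3.0 * mu * (m2 / m0) + 2.0 * mu ** 3  # 3rd central moment
    e = c3 / (2.0 * c2)
    r = math.sqrt(e ** 2 + c2)
    x1, x2 = mu + e - r, mu + e + r                # node abscissas
    w1 = m0 * (x2 - mu) / (x2 - x1)                # weight at x1
    w2 = m0 - w1                                   # weight at x2
    return w1, x1, w2, x2

# round-trip: moments of {w=0.3 at x=1, w=0.7 at x=4} are 1.0, 3.1, 11.5, 45.1
w1, x1, w2, x2 = two_node_qmom(1.0, 3.1, 11.5, 45.1)
```

A two-node quadrature of this kind is what lets two crossing particle streams be represented without an artificial collision at the crossing point.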
Park, Jae Hong; Yoon, Ki Young; Na, Hyungjoo; Kim, Yang Seon; Hwang, Jungho; Kim, Jongbaeg; Yoon, Young Hun
2011-09-01
We grew multi-walled carbon nanotubes (MWCNTs) on a glass fiber air filter using thermal chemical vapor deposition (CVD) after the filter was catalytically activated with a spark discharge. After the CNT deposition, filtration and antibacterial tests were performed with the filters. Potassium chloride (KCl) particles (<1 μm) were used as the test aerosol particles, and their number concentration was measured using a scanning mobility particle sizer. Antibacterial tests were performed using the colony counting method, and Escherichia coli (E. coli) was used as the test bacteria. The results showed that the CNT deposition increased the filtration efficiency of nano and submicron-sized particles, but did not increase the pressure drop across the filter. When a pristine glass fiber filter that had no CNTs was used, the particle filtration efficiencies at particle sizes under 30 nm and near 500 nm were 48.5% and 46.8%, respectively. However, the efficiencies increased to 64.3% and 60.2%, respectively, when the CNT-deposited filter was used. The reduction in the number of viable cells was determined by counting the colony forming units (CFU) of each test filter after contact with the cells. The pristine glass fiber filter was used as a control, and 83.7% of the E. coli were inactivated on the CNT-deposited filter. Copyright © 2011 Elsevier B.V. All rights reserved.
Gillen, Greg; Najarro, Marcela; Wight, Scott; Walker, Marlon; Verkouteren, Jennifer; Windsor, Eric; Barr, Tim; Staymates, Matthew; Urbas, Aaron
2015-01-01
A method has been developed to fabricate patterned arrays of micrometer-sized monodisperse solid particles of ammonium nitrate on hydrophobic silicon surfaces using inkjet printing. The method relies on dispensing one or more microdrops of a concentrated aqueous ammonium nitrate solution from a drop-on-demand (DOD) inkjet printer at specific locations on a silicon substrate rendered hydrophobic by a perfluorodecyltrichlorosilane monolayer coating. Each deposited liquid droplet forms a spherical cap and maintains this geometry during evaporation until it forms a solid micrometer-sized particle. Arrays of solid particles are obtained by sequential translation of the printer stage. The use of DOD inkjet printing for the fabrication of discrete particle arrays allows precise control of particle characteristics (mass, diameter, and height), as well as of the particle number and spatial distribution on the substrate. The final mass of an individual particle is precisely determined by gravimetric measurement of the average mass of solution ejected per microdrop. The primary application of this method is the fabrication of test materials for the evaluation of spatially resolved optical and mass spectrometry based sensors used for detecting particle residues of contraband materials, such as explosives or narcotics. PMID:26610515
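The gravimetric mass determination described above is simple arithmetic once the average mass per microdrop is known; a minimal sketch, with hypothetical drop counts and solution properties:

```python
def particle_mass_ug(n_drops, drop_mass_ng, solute_mass_fraction):
    """Final solid particle mass from inkjet deposition: number of microdrops
    x average solution mass per drop x solute mass fraction (the water
    evaporates, leaving only the ammonium nitrate). Returns micrograms."""
    return n_drops * drop_mass_ng * solute_mass_fraction * 1e-3  # ng -> ug

# hypothetical: 20 drops of 10 ng solution each, 10% w/w solute
m = particle_mass_ug(20, 10.0, 0.10)  # 0.02 ug of solid per printed particle
```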
Sumonsiri, P; Thongudomporn, U; Paphangkorakit, J
2018-04-27
The correlation between chewing and gastric function is best reflected when the same food type is used during both tests. We proposed frankfurter sausage as a test food for masticatory performance because it can also be used in a gastric emptying test. The suitability of frankfurter sausage for determining masticatory performance, however, had never been examined. The aim was to examine the correlations between the median particle size of frankfurter sausage and that of almonds (the standard test food) after different numbers of chewing cycles. Twenty-seven subjects performed masticatory performance tests by chewing two types of test food, a piece of almond or 5 g of frankfurter sausage cubes placed in a sealed latex bag, for 5 and 15 chewing cycles. For each individual, the right and left sides were tested separately, and the chewed samples obtained from both sides were pooled. Median particle sizes were determined using a multiple-sieving method. Spearman's rank correlation was used to examine any correlation between the median particle sizes of the two test foods after 5 and 15 cycles. Median particle sizes after 5 and 15 cycles were 2.04 ± 0.87 and 0.95 ± 0.58 mm for almonds and 4.16 ± 0.19 and 3.73 ± 0.25 mm for frankfurter sausage, respectively. Significant correlations were observed between the median particle size of chewed frankfurter sausage after 15 cycles and that of chewed almonds after 5 and 15 cycles (r = .76, P < .01 and r = .52, P = .01, respectively). Frankfurter sausage chewed for 15 cycles may be suitable for the determination of masticatory performance in conjunction with a gastric emptying test. © 2018 John Wiley & Sons Ltd.
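The multiple-sieving determination of a median particle size (X50) can be sketched as interpolation of the cumulative undersize mass curve at 50%; the sieve apertures and per-class masses below are hypothetical:

```python
def median_particle_size(class_upper_mm, mass_g):
    """Median particle size (X50) from sieving: linear interpolation of the
    cumulative undersize mass curve at 50%. class_upper_mm are the upper
    size bounds of each mass class, ascending after sorting."""
    pairs = sorted(zip(class_upper_mm, mass_g))
    total = sum(m for _, m in pairs)
    cum, prev_size, prev_frac = 0.0, 0.0, 0.0
    for size, m in pairs:
        frac = (cum + m) / total           # cumulative undersize at this bound
        if frac >= 0.5:
            return prev_size + (size - prev_size) * (0.5 - prev_frac) / (frac - prev_frac)
        cum += m
        prev_size, prev_frac = size, frac
    return pairs[-1][0]

# hypothetical: equal mass retained in four classes up to 0.5, 1, 2, 4 mm
x50 = median_particle_size([0.5, 1.0, 2.0, 4.0], [1.0, 1.0, 1.0, 1.0])
```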
The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here, we present the design, testing, and analysis of data collected through the first instrument capable of measuring ...
Developmental efforts and experimental data are described that focused on quantifying the transfer of particles on a mass basis from indoor surfaces to human skin. Methods were developed that utilized a common fluorescein-tagged Arizona Test Dust (ATD) as a possible surrogate ...
Conservative bin-to-bin fractional collisions
NASA Astrophysics Data System (ADS)
Martin, Robert
2016-11-01
Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods is their poor convergence, a consequence of their stochastic nature; they typically rely heavily on high degrees of non-equilibrium and on time averaging to compensate for poor signal-to-noise ratios. In standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise problems for flows with large species density variation, such as those encountered in flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per-particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first incorporated into an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency are compared for a few simple test cases against a standard validated implementation of the DSMC method, with the adapted SWPM/VHS collision using an octree-based conservative phase-space reconstruction. The SWPM is then further extended to combine the collision and phase-space reconstruction into a single step, which avoids the need to create additional computational particles only to destroy them again during the particle merge. This is particularly helpful when oversampling the collision integral compared with the standard DSMC method. However, the more frequent phase-space reconstructions can cause added numerical thermalization at low particle-per-cell counts due to the coarseness of the octree used. Nevertheless, the methods are expected to be of much greater utility in transient expansion flows and chemical reactions in the future.
Nikolakakis, I; Aragon, O B; Malamataris, S
1998-07-01
The purpose of this study was to compare some indicators of capsule-filling performance, as measured by tapped density under different conditions, and to elucidate possible quantitative relationships between the variation of capsule fill-weight (%CV) and the gravitational and inter-particle forces (attractive or frictional) derived from measurements of particle size, true density, low compression, and tensile strength. Five common pharmaceutical diluents (lactose, maize starch, talc, Emcocel, and Avicel) were investigated, and two capsule-filling methods (pouring powder and dosator nozzle) were employed. It was found that for the pouring-type method the appropriateness of Hausner's ratio (HR), Carr's compressibility index (CC%), and Kawakita's constant (alpha) as indicators of capsule fill-weight variation decreases in the order alpha > CC% > HR; the appropriateness of these indicators also decreases with increasing cylinder size and with impact velocity during tapping. For the dosator-type method the appropriateness of the indicators decreases in the order HR > CC% > alpha, the opposite of that for the pouring-type method; here the appropriateness of the indicators increases with decreasing cylinder size and impact velocity. The relationship between %CV and the ratio of inter-particle attractive to gravitational forces calculated from measurements of particle size and true density (Fvdw/Wp) was more significant for the pouring-type capsule-filling method. For the dosator-type method a significant relationship (1% level) was found between %CV and the product of Fvdw/Wp and a function expressing the increase, with packing density (p(f)), in the ratio of frictional to attractive inter-particle forces derived from compression (P) and tensile-strength (T) testing, d(log(P/T))/d(p(f)). The value of tapped density in predicting capsule-filling performance is affected by the testing conditions in a manner that depends on the filling method applied. For the pouring-type method, predictions can be based on the ratio of attractive (inter-particle) to gravitational forces, whereas for the dosator-type method the contribution of frictional and attractive forces should, because of the change in packing density, also be taken into account.
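The two density-based indicators compared in the study, Hausner's ratio and Carr's compressibility index, follow directly from bulk and tapped density; a minimal sketch with hypothetical values:

```python
def flow_indicators(bulk_density, tapped_density):
    """Hausner ratio (HR) and Carr compressibility index (CC%) from
    bulk and tapped density, two of the fill-weight variation
    indicators compared in the study."""
    hr = tapped_density / bulk_density
    cc = 100.0 * (tapped_density - bulk_density) / tapped_density
    return hr, cc

# hypothetical diluent: bulk 0.60 g/mL, tapped 0.75 g/mL
hr, cc = flow_indicators(0.60, 0.75)  # HR = 1.25, CC = 20%
```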
Yang, Chan; Xu, Bing; Zhang, Zhi-Qiang; Wang, Xin; Shi, Xin-Yuan; Fu, Jing; Qiao, Yan-Jiang
2016-10-01
Blending uniformity is essential to ensure the homogeneity of Chinese medicine formula particles within each batch. This study was based on the blending process of ebony spray-dried powder and dextrin (the proportion of dextrin was 10%), in which near-infrared (NIR) diffuse reflectance spectra collected from six different sampling points were analyzed in combination with a moving-window F-test method to assess the uniformity of the blending process. The method was validated against the changes in citric acid content determined by HPLC. The results of the moving-window F-test method showed that the ebony spray-dried powder and dextrin were homogeneous during 200-300 r and segregated during 300-400 r. An advantage of this method is that the threshold value is defined statistically, not empirically, and thus does not suffer from the threshold ambiguities common to the moving block standard deviation (MBSD). This method could be employed to monitor other blending processes of Chinese medicine powders online. Copyright© by the Chinese Pharmaceutical Association.
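The moving-window F-test idea can be sketched as a ratio of variances of successive windows of a univariate signal (e.g. an NIR score), with the statistic compared against a tabulated F critical value for the window's degrees of freedom; the signal below is synthetic:

```python
def moving_window_f(values, window):
    """Moving-window F statistics for blend-uniformity monitoring:
    the variance ratio of adjacent windows of a univariate signal.
    Each statistic would be compared against an F critical value."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    stats = []
    for i in range(0, len(values) - 2 * window + 1):
        v1 = var(values[i:i + window])
        v2 = var(values[i + window:i + 2 * window])
        big, small = max(v1, v2), min(v1, v2)
        stats.append(big / small if small > 0 else float('inf'))
    return stats

# synthetic score sequence: two nearly identical windows -> F close to 1
f_stats = moving_window_f([1.0, 2.0, 1.0, 2.0, 1.1, 2.1, 1.0, 2.0], window=4)
```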
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephen Seong Lee
Fuel flow to individual burners is complicated and difficult to determine on coal fired boilers, since coal solids were transported in a gas suspension that is governed by the complex physics of two-phase flow. The objectives of the project were the measurements of suspended coal solids-flows in the simulated test conditions. Various extractive methods were performed manually and can give only a snapshot result of fuel distribution. In order to measure particle diameter & velocity, laser based phase-Doppler particle analyzer (PDPA) and particle image velocimetry (PIV) were carefully applied. Statistical methods were used to analyze particle characteristics to see whichmore » factors have significant effect. The transparent duct model was carefully designed and fabricated for the laser-based-instrumentation of solids-flow monitoring (LISM). The experiments were conducted with two different kinds of particles with four different particle diameters. The particle types were organic particles and saw dust particles with the diameter range of 75-150 micron, 150-250 micron, 250-355 micron and 355-425 micron. The densities of the particles were measured to see how the densities affected the test results. Also the experiment was conducted with humid particles and fog particles. To generate humid particles, the humidifier was used. A pipe was connected to the humidifier to lead the particle flow to the intersection of the laser beam. The test results of the particle diameter indicated that, the mean diameter of humid particles was between 6.1703 microns and 6.6947 microns when the humid particle flow was low. When the humid particle flow was high, the mean diameter was between 6.6728 microns and 7.1872 microns. The test results of the particle mean velocity indicated that the mean velocity was between 1.3394 m/sec and 1.4556 m/sec at low humid particle flow. When the humid particle flow was high, the mean velocity was between 1.5694 m/sec and 1.7856 m/sec. 
The Air Flow Module, TQ AF 17 and shell ondina oil were used to generate fog particles. After the oil was heated inside the fog generator, the blower was used to generate the fog. The fog flew along the pipe to the intersection of the laser beam. The mean diameter of the fog particles was 5.765 microns. Compared with the humid particle diameter, we observed that the mean diameter of the fog particles was smaller than the humid particles. The test results of particle mean velocity was about 3.76 m/sec. Compared with the mean velocity of the humid particles, we can observed the mean velocity of fog particles were greater than humid particles. The experiments were conducted with four different kinds of particles with five different particle diameters. The particle types were organic particles, coal particles, potato particles and wheat particles with the diameter range of 63-75 micron, less than 150 micron, 150-250 micron, 250-355 micron and 355-425 micron. To control the flow rate, the control gate of the particle dispensing hopper was adjusted to 1/16 open rate, 1/8 open rate and 1/4 open rate. The captured image range was 0 cm to 5 cm from the control gate, 5 cm to 10 cm from the control gate and 10 cm to 15 cm from the control gate. Some of these experiments were conducted under both open environment conditions and closed environment conditions. Thus these experiments had a total of five parameters which were type of particles, diameter of particles, flow rate, observation range, and environment conditions. The coal particles (diameter between 63 and 75 microns) tested under the closed environment condition had three factors that were considered as the affecting factors. They were open rate, observation range, and environment conditions. In this experiment, the interaction of open rate and observation range had a significant effect on the lower limit. On the upper limit, the open rate and environment conditions had a significant effect. 
In addition, the interaction of open rate and environment conditions had a significant effect. For coal particles (diameter between 63 and 75 microns) tested under the open environment condition, two factors were considered as affecting factors: open rate and observation range. In this experiment, no factor had a significant effect on the lower limit; on the upper limit, the observation range had a significant effect, and the interaction of open rate and observation range was a significant source of variation at 95% confidence based on analysis of variance (ANOVA) results.
NASA Astrophysics Data System (ADS)
Alizadeh Behjani, Mohammadreza; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew
2017-06-01
Segregation of granules is an undesired phenomenon in which particles in a mixture separate from each other based on differences in their physical and chemical properties. It is therefore crucial to control the homogeneity of the system by applying appropriate techniques, which requires a fundamental understanding of the underlying mechanisms. In this study, the effect of particle shape and cohesion is analysed. As a model system prone to segregation, heap formation of a ternary mixture of particles representing the common ingredients of home washing powders, namely spray-dried detergent powder, tetraacetylethylenediamine, and enzyme placebo (the minor ingredient), is modelled numerically by the Discrete Element Method (DEM), with the aim of investigating the effect of cohesion/adhesion of the minor component on segregation. Non-spherical particle shapes are created in DEM using the clumped-sphere method based on X-ray tomograms of the particles. Experimentally, inter-particle adhesion is generated by coating the minor ingredient (enzyme placebo) with Polyethylene Glycol 400 (PEG 400). The JKR theory is used to model the cohesion/adhesion of the coated enzyme placebo particles in the simulation. Tests are carried out experimentally and simulated numerically by mixing the placebo particles (uncoated and coated) with the other ingredients and pouring them into a test box. The simulation and experimental results are compared qualitatively and quantitatively. It is found that coating the minor ingredient in the mixture reduces segregation significantly while the change in flowability of the system is negligible.
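The JKR theory mentioned above ties DEM cohesion to a single measurable quantity, the pull-off force F_c = (3/2)·pi·gamma·R*, where gamma is the work of adhesion and R* the effective contact radius. A minimal sketch follows; the surface energy and granule radius used are illustrative assumptions, not values from the study.

```python
import math

def effective_radius(r1: float, r2: float) -> float:
    """Effective radius R* = R1*R2/(R1+R2) for two contacting spheres (m)."""
    return r1 * r2 / (r1 + r2)

def jkr_pulloff_force(gamma: float, r_eff: float) -> float:
    """JKR pull-off (separation) force F_c = (3/2) * pi * gamma * R*.

    gamma : work of adhesion per unit area (J/m^2)
    r_eff : effective contact radius (m)
    """
    return 1.5 * math.pi * gamma * r_eff

# Hypothetical values: two equal 0.5 mm granules, gamma = 0.05 J/m^2 assumed
r_star = effective_radius(0.5e-3, 0.5e-3)   # 2.5e-4 m
f_c = jkr_pulloff_force(0.05, r_star)       # force needed to separate the pair
```

In a DEM code this force would parameterize the attractive branch of the contact law for the coated placebo particles.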
Reactive multi-particle collision dynamics with reactive boundary conditions
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Rohlf, Katrin
2018-07-01
In the present study, an off-lattice particle-based method called reactive multi-particle collision (RMPC) dynamics is extended to model reaction-diffusion systems with reactive boundary conditions in which the a priori diffusion coefficient of the particles needs to be maintained throughout the simulation. To this end, the authors make use of so-called bath particles, whose only purpose is to ensure proper diffusion of the main particles in the system. To model partial adsorption by a reactive boundary in the RMPC, the probability of a particle being adsorbed once it hits the boundary is calculated by drawing an analogy between the RMPC and Brownian dynamics. The main advantages of the RMPC compared with other molecular-based methods are lower computational cost as well as conservation of mass, energy and momentum in the collision and free-streaming steps. The proposed approach is tested on three reaction-diffusion systems, and very good agreement with the solutions of their corresponding partial differential equations is observed.
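The partial-adsorption idea above can be illustrated with a generic random-walk toy model (not the RMPC algorithm itself): walkers diffuse between a reactive wall, where each hit adsorbs them with probability p_adsorb, and an inert reflecting wall. Here p_adsorb is a free parameter standing in for the hit probability the paper derives from the Brownian-dynamics analogy; all numbers are illustrative.

```python
import random

def simulate_partial_adsorption(n_particles=2000, n_steps=500, dx=0.01,
                                start=0.1, length=1.0, p_adsorb=0.5, seed=1):
    """1D random walk between a partially adsorbing wall at x = 0 and a
    reflecting wall at x = length.

    Each time a walker reaches x <= 0 it is removed with probability
    p_adsorb; otherwise it is reflected. Returns the number of walkers
    still diffusing after n_steps.
    """
    rng = random.Random(seed)
    xs = [start] * n_particles
    alive = [True] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            if not alive[i]:
                continue
            x = xs[i] + (dx if rng.random() < 0.5 else -dx)
            if x <= 0.0:                       # hit the reactive wall
                if rng.random() < p_adsorb:
                    alive[i] = False           # adsorbed
                    continue
                x = -x                         # survived the hit: reflect
            elif x >= length:
                x = 2.0 * length - x           # inert wall: always reflect
            xs[i] = x
    return sum(alive)
```

With p_adsorb = 0 the wall is purely reflecting and every walker survives; increasing p_adsorb interpolates toward a perfectly absorbing (Dirichlet-like) boundary.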
[Effect of stability and dissolution of realgar nano-particles using solid dispersion technology].
Guo, Teng; Shi, Feng; Yang, Gang; Feng, Nian-Ping
2013-09-01
To improve the stability and dissolution of realgar nano-particles by solid dispersion, solid dispersions were prepared by the melting method using polyethylene glycol 6000 and poloxamer-188 as carriers. XRD and microscopic inspection were used to determine the state of the realgar nano-particles in the solid dispersions. The content and stability of As(2)O(3) were determined by the DDC-Ag method. Hydride generation atomic absorption spectrometry was used to determine the arsenic content and to investigate the in vitro dissolution behavior of the solid dispersions. The XRD and microscopic results showed that the realgar nano-particles in the solid dispersions were amorphous. The dissolution amount and rate of arsenic from the realgar nano-particles were increased significantly for all solid dispersions, and the agglomeration of the realgar nano-particles and the content of As(2)O(3) were reduced by formation of the solid dispersions. Solid dispersion of realgar nano-particles with poloxamer-188 as the carrier clearly improved stability, dissolution and solubility.
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
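For context, the stochastic baseline NEST replaces can be sketched as follows: a random-walk solution of 1D advection-diffusion, where each particle drifts by u*dt and takes a Gaussian step of standard deviation sqrt(2*D*dt). NEST substitutes a deterministic regulation of particle separations for the random step, so it needs far fewer particles; this sketch (with illustrative parameters) only reproduces the statistics any such scheme must match, namely mean displacement u*t and variance 2*D*t.

```python
import random

def random_walk_advect_diffuse(n=20000, u=1.0, d=0.1, dt=0.01, steps=100, seed=42):
    """Stochastic (random-walk) reference solution of 1D advection-diffusion.

    Each particle moves by u*dt (advection) plus a Gaussian step of standard
    deviation sqrt(2*d*dt) (diffusion). Returns the sample mean and variance
    of particle positions and the elapsed time t = steps*dt.
    """
    rng = random.Random(seed)
    sigma = (2.0 * d * dt) ** 0.5
    xs = [0.0] * n
    for _ in range(steps):
        xs = [x + u * dt + rng.gauss(0.0, sigma) for x in xs]
    t = steps * dt
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var, t
```

For the parameters above the cloud should be centred near u*t = 1.0 with variance near 2*D*t = 0.2, which is the behaviour a deterministic scheme like NEST reproduces without sampling noise.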
40 CFR 799.6784 - TSCA water solubility: Column elution method; shake flask method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... reaction quality should be used to apply the test substance to the carrier material. Double distilled water... this section. (i) With this apparatus, the microcolumn must be modified. A stopcock with 2-way action... particles invalidates the results, and the test should be repeated with improvements in the filtering action...
A Wideband Fast Multipole Method for the two-dimensional complex Helmholtz equation
NASA Astrophysics Data System (ADS)
Cho, Min Hyung; Cai, Wei
2010-12-01
A Wideband Fast Multipole Method (FMM) for the 2D Helmholtz equation is presented. It can evaluate the interactions between N particles governed by the fundamental solution of the 2D complex Helmholtz equation in a fast manner for a wide range of complex wave numbers k, which was not easy with the original FMM due to the instability of the diagonalized conversion operator. This paper includes a description of the theoretical background, the FMM algorithm, the software structure, and some test runs.
Program summary: Program title: 2D-WFMM. Catalogue identifier: AEHI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 4636. No. of bytes in distributed program, including test data, etc.: 82 582. Distribution format: tar.gz. Programming language: C. Computer: Any. Operating system: Any operating system with gcc version 4.2 or newer. Has the code been vectorized or parallelized?: Multi-core processors with shared memory. RAM: Depends on the number of particles N and the wave number k. Classification: 4.8, 4.12. External routines: OpenMP (http://openmp.org/wp/). Nature of problem: Evaluate the interaction between N particles governed by the fundamental solution of the 2D Helmholtz equation with complex k. Solution method: Multilevel Fast Multipole Algorithm in a hierarchical quad-tree structure with a cutoff level that combines the low-frequency and high-frequency methods. Running time: Depends on the number of particles N, the wave number k, and the number of CPU cores; CPU time increases as N log N.
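The hierarchical quad-tree the summary refers to can be sketched independently of the multipole machinery. The following Python sketch (the distributed program itself is in C) builds only the spatial data structure a multilevel FMM traverses; the multipole expansions and translation operators are omitted, and the capacity and depth limits are illustrative choices.

```python
def build_quadtree(points, box, capacity=4, depth=0, max_depth=8):
    """Recursively partition 2D points into the hierarchical quad-tree that
    a multilevel FMM traverses (direct near-field sums at the leaves,
    multipole translations between boxes). box = (x0, y0, size); points are
    assumed to lie in the half-open square [x0, x0+size) x [y0, y0+size).
    """
    x0, y0, size = box
    node = {"box": box, "points": points, "children": []}
    if len(points) <= capacity or depth >= max_depth:
        return node                     # leaf box
    half = size / 2.0
    for ox in (0.0, half):
        for oy in (0.0, half):
            cx, cy = x0 + ox, y0 + oy
            inside = [p for p in points
                      if cx <= p[0] < cx + half and cy <= p[1] < cy + half]
            node["children"].append(
                build_quadtree(inside, (cx, cy, half), capacity, depth + 1, max_depth))
    node["points"] = []                 # interior boxes hold no points directly
    return node

def count_leaf_points(node):
    """Total number of points stored at the leaves below this node."""
    if not node["children"]:
        return len(node["points"])
    return sum(count_leaf_points(c) for c in node["children"])
```

In the wideband algorithm, a cutoff level in this tree separates the boxes handled by the low-frequency method from those handled by the high-frequency method.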
NASA Astrophysics Data System (ADS)
Goss, Natasha R.; Mladenov, Natalie; Seibold, Christine M.; Chowanski, Kurt; Seitz, Leslie; Wellemeyer, T. Barret; Williams, Mark W.
2013-12-01
Atmospheric wet and dry deposition are important sources of carbon for remote alpine lakes and soils. The carbon inputs from dry deposition in alpine National Atmospheric Deposition Program (NADP) collectors, including aeolian dust and biological material, are not well constrained due to difficulties in retaining particulate matter in the collectors. Here, we developed and tested a marble insert for dry deposition collection at the Niwot Ridge Long Term Ecological Research Station (NWT LTER) Soddie site (3345 m) between 24 May and 8 November 2011. We conducted laboratory tests of the insert's effect on particulate matter (PM) mass and non-purgeable organic carbon (DOC) and found that the insert did not significantly change either measurement. Thus, the insert may enable dry deposition collection of PM and DOC at NADP sites. We then developed a method for enumerating the collected wet and dry deposition with the Flow Cytometer and Microscope (FlowCAM), a dynamic-image particle analysis tool. The FlowCAM has the potential to establish morphology, which affects particle settling and retention, through particle diameter and aspect ratio. Particle images were used to track the abundance of pollen grains over time. Qualitative image examination revealed that most particles were biological in nature, such as intact algal cells and pollen. Dry deposition loading to the Soddie site as determined by FlowCAM measurements was highly variable, ranging from 100 to >230 g ha-1 d-1 in June-August 2011 and peaking in late June. No significant difference in diameter or aspect ratio was found between wet and dry deposition, suggesting fundamental similarities between those deposition types. Although FlowCAM statistics and identification of particle types proved insightful, our total-particle enumeration method had a high variance and underestimated the total number of particles when compared to imaging of relatively large volumes (60-125 mL) from a single sample. 
We recommend use of the FlowCAM, especially for subclasses of particles, but in light of uncertainty in particle counts, believe that it should be paired with traditional methods such as microscopy in this stage of the technique's development. Analysis of well-mixed samples produced lower variability than settling methods used for algae samples. Use of the marble inserts in the dry deposition collector in the NADP context is recommended, and the implications of various particle counting and identification methods are explored.
Number size distribution of particulate emissions of heavy-duty engines in real world test cycles
NASA Astrophysics Data System (ADS)
Lehmann, Urs; Mohr, Martin; Schweizer, Thomas; Rütter, Josef
Five in-service engines in heavy-duty trucks complying with Euro II emission standards were measured on a dynamic engine test bench at EMPA. The particulate matter (PM) emissions of these engines were investigated by number and mass measurements. The mass of the total PM was evaluated using the standard gravimetric measurement method; the total number concentration and the number size distribution were measured by a Condensation Particle Counter (lower particle size cut-off: 7 nm) and an Electrical Low Pressure Impactor (lower particle size: 32 nm), respectively. The transient test cycles used represent either driving behaviour on the road (real-world test cycles) or a type approval procedure. They are characterised by the cycle power, the average cycle power and by a parameter for the cycle dynamics. In addition, the particle number size distribution was determined at two steady-state operating modes of the engine using a Scanning Mobility Particle Sizer. For quality control, each measurement was repeated at least three times under controlled conditions. It was found that the number size distributions as well as the total number concentration of emitted particles could be measured with good repeatability. Total number concentration was between 9×10^11 and 1×10^13 particles/s (3×10^13-7×10^14 p/kWh) and mass concentration was between 0.09 and 0.48 g/kWh. For all transient cycles, the number mean diameter of the distributions lay typically at about 120 nm for aerodynamic particle diameter and did not vary significantly. In general, the various particle measurement devices used reveal the same trends in particle emissions. We looked at the correlation between specific gravimetric mass emission (PM) and total particle number concentration. The correlation tends to be influenced more by the different engines than by the test cycles.
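The two units quoted above (particles/s and particles/kWh) are linked through the cycle-average power: in one hour the engine emits rate×3600 particles while doing P kWh of work. A quick sketch, with an assumed cycle-average power of about 100 kW (a hypothetical value, not taken from the paper):

```python
def per_second_to_per_kwh(rate_per_s: float, avg_power_kw: float) -> float:
    """Convert a particle emission rate (particles/s) to a specific
    emission (particles/kWh) for a cycle of average power avg_power_kw.
    In one hour: rate_per_s * 3600 particles emitted, avg_power_kw kWh of work.
    """
    return rate_per_s * 3600.0 / avg_power_kw

# Assumed ~100 kW cycle-average power (illustrative only):
low = per_second_to_per_kwh(9e11, 100.0)    # 3.24e13 p/kWh, consistent with ~3e13
```

The result lands near the lower end of the quoted 3×10^13-7×10^14 p/kWh range, which is consistent with an engine of this power class.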
A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina
2010-08-26
In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion-type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions, and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents a first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, a highly accurate deterministic method used to provide reference solutions. Several numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.
Fictitious domain method for fully resolved reacting gas-solid flow simulation
NASA Astrophysics Data System (ADS)
Zhang, Longhui; Liu, Kai; You, Changfu
2015-10-01
Fully resolved simulation (FRS) of gas-solid multiphase flow considers solid objects as finite-sized regions in the flow field and predicts their behaviour by solving equations in both the fluid and solid regions directly. Fixed-mesh numerical methods, such as the fictitious domain method, are preferred for solving FRS problems and have been widely researched. However, for reacting gas-solid flows no suitable fictitious domain numerical method has been developed. This work presents a new fictitious domain finite element method for FRS of reacting particulate flows. The low Mach number reacting flow governing equations are solved sequentially on a regular background mesh. Particles are immersed in the mesh and driven by their surface forces and torques integrated on the immersed interfaces. Additional treatments of energy and surface reactions are developed. Several numerical test cases validate the method, and a simulation of a falling array of burning carbon particles demonstrates its capability for moving reacting particle cluster problems.
Volkmann, Niels
2004-01-01
Reduced representation templates are used in a real-space pattern matching framework to facilitate automatic particle picking from electron micrographs. The procedure consists of five parts. First, reduced templates are constructed either from models or directly from the data. Second, a real-space pattern matching algorithm is applied using the reduced representations as templates. Third, peaks are selected from the resulting score map using peak-shape characteristics. Fourth, the surviving peaks are tested for distance constraints. Fifth, a correlation-based outlier screening is applied. Test applications to a data set of keyhole limpet hemocyanin particles indicate that the method is robust and reliable.
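Steps two through four of the procedure above (real-space matching, peak selection, distance constraints) can be illustrated with a deliberately naive toy: raw cross-correlation of a small template against an image, followed by greedy peak picking with a minimum-distance rule. This is only a structural sketch; the actual method uses reduced representation templates, peak-shape characteristics, and a correlation-based outlier screen, none of which are reproduced here.

```python
def score_map(image, template):
    """Real-space matching: slide the template over the image and record a
    correlation score at every offset (step 2 of the procedure)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    scores = {}
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            s = sum(image[y + j][x + i] * template[j][i]
                    for j in range(th) for i in range(tw))
            scores[(y, x)] = s
    return scores

def pick_peaks(scores, threshold, min_dist):
    """Greedy peak selection with a distance constraint (steps 3-4):
    keep the best-scoring positions that are at least min_dist apart."""
    picked = []
    for pos, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s < threshold:
            break
        if all((pos[0] - p[0]) ** 2 + (pos[1] - p[1]) ** 2 >= min_dist ** 2
               for p in picked):
            picked.append(pos)
    return picked
```

On a synthetic micrograph containing two bright 2x2 "particles", the two true positions are the only offsets whose score reaches the threshold, and both survive the distance check.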
Smoothed dissipative particle dynamics with angular momentum conservation
NASA Astrophysics Data System (ADS)
Müller, Kathrin; Fedosov, Dmitry A.; Gompper, Gerhard
2015-01-01
Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9] lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier-Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor-Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to capture the physical characteristics of the systems properly due to its violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.
Reinders, Jörn; Sonntag, Robert; Kretzer, Jan Philippe
2014-11-01
Polyethylene (PE) wear is known to be a limiting factor in total joint replacements. However, a standardized wear test (e.g. per ISO standard) can only replicate the complex in vivo loading conditions in simplified form. In this study, two parameters were analyzed. (a) Bovine serum, as a substitute for synovial fluid, is typically replaced every 500,000 cycles, whereas continuous regeneration takes place in vivo. How does the serum-replacement interval affect the wear rate of total knee replacements? (b) Patients with an artificial joint show reduced gait frequencies compared with standardized testing. What is the influence of a reduced frequency? Three knee wear tests were run: (a) a reference test (ISO), (b) testing with a shortened lubricant replacement interval, and (c) testing with reduced frequency. Wear behavior was determined from gravimetric measurements and wear particle analysis. The results showed that the reduced test frequency had only a small effect on wear behavior; testing at 1 Hz is therefore a valid method for wear testing. However, testing with a shortened replacement interval nearly doubled the wear rate. Wear particle analysis revealed only small differences in wear particle size between the tests, and wear particles were not released linearly within one replacement interval. The ISO standard should be revised to address the marked effect of the lubricant replacement interval on wear rate.
Explosive particle soil surface dispersion model for detonated military munitions.
Hathaway, John E; Rishel, Jeremy P; Walsh, Marianne E; Walsh, Michael R; Taylor, Susan
2015-07-01
The accumulation of high explosive mass residue from the detonation of military munitions on training ranges is of environmental concern because of its potential to contaminate the soil, surface water, and groundwater. The US Department of Defense wants to quantify, understand, and remediate high explosive mass residue loadings that might be observed on active firing ranges. Previously, efforts using various sampling methods and techniques have resulted in limited success, due in part to the complicated dispersion pattern of the explosive particle residues upon detonation. In our efforts to simulate particle dispersal for high- and low-order explosions on hypothetical firing ranges, we use experimental particle data from detonations of munitions from a 155-mm howitzer, which are common military munitions. The mass loadings resulting from these simulations provide a previously unattained level of detail to quantify the explosive residue source-term for use in soil and water transport models. In addition, the resulting particle placements can be used to test, validate, and optimize particle sampling methods and statistical models as applied to firing ranges. Although the presented results are for a hypothetical 155-mm howitzer firing range, the method can be used for other munition types once the explosive particle characteristics are known.
Calculation of four-particle harmonic-oscillator transformation brackets
NASA Astrophysics Data System (ADS)
Germanas, D.; Kalinauskas, R. K.; Mickevičius, S.
2010-02-01
A procedure for the precise calculation of the three- and four-particle harmonic-oscillator (HO) transformation brackets is presented. The analytical expressions for the four-particle HO transformation brackets are given. The computer code for the calculation of HO transformation brackets proves to be quick and efficient and produces results with small numerical uncertainties.
Program summary: Program title: HOTB. Catalogue identifier: AEFQ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFQ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1247. No. of bytes in distributed program, including test data, etc.: 6659. Distribution format: tar.gz. Programming language: FORTRAN 90. Computer: Any computer with a FORTRAN 90 compiler. Operating system: Windows, Linux, FreeBSD, Tru64 Unix. RAM: 8 MB. Classification: 17.17. Nature of problem: Calculation of the three- and four-particle harmonic-oscillator transformation brackets. Solution method: The method is based on the compact expressions for the three-particle harmonic-oscillator brackets presented in [1] and the expressions for the four-particle brackets presented in this paper. Restrictions: The three- and four-particle harmonic-oscillator transformation brackets up to e=28. Unusual features: Possibility of calculating the four-particle harmonic-oscillator transformation brackets. Running time: Less than one second for a single harmonic-oscillator transformation bracket. References: [1] G.P. Kamuntavičius, R.K. Kalinauskas, B.R. Barrett, S. Mickevičius, D. Germanas, Nuclear Physics A 695 (2001) 191.
Laser as a Tool to Study Radiation Effects in CMOS
NASA Astrophysics Data System (ADS)
Ajdari, Bahar
Energetic particles from cosmic-ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as device technology advances, and chip reliability has become more important than ever. Particle accelerator testing has been the standard method to characterize the sensitivity of chips to single event upsets (SEUs). However, because of its cost and availability limitations, other techniques have been explored. The pulsed laser has been a successful tool for characterizing SEU behavior, but to this day it has not been recognized as a method comparable to beam testing. In this thesis, I propose a methodology for correlating laser soft error rate (SER) with particle-beam data. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon, in which the close proximity of devices produces an observable "weakening effect" in the ON state.
An Investigation into Solution Verification for CFD-DEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fullmer, William D.; Musser, Jordan
This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically the National Energy Technology Laboratory's (NETL) open-source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, the study focused on determining whether the numerical method has a solution in the high-resolution limit, where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than that encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty.
By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be on the same order of magnitude as ensemble or time-averaging uncertainties. By testing different drag laws, almost all cases studied show that model-form uncertainty in this one very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly set to a constant of six particle diameters. A few exploratory tests showed similar convergence behavior for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question, which must be settled by comparison with experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
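The over-determined regression extrapolation described above, fitting grid-dependent results at four or more resolutions to obtain a "grid-free" value, can be sketched generically. The report does not give its exact fit form, so the sketch below assumes the common single-term model f(h) ~ f0 + a*h^p, scanning a small set of candidate orders p and solving the linear least-squares problem in (f0, a) in closed form for each.

```python
def extrapolate_grid_free(hs, fs, p_candidates=(0.5, 1.0, 1.5, 2.0)):
    """Regression extrapolation to the grid-free limit: fit f(h) ~ f0 + a*h**p
    to results fs obtained at grid sizes hs (four or more points recommended)
    and return (f0, a, p) for the candidate order with the smallest residual.
    """
    best = None
    n = len(hs)
    for p in p_candidates:
        xs = [h ** p for h in hs]
        sx, sy = sum(xs), sum(fs)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, fs))
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
        f0 = (sy - a * sx) / n                           # least-squares intercept
        res = sum((f0 + a * x - y) ** 2 for x, y in zip(xs, fs))
        if best is None or res < best[0]:
            best = (res, f0, a, p)
    return best[1], best[2], best[3]
```

With at least four resolutions the system is over-determined, so the residual itself indicates how well the assumed convergence model describes the data, which matters when the convergence is oscillatory as reported above.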
NASA Astrophysics Data System (ADS)
Wang, Jing; Tronville, Paolo
2014-06-01
The filtration of airborne nanoparticles is an important control technique as the environmental, health, and safety impacts of nanomaterials grow. A review of the literature shows that significant progress has been made on airborne nanoparticle filtration in the academic field in recent years. We summarize the filtration mechanisms of fibrous and membrane filters; the air flow resistance and filter-media figure of merit are discussed. Our review focuses on air filtration test methods and the instrumentation necessary to implement them; recent experimental studies are summarized accordingly. Two methods, using monodisperse and polydisperse challenge aerosols respectively, are discussed in detail. Our survey shows that commercial instruments are already available for generating large quantities of nanoparticles and for sizing and quantifying them accurately. Commercial self-contained filter test systems make measurement possible for particles down to 15 nm. Current international standards dealing with efficiency tests for filters and filter media focus on measurement of the minimum efficiency at the most penetrating particle size. The available knowledge and instruments provide a solid base for the development of test methods to determine the effectiveness of filtration media against airborne nanoparticles down to the single-digit nanometer range.
Particle or particulate matter is defined as any finely divided solid or liquid material, other than uncombined water, emitted to the ambient air as measured by applicable reference methods, or an equivalent or alternative method, or by a test method specified in 40 CFR 50.
MULTIPOLLUTANT METHODS - METHODS FOR OZONE AND OZONE PRECURSORS
This task involves the development and testing of methods for monitoring ozone and compounds associated with the atmospheric chemistry of ozone production both as precursors and reaction products. Although atmospheric gases are the primary interest, separation of gas and particl...
NASA Astrophysics Data System (ADS)
Tiecheng, Yan; Xingyuan, Zhang; Hongping, Yang
2018-03-01
This study presents an analytical comparison of the engineering characteristics of two-lime waste tire particle soil and a soil with a lime/loess ratio of 3:7, using density measurements, indoor consolidation tests, and direct shear tests to examine strength and deformation characteristics. It investigates the engineering performance of collapsible loess treated with waste tire particles and lime. The results indicate that (1) the shear strength of the two-lime waste tire particle soils increases continuously with soil age; and (2) the two-lime waste tire particle soils are lightweight, strong, low-deformation soils that can be applied primarily to improve foundation soil conditions in areas with collapsible loess. This could address the problem of used tire disposal while providing a new method for managing collapsible loess soils.
Iqbal, Zafar; Alsudir, Samar; Miah, Musharraf; Lai, Edward P C
2011-08-01
Hazardous compounds and bacteria in water have an adverse impact on human health and environmental ecology. Polydopamine (or polypyrrole)-coated magnetic nanoparticles and poly(methacrylic acid-co-ethylene glycol dimethacrylate) submicron particles were investigated for their fast binding kinetics with bisphenol A, proflavine, naphthalene acetic acid, and Escherichia coli. A new method was developed for the rapid determination of percent binding by sequential injection of particles first and compounds (or E. coli) next into a fused-silica capillary for overlap binding during electrophoretic migration. Only nanolitre volumes of compounds and particles were needed to complete a rapid binding test. After heterogeneous binding, separation of the compounds from the particles was achieved by capillary electrophoresis. Percent binding was influenced by the applied voltage but not by the current flow. In-capillary coating of particles affected the percent binding of compounds.
Zhu, Yanan; Ouyang, Qi; Mao, Youdong
2017-07-21
Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs can thus be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection, and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid selecting unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template, and demonstrates improved performance, objectivity, and accuracy. Application of this novel method is expected to relieve the manual labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Francois, Elizabeth Green; Morris, John S; Novak, Alan M
2010-01-01
Recent dynamic testing of Diaminoazoxyfurazan (DAAF) has focused on understanding the material properties affecting detonation propagation, spreading behavior, and symmetry. Small-scale gap testing and wedge testing focus on shock sensitivity, with the gap test including the effects of particle size and density. Floret testing investigates how detonation spreading is affected by particle size, density, and binder content. Polyrho testing illustrates the effects of density and binder content on detonation velocity. Finally, the detonation spreading effect is seen most dramatically in the Mushroom and Onionskin tests, where variations due to density gradients, pressing methods, and geometry can be seen in the wave breakout behavior.
Evaluation of Antibacterial Effects of Silver-Coated Stainless Steel Orthodontic Brackets
Arash, Valiollah; Keikhaee, Fatemeh; Rajabnia, Ramazan; Khafri, Soraya; Tavanafar, Saeid
2016-01-01
Objectives: White spots and enamel demineralization around orthodontic brackets are among the most important complications resulting from orthodontic treatments. Since the antibacterial properties of metals and metallic particles have been well documented, the aim of this study was to assess the antibacterial effect of stainless steel orthodontic brackets coated with silver (Ag) particles. Materials and Methods: In this study, 40 standard metal brackets were divided into two groups of 20 cases and 20 controls. The brackets in the case group were coated with Ag particles using an electroplating method. Atomic force microscopy and scanning electron microscopy were used to assess the adequacy of the coating process. In addition, antibacterial tests, i.e., disk diffusion and direct contact tests were performed at three, six, 24, and 48 hours, and 15 and 30 days using a Streptococcus mutans strain. The results were analyzed using Student’s t-test and repeated measures ANOVA. Results: Analyses via SEM and AFM confirmed that excellent coatings were obtained by using an electroplating method. The groups exhibited similar behavior when subjected to the disk diffusion test in the agar medium. However, the bacterial counts of the Ag-coated brackets were, in general, significantly lower (P<0.001) than those of their non-coated counterparts. Conclusions: Brackets coated with Ag, via an electroplating method, exhibited antibacterial properties when placed in direct contact with Streptococcus mutans. This antibacterial effect persisted for 30 days after contact with the bacteria. PMID:27536328
Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades
NASA Astrophysics Data System (ADS)
Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang
2017-12-01
This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy, and finite element analysis. A lumped-parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, giving the method both local and global search ability. A CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that the optimized arrangement can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework improves computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
Effectiveness of Cool Roof Coatings with Ceramic Particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brehob, Ellen G; Desjarlais, Andre Omer; Atchley, Jerald Allen
2011-01-01
Liquid-applied coatings promoted as cool roof coatings, including several with ceramic particles, were tested at Oak Ridge National Laboratory (ORNL), Oak Ridge, Tenn., to quantify their thermal performance. Solar reflectance measurements were made for new and aged samples using a portable reflectometer (ASTM C1549, Standard Test Method for Determination of Solar Reflectance Near Ambient Temperature Using a Portable Solar Reflectometer) and for new samples using the integrating spheres method (ASTM E903, Standard Test Method for Solar Absorptance, Reflectance, and Transmittance of Materials Using Integrating Spheres). Thermal emittance was measured for the new samples using a portable emissometer (ASTM C1371, Standard Test Method for Determination of Emittance of Materials Near Room Temperature Using Portable Emissometers). Thermal conductivity of the coatings was measured using a FOX 304 heat flow meter (ASTM C518, Standard Test Method for Steady-State Thermal Transmission Properties by Means of the Heat Flow Meter Apparatus). The cool roof coatings had higher solar reflectance than the reference black and white materials, but there were no significant differences among coatings with and without ceramics. The coatings were applied to EPDM (ethylene propylene diene monomer) membranes and installed on the Roof Thermal Research Apparatus (RTRA), an instrumented facility at ORNL for testing roofs. Roof temperatures and heat flux through the roof were obtained for a year of exposure in east Tennessee. The field tests showed a significant reduction in required cooling compared with the black reference roof (~80 percent) and a modest reduction compared with the white reference roof (~33 percent).
The coating material with the highest solar reflectivity (no ceramic particles) demonstrated the best overall thermal performance (reducing the cooling load cost without incurring a large heating penalty), suggesting that solar reflectivity is the key characteristic for selecting cool roof coatings.
40 CFR 53.42 - Generation of test atmospheres for wind tunnel tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Generation of test atmospheres for wind... Testing Performance Characteristics of Methods for PM10 § 53.42 Generation of test atmospheres for wind... particle delivery system shall consist of a blower system and a wind tunnel having a test section of...
40 CFR 53.42 - Generation of test atmospheres for wind tunnel tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Generation of test atmospheres for wind... Testing Performance Characteristics of Methods for PM10 § 53.42 Generation of test atmospheres for wind... particle delivery system shall consist of a blower system and a wind tunnel having a test section of...
Erosion tests of materials by energetic particle beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schechter, D.E.; Tsai, C.C.; Sluss, F.
1985-01-01
The internal components of magnetic fusion devices must withstand erosion from and the high heat flux of energetic plasma particles. The selection of materials for the construction of these components is important to minimize contamination of the plasma. In order to study various materials' comparative resistance to erosion by energetic particles and their ability to withstand high heat flux, water-cooled copper swirl tubes coated or armored with various materials were subjected to bombardment by hydrogen and helium particle beams. Materials tested were graphite, titanium carbide (TiC), chromium, nickel, copper, silver, gold, and aluminum. Details of the experimental arrangement and methods of application or attachment of the materials to the copper swirl tubes are presented. Results including survivability and mass losses are discussed.
Series cell light extinction monitor
Novick, Vincent J.
1990-01-01
A method and apparatus for using light extinction measurements from two or more light cells positioned along a gas-flow chamber, in which the gas volumetric rate is known, to determine the particle number concentration and mass concentration of an aerosol independent of the extinction coefficient, and to estimate particle size and mass concentration. The invention is independent of particle size. It has application to measurements made during a severe nuclear reactor fuel damage test.
Space radiation test model study. Report for 20 May 1985-20 February 1986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nightingale, R.W.; Chiu, Y.T.; Davidson, G.T.
1986-03-14
Dynamic models of the energetic populations in the outer radiation belts are being developed to better understand the extreme variations of particle flux in response to magnetospheric and solar activity. The study utilizes the SCATHA SC3 high-energy electron data, covering energies from 47 keV to 5 MeV with fine pitch-angle measurements (3 deg field of view) over the L-shell range of 5.3 to 8.7. Butterfly distributions in the dusk sector signify particle losses due to L-shell splitting of the particle-drift orbits and the subsequent scattering of the particles from the orbits by the magnetopause. To model the temporal variations and diffusion processes of the particle populations, the data were organized into phase-space distributions, binned according to altitude (L shell), energy, pitch angle, and time. These distributions can then be mapped to the equator and plotted for fixed first and second adiabatic invariants of the inherent particle motion. A new and efficient method for calculating the third adiabatic invariant, using a line integral of the relevant magnetic potential at the particle mirror points, has been developed and is undergoing testing. This method will provide a useful means of displaying the radial diffusion signatures of the outer radiation belts during more-active periods, when the L-shell parameter is not a good concept due to severe drift-shell splitting. The first phase of fitting the energetic-electron phase-space distributions with a combined radial and pitch-angle diffusion formulation is well underway. Bessel functions are being fit to the data in an eigenmode expansion method to determine the diffusion coefficients.
Mizumoto, Takao; Tamura, Tetsuya; Kawai, Hitoshi; Kajiyama, Atsushi; Itai, Shigeru
2008-04-01
In this study, the taste-masking of famotidine, which could apply to any fast-disintegrating tablet, was investigated using the spray-dry method. The target characteristics of taste-masked particles were set as follows: the dissolution rate is not to be more than 30% at 1 min and not less than 85% at 15 min, and the particle size is not to be more than 150 microm in diameter to avoid a gritty feeling in the mouth. The target dissolution profiles of spray-dried particles consisting of Aquacoat ECD30 and Eudragit NE30D or triacetin was accomplished by the screening of formulas and the appropriate lab-scale manufacturing conditions. Lab-scale testing produced taste-masked particles that met the formulation targets. On the pilot scale, spray-dried particles with attributes, such as dissolution rate and particle size, of the same quality were produced, and reproducibility was also confirmed. This confirmed that the spray-dry method produced the most appropriate taste-masked particles for fast-disintegrating dosage forms.
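The formulation targets stated above (dissolution not more than 30% at 1 min, not less than 85% at 15 min, and particle diameter not more than 150 μm) can be expressed as a simple acceptance check. This sketch is illustrative only; the function name and example batch values are hypothetical:

```python
def meets_taste_mask_targets(pct_dissolved_1min: float,
                             pct_dissolved_15min: float,
                             particle_diameter_um: float) -> bool:
    """Check the abstract's target criteria for taste-masked particles:
    - dissolution <= 30% at 1 min  (taste masking in the mouth)
    - dissolution >= 85% at 15 min (no loss of drug release)
    - diameter <= 150 micrometres  (no gritty feeling)
    """
    return (pct_dissolved_1min <= 30.0
            and pct_dissolved_15min >= 85.0
            and particle_diameter_um <= 150.0)

# Hypothetical lab-scale batch: 22% released at 1 min, 91% at 15 min, 120 um.
assert meets_taste_mask_targets(22.0, 91.0, 120.0)
# Too-fast early release fails the taste-masking criterion.
assert not meets_taste_mask_targets(45.0, 95.0, 120.0)
```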
NASA Astrophysics Data System (ADS)
Zhu, B.; Lin, J.; Yuan, X.; Li, Y.; Shen, C.
2016-12-01
The role of turbulent acceleration and heating in the fractal magnetic reconnection of solar flares is still unclear, especially at the X-point in the diffusion region. On the numerical-experiment side, classical magnetohydrodynamic methods can hardly quantify the vortex generation, turbulence evolution, and particle acceleration and heating that occur as magnetic islands coalesce in a fractal manner, form the largest plasmoid, and are ejected from the diffusion region. With the development of particle-based numerical methods (the particle-in-cell [PIC] and lattice Boltzmann [LBM] methods) and of high-performance computing over the last two decades, kinetic simulation has become an effective way to explore the role of magnetic- and electric-field turbulence in charged-particle acceleration and heating, since all the physical aspects relevant to turbulent reconnection are taken into account. In this paper, the LBM lattice DxQy grid and extended distribution are added to the charged-particles-to-grid interpolation of a PIC finite-difference time-domain scheme on a Yee grid, yielding a hybrid PIC-LBM simulation tool developed on TIANHE-2 to investigate turbulent acceleration. Actual solar coronal conditions (L≈10^5 km, B≈50-500 G, T≈5×10^6 K, n≈10^8-10^9, mi/me≈500-1836) are applied to study turbulent acceleration and heating in the fractal current sheet of a solar flare. At stage I, magnetic islands shrink due to magnetic tension forces; the shrinking halts when the kinetic energy of the accelerated particles is sufficient to stop further collapse, so the particle energy gain is naturally a large fraction of the released magnetic energy. At stages II and III, particles from the energized group move into the center of the diffusion region and stay there longer. In contrast, particles from the non-energized group only skim the outer part of the diffusion region. At stage IV, the reconnection-generated plasmoid (~200 km) stops expanding and carries enough energy to eject particles at constant velocity. Finally, the roles of magnetic-field and electric-field turbulence in electron and ion acceleration at the diffusion regions of the solar-flare fractal current sheet are given.
A Binary Segmentation Approach for Boxing Ribosome Particles in Cryo EM Micrographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adiga, Umesh P.S.; Malladi, Ravi; Baxter, William
Three-dimensional reconstruction of ribosome particles from electron micrographs requires selection of many single-particle images. Roughly 100,000 particles are required to achieve approximately 10 angstrom resolution. Manual selection of particles, by visual observation of the micrographs on a computer screen, is recognized as a bottleneck in automated single particle reconstruction. This paper describes an efficient approach for automated boxing of ribosome particles in micrographs. Use of a fast, anisotropic non-linear reaction-diffusion method to pre-process micrographs and rank-leveling to enhance the contrast between particles and the background, followed by binary and morphological segmentation, constitutes the core of this technique. Modifying the shape of the particles to facilitate segmentation of individual particles within clusters and boxing the isolated particles is successfully attempted. Tests on a limited number of micrographs have shown that over 80 percent success is achieved in automatic particle picking.
NASA Technical Reports Server (NTRS)
Parsons, David; Smith, Andrew; Knight, Brent; Hunt, Ron; LaVerde, Bruce; Craigmyle, Ben
2012-01-01
Particle dampers provide a mechanism for diverting energy away from resonant structural vibrations. This experimental study provides data from trials to determine how effective use of these dampers might be for equipment mounted to a curved orthogrid vehicle panel. Trends for damping are examined for variations in damper fill level, component mass, and excitation energy. A significant response reduction at the component level would suggest that comparatively small, thoughtfully placed, particle dampers might be advantageously used in vehicle design. The results of this test will be compared with baseline acoustic response tests and other follow-on testing involving a range of isolation and damping methods. Instrumentation consisting of accelerometers, microphones, and still photography data will be collected to correlate with the analytical results.
A New Optical Aerosol Spectrometer
NASA Technical Reports Server (NTRS)
Fonda, Mark; Malcolmson, Andrew; Bonin, Mike; Stratton, David; Rogers, C. Fred; Chang, Sherwood (Technical Monitor)
1998-01-01
An optical particle spectrometer capable of measuring aerosol particle size distributions from 0.02 to 100 micrometers has been developed. This instrument combines several optical methods in one in-situ configuration; it can provide continuous data collection covering the wide dynamic size ranges and concentrations found in studies of modeled planetary atmospheres as well as in terrestrial air quality research. Currently, the system is incorporated into an eight-liter spherical pressure vessel appropriate both for flow-through operation and for in-situ particle generation. The optical sizing methods include polarization-ratio, side-scattering, and forward-scattering detectors, with illumination from a fiber-coupled argon-ion laser. As particle sizes increase above 0.1 micrometer, a customized electronics and software system automatically shifts from polarization to diffraction-based measurements as the angular scattering detectors attain acceptable signal-to-noise ratios. The number concentration detection limits are estimated to be in the part-per-trillion (ppt by volume) range, or roughly 1000 submicron particles per cubic centimeter. Results have been obtained from static experiments using HFC-134a (an approved light-scattering gas standard), flow-through experiments using sodium chloride (NaCl) and carbon particles, and dynamic 'Tholin' experiments (photochemically produced particles from ultraviolet (UV)-irradiated acetylene and nitrogen). The optical spectrometer data have compared well with particle sizes determined by electron microscopy. The 'Tholin' tests provided real-time size and concentration data as the particles grew from about 30 nanometers to about 0.8 micrometers, with concentrations ranging from ppt to ppb by volume. Tests are still underway to better define sizing accuracy and concentration limits; these results will be reported.
A New Method to Test the Einstein’s Weak Equivalence Principle
NASA Astrophysics Data System (ADS)
Yu, Hai; Xi, Shao-Qiang; Wang, Fa-Yin
2018-06-01
Einstein's weak equivalence principle (WEP) is one of the foundational assumptions of general relativity and some other gravity theories. In the parametrized post-Newtonian (PPN) framework, the difference between the PPN parameters γ of different particles, or of the same type of particle at different energies, Δγ, represents a violation of the WEP. Current constraints on Δγ are derived from the observed time delay between correlated particles from astronomical sources. However, the observed time delay is contaminated by other effects, such as time delays due to different particle emission times, potential Lorentz invariance violation, and a nonzero photon rest mass. Therefore, current constraints are only upper limits. Here, we propose a new method to test the WEP based on the fact that the gravitational time delay is direction-dependent while the others are not. This is the first method that can naturally correct for the other time-delay effects. Using the time-delay measurements of the BATSE gamma-ray burst sample and the gravitational potential of the local supercluster Laniakea, we find that the constraint on Δγ for photons of different energies can be as low as 10^-14. In the future, if more gravitational wave events and fast radio bursts with much more precise time-delay measurements are observed, this method can give a reliable and tight constraint on the WEP.
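The gravitational delay underlying this class of test is the Shapiro delay. A rough point-mass sketch of how an observed arrival-time difference bounds Δγ; all numbers here are illustrative placeholders (a Milky-Way-like mass, not the Laniakea potential or BATSE data used in the paper):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m s^-1
M_SUN = 1.989e30     # solar mass, kg

def shapiro_delay(gamma: float, mass_kg: float, d_m: float, b_m: float) -> float:
    """Approximate Shapiro delay (s) for a photon passing a point mass:
    dt ~ (1 + gamma) * (G M / c^3) * ln(d / b),
    with source distance d and impact parameter b (d >> b assumed)."""
    return (1.0 + gamma) * G * mass_kg / C**3 * math.log(d_m / b_m)

# Hypothetical numbers: 6e11 solar masses, a distant source at ~1e25 m,
# impact parameter ~2.5e20 m (~8 kpc). The gravitational delay is huge
# compared with the observed inter-photon delay, which is what makes the
# bound on the PPN difference so tight.
dt_gra = shapiro_delay(1.0, 6e11 * M_SUN, 1e25, 2.5e20)
dt_obs = 1.0  # s, hypothetical observed delay between two photon energies
delta_gamma_bound = 2.0 * dt_obs / dt_gra
```

The direction dependence exploited by the paper enters through the impact parameter b: sources seen through the potential's center accumulate a larger ln(d/b) term than sources seen away from it, while the non-gravitational delay terms do not vary with direction.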
Imoto, Yukari; Yasutaka, Tetsuo; Someya, Masayuki; Higashino, Kazuo
2018-05-15
Soil leaching tests are commonly used to evaluate the leachability of hazardous materials, such as heavy metals, from soil. Batch leaching tests often enhance soil colloidal mobility and may require solid-liquid separation procedures to remove excess soil particles. However, batch leaching test results depend on particles that can pass through a 0.45 μm membrane filter and are influenced by test parameters such as centrifugal intensity and filtration volume per filter. To evaluate these parameters, we conducted batch leaching experiments using metal-contaminated soils, focusing on the centrifugal intensity and filtration volume per filter used in the solid-liquid separation methods currently employed in standard leaching tests. Our experiments showed that both centrifugal intensity and filtration volume per filter affected the reproducibility of batch leaching tests for some soil types. The results demonstrated that metal concentrations in the filtrates differed significantly with centrifugal intensity when it was 3000 g for 2 h or less. Increased filtration volume per filter led to significant decreases in filtrate metal concentrations when filter cakes formed during filtration. Comparison of filtration tests using 0.10 and 0.45 μm membrane filters showed statistically significant differences in turbidity and metal concentration. These findings suggest that colloidal particles were not adequately removed from the extract and contributed substantially to the apparent metal concentrations in leaching tests of soils containing colloidal metals.
NASA Astrophysics Data System (ADS)
Kulkarni, Sandip; Ramaswamy, Bharath; Horton, Emily; Gangapuram, Sruthi; Nacev, Alek; Depireux, Didier; Shimoji, Mika; Shapiro, Benjamin
2015-11-01
This article presents a method to investigate how magnetic particle characteristics affect their motion inside tissues under the influence of an applied magnetic field. Particles are placed on top of freshly excised tissue samples, a calibrated magnetic field is applied by a magnet underneath each tissue sample, and we image and quantify particle penetration depth by quantitative metrics to assess how particle sizes, their surface coatings, and tissue resistance affect particle motion. Using this method, we tested available fluorescent particles from Chemicell of four sizes (100 nm, 300 nm, 500 nm, and 1 μm diameter) with four different coatings (starch, chitosan, lipid, and PEG/P) and quantified their motion through freshly excised rat liver, kidney, and brain tissues. In broad terms, we found that the applied magnetic field moved chitosan particles most effectively through all three tissue types (as compared to starch, lipid, and PEG/P coated particles). However, the relationship between particle properties and their resulting motion was found to be complex. Hence, it will likely require substantial further study to elucidate the nuances of transport mechanisms and to select and engineer optimal particle properties to enable the most effective transport through various tissue types under applied magnetic fields.
Toushmalani, Reza
2013-01-01
The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originated from research on the movement behavior of bird flocks and fish schools. The second, the Levenberg-Marquardt algorithm (LM), is an approximation to Newton's method that is also used for training artificial neural networks. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. Importantly, the algorithm parameters are given for the individual tests. The inverse solutions reveal that the fault model parameters agree quite well with the known results, with better agreement between the predicted model anomaly and the observed gravity anomaly for the PSO method than for the LM method.
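The PSO heuristic referred to above updates each particle's velocity toward its own best-known position and the swarm's global best. A minimal sketch; the fault gravity forward model is replaced here by a stand-in quadratic misfit with a known minimum, so the objective function and all parameter values are illustrative only:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: f is the misfit to minimize,
    bounds is a list of (lo, hi) pairs, one per model parameter."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in misfit with a known minimum at (2, -3): a hypothetical placeholder
# for the gravity-anomaly residual of a fault model.
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 3.0) ** 2
best, best_val = pso_minimize(misfit, [(-10, 10), (-10, 10)])
```

In an actual inversion, `misfit` would be the sum of squared residuals between the observed gravity anomaly and the anomaly predicted by the fault forward model for a candidate parameter vector.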
Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.
2018-02-16
In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
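The weighting scheme described above can be sketched in a few lines: a strong source adds a new particle whose weight is the volume of water added, a weak source spreads the added solute mass over the particles already in the cell, and a sink scales particle weights down in proportion to the volume of water removed. The names and the cell representation are illustrative only, not MODFLOW-GWT code:

```python
def apply_source(particles, source_volume, source_conc, strong=True):
    """Add solute mass from a fluid source to a cell's particles.

    particles: list of [volume_weight, concentration] pairs in the cell.
    strong=True  -> represent the added water as a new weighted particle
    strong=False -> distribute the added mass over existing particles
    """
    added_mass = source_volume * source_conc
    if strong or not particles:
        particles.append([source_volume, source_conc])
    else:
        total_vol = sum(v for v, _ in particles)
        for p in particles:
            # each particle keeps its volume weight but gains its mass share
            p[1] += added_mass / total_vol
    return particles

def apply_sink(particles, sink_volume):
    """Remove water at a sink by scaling particle weights in proportion to
    the volume removed, instead of deleting whole particles."""
    total_vol = sum(v for v, _ in particles)
    frac = 1.0 - sink_volume / total_vol
    for p in particles:
        p[0] *= frac
    return particles

def total_mass(particles):
    return sum(v * c for v, c in particles)

# Hypothetical cell with two particles (volumes 1 and 2, mass 4.0 total):
cell = [[1.0, 2.0], [2.0, 1.0]]
apply_source(cell, 0.5, 3.0, strong=True)  # adds 1.5 mass units
apply_sink(cell, 0.35)                     # removes 10% of the 3.5 vol units
```

The mass bookkeeping is exact: the source adds `source_volume * source_conc` units of mass, and the sink removes mass in exact proportion to the water it withdraws, which is the property that lets the method conserve mass locally and globally.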
NASA Astrophysics Data System (ADS)
Meskhidze, N.; Royalty, T. M.; Phillips, B.; Dawson, K. W.; Petters, M. D.; Reed, R.; Weinstein, J.; Hook, D.; Wiener, R.
2017-12-01
The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here we present the design, testing, and analysis of data collected through the first instrument capable of measuring hygroscopicity-based, size-resolved particle fluxes using a continuous-flow Hygroscopicity-Resolved Relaxed Eddy Accumulation (Hy-Res REA) technique. The different components of the instrument were extensively tested inside the US Environmental Protection Agency's Aerosol Test Facility for sea-salt and ammonium sulfate particle fluxes. The new REA system design does not require particle accumulation and therefore avoids the diffusional wall losses associated with long particle residence times inside the air collectors of traditional REA devices. The Hy-Res REA system used in this study includes a 3-D sonic anemometer, two fast-response solenoid valves, two Condensation Particle Counters (CPCs), a Scanning Mobility Particle Sizer (SMPS), and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA). A linear relationship was found between the sea-salt particle fluxes measured by the eddy covariance and REA techniques, with comparable theoretical (0.34) and measured (0.39) proportionality constants. The sea-salt particle detection limit of the Hy-Res REA flux system is estimated to be 6×10^5 m^-2 s^-1. For conditions of ammonium sulfate and sea-salt particles of comparable source strength and location, the continuous-flow Hy-Res REA instrument achieved better than 90% accuracy in measuring the sea-salt particle fluxes. In principle, the instrument can be applied to measure fluxes of particles of variable size and distinct hygroscopic properties (e.g., mineral dust, black carbon, etc.).
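The proportionality constants quoted above enter through the standard relaxed eddy accumulation relation F = b · σw · (C_up − C_down). A sketch with hypothetical inputs chosen to land near the stated detection limit:

```python
def rea_flux(b: float, sigma_w: float, c_up: float, c_down: float) -> float:
    """Relaxed eddy accumulation flux, F = b * sigma_w * (C_up - C_down).

    b:       empirical proportionality constant (the study reports ~0.34
             theoretical vs. ~0.39 measured for sea-salt particles)
    sigma_w: standard deviation of vertical wind speed, m/s
    c_up:    mean particle concentration in updraft samples, m^-3
    c_down:  mean particle concentration in downdraft samples, m^-3
    Returns an upward-positive number flux in m^-2 s^-1.
    """
    return b * sigma_w * (c_up - c_down)

# Hypothetical values: sigma_w = 0.3 m/s and a 6e6 m^-3 updraft/downdraft
# concentration difference give a flux near the stated 6e5 detection limit.
f = rea_flux(0.34, 0.3, 6.0e6, 0.0)
```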
NASA Astrophysics Data System (ADS)
Estapa, Meg; Durkin, Colleen; Buesseler, Ken; Johnson, Rod; Feen, Melanie
2017-02-01
Our mechanistic understanding of the processes controlling the ocean's biological pump is limited, in part, by our lack of observational data at appropriate timescales. The "optical sediment trap" (OST) technique utilizes a transmissometer on a quasi-Lagrangian platform to collect sedimenting particles. This method could help fill the observational gap by providing autonomous measurements of particulate carbon (PC) flux in the upper mesopelagic ocean at high spatiotemporal resolution. Here, we used a combination of field measurements and laboratory experiments to test hydrodynamic and zooplankton-swimmer effects on the OST method, and we quantitatively calibrated this method against PC flux measured directly in same-platform, neutrally buoyant sediment traps (NBSTs) during 5 monthly cruises at the Bermuda Atlantic Time-series Study (BATS) site. We found a well-correlated, positive relationship (R^2 = 0.66, n = 15) between the OST proxy and the PC flux measured directly using NBSTs. Laboratory tests showed that scattering of light from multiple particles between the source and detector was unlikely to affect OST proxy results. We found that the carbon-specific attenuance of sinking particles was larger than literature values for smaller, suspended particles in the ocean, and consistent with the variable carbon:size relationships reported in the literature for sinking particles. We also found evidence for variability in PC flux at high spatiotemporal resolution. Our results are consistent with the literature on particle carbon content and optical properties in the ocean, and support more widespread use of the OST proxy, with proper site-specific and platform-specific calibration, to better understand variability in the ocean biological pump.
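The platform-specific calibration advocated above amounts to fitting a line between the OST attenuance proxy and the trap-measured PC flux and checking the coefficient of determination. A minimal ordinary-least-squares sketch; the paired data below are hypothetical, not the paper's BATS measurements:

```python
def linfit(x, y):
    """Ordinary least-squares fit y = a*x + b, returning (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical paired measurements: OST attenuance proxy vs. NBST PC flux.
ost = [0.5, 1.0, 1.5, 2.0, 2.5]
pc  = [1.1, 1.9, 3.2, 3.9, 5.1]
slope, intercept, r2 = linfit(ost, pc)
```

Once calibrated, the fitted slope and intercept convert autonomous transmissometer attenuance into a PC flux estimate at far higher temporal resolution than the traps themselves provide.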
Recent advances in testing of microsphere drug delivery systems.
Andhariya, Janki V; Burgess, Diane J
2016-01-01
This review discusses advances in the field of microsphere testing. In vitro release-testing methods such as sample and separate, dialysis membrane sacs and USP apparatus IV have been used for microspheres. Based on comparisons of these methods, USP apparatus IV is currently the method of choice. Accelerated in vitro release tests have been developed to shorten the testing time for quality control purposes. In vitro-in vivo correlations using real-time and accelerated release data have been developed, to minimize the need to conduct in vivo performance evaluation. Storage stability studies have been conducted to investigate the influence of various environmental factors on microsphere quality throughout the product shelf life. New tests such as the floating test and the in vitro wash-off test have been developed along with advancement in characterization techniques for other physico-chemical parameters such as particle size, drug content, and thermal properties. Although significant developments have been made in microsphere release testing, there is still a lack of guidance in this area. Microsphere storage stability studies should be extended to include microspheres containing large molecules. An agreement needs to be reached on the use of particle sizing techniques to avoid inconsistent data. An approach needs to be developed to determine total moisture content of microspheres.
NASA Astrophysics Data System (ADS)
Saari, Sampo; Karjalainen, Panu; Ntziachristos, Leonidas; Pirjola, Liisa; Matilainen, Pekka; Keskinen, Jorma; Rönkkö, Topi
2016-02-01
Particle and NOx emissions of an SCR-equipped HDD truck were studied in real-world driving conditions using the "Sniffer" mobile laboratory. Real-time CO2 measurement enables emission factor calculation for NOx and particles. In this study, we compared three different emission factor calculation methods and characterised their suitability for real-world chasing experiments. The particle number emission was bimodal and dominated by nucleation mode particles (diameter below 23 nm), with an emission factor up to 1 × 10¹⁵ #/kg fuel, whereas the emission factor for soot (diameter above 23 nm, consistent with the PMP standard) was typically 1 × 10¹⁴ #/kg fuel. The effect of a thermodenuder on the exhaust particles indicated that the nucleation particles consisted mainly of volatile compounds, but sometimes a non-volatile core also existed. The nucleation mode particles are not controlled by current regulations in Europe. However, these particles consistently form under atmospheric dilution in the plume of the truck and constitute a health risk for the exposed human population. Average NOx emission was 3.55 g/kWh during the test, whereas the Euro IV emission limit over transient testing is 3.5 g NOx/kWh. The on-road emission performance of the vehicle was very close to the expected levels, confirming the successful operation of the SCR system of the tested vehicle. Heavy driving conditions such as uphill driving increased both the NOx and particle number emission factors, whereas the emission factor for soot particle number remained rather constant.
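Fuel-specific emission factors of this kind are commonly derived from plume measurements by a carbon-balance calculation: the excess particle count is ratioed to the excess CO2 and scaled by the CO2 produced per kilogram of fuel. A hedged sketch of that arithmetic (the fuel carbon fraction and air number density are assumed typical values, not figures from the paper):

```python
# Carbon-balance emission factor sketch. ASSUMED values: typical diesel
# fuel carbon fraction (0.86) and air number density at ~25 degC, 1 atm.
AVOGADRO = 6.022e23          # molecules/mol
M_C = 12.011                 # g/mol, carbon
N_AIR = 2.46e19              # air molecules per cm^3 (assumption)

def particle_emission_factor(dn_particles, dco2_ppm, fuel_c_fraction=0.86):
    """Particles emitted per kg of fuel burned, from plume excesses.

    dn_particles : excess particle concentration over background [#/cm^3]
    dco2_ppm     : excess CO2 mixing ratio over background [ppm]
    Assumes all fuel carbon leaves the tailpipe as CO2 (carbon balance).
    """
    dco2_molecules = dco2_ppm * 1e-6 * N_AIR                 # CO2 molecules/cm^3
    co2_molecules_per_kg_fuel = 1000.0 * fuel_c_fraction / M_C * AVOGADRO
    return dn_particles / dco2_molecules * co2_molecules_per_kg_fuel
```

With a 1 × 10⁵ #/cm³ particle excess against a 100 ppm CO2 excess, this yields an emission factor on the order of 10¹⁵ #/kg fuel, the magnitude reported for the nucleation mode.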
Evaluation of Antibacterial Effects of Silver-Coated Stainless Steel Orthodontic Brackets.
Arash, Valiollah; Keikhaee, Fatemeh; Rabiee, Sayed Mahmood; Rajabnia, Ramazan; Khafri, Soraya; Tavanafar, Saeid
2016-01-01
White spots and enamel demineralization around orthodontic brackets are among the most important complications resulting from orthodontic treatments. Since the antibacterial properties of metals and metallic particles have been well documented, the aim of this study was to assess the antibacterial effect of stainless steel orthodontic brackets coated with silver (Ag) particles. In this study, 40 standard metal brackets were divided into two groups of 20 cases and 20 controls. The brackets in the case group were coated with Ag particles using an electroplating method. Atomic force microscopy and scanning electron microscopy were used to assess the adequacy of the coating process. In addition, antibacterial tests, i.e., disk diffusion and direct contact tests were performed at three, six, 24, and 48 hours, and 15 and 30 days using a Streptococcus mutans strain. The results were analyzed using Student's t-test and repeated measures ANOVA. Analyses via SEM and AFM confirmed that excellent coatings were obtained by using an electroplating method. The groups exhibited similar behavior when subjected to the disk diffusion test in the agar medium. However, the bacterial counts of the Ag-coated brackets were, in general, significantly lower (P<0.001) than those of their non-coated counterparts. Brackets coated with Ag, via an electroplating method, exhibited antibacterial properties when placed in direct contact with Streptococcus mutans. This antibacterial effect persisted for 30 days after contact with the bacteria.
Automation of aggregate characterization using laser profiling and digital image analysis
NASA Astrophysics Data System (ADS)
Kim, Hyoungkwan
2002-08-01
Particle morphological properties such as size, shape, angularity, and texture are key properties that are frequently used to characterize aggregates. The characteristics of aggregates are crucial to the strength, durability, and serviceability of the structure in which they are used. Thus, it is important to select aggregates that have proper characteristics for each specific application. Use of improper aggregate can cause rapid deterioration or even failure of the structure. The current standard aggregate test methods are generally labor-intensive, time-consuming, and subject to human errors. Moreover, important properties of aggregates may not be captured by the standard methods due to a lack of an objective way of quantifying critical aggregate properties. Increased quality expectations of products along with recent technological advances in information technology are motivating new developments to provide fast and accurate aggregate characterization. The resulting information can enable a real time quality control of aggregate production as well as lead to better design and construction methods of portland cement concrete and hot mix asphalt. This dissertation presents a system to measure various morphological characteristics of construction aggregates effectively. Automatic measurement of various particle properties is of great interest because it has the potential to solve such problems in manual measurements as subjectivity, labor intensity, and slow speed. The main efforts of this research are placed on three-dimensional (3D) laser profiling, particle segmentation algorithms, particle measurement algorithms, and generalized particle descriptors. First, true 3D data of aggregate particles obtained by laser profiling are transformed into digital images. 
Second, a segmentation algorithm and a particle measurement algorithm are developed to separate particles and process each particle data individually with the aid of various kinds of digital image technologies. Finally, in order to provide a generalized, quantitative, and representative way to characterize aggregate particles, 3D particle descriptors are developed using the multi-resolution analysis feature of wavelet transforms. Verification tests show that this approach could characterize various aggregate properties in a fast, accurate, and reliable way. When implemented, this ability to automatically analyze multiple characteristics of an aggregate sample is expected to provide not only economic but also intangible strategic gains.
A Lagrangian particle method with remeshing for tracer transport on the sphere
Bosler, Peter Andrew; Kent, James; Krasny, Robert; ...
2017-03-30
A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin–Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.
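The "direct tracer interpolation" remeshing option can be illustrated in one dimension: tracer values carried by disordered particles are resampled onto a fresh, uniformly spaced particle set. A minimal sketch (the actual method operates on triangular panels on the sphere; this 1D linear interpolation is only an analogy):

```python
import numpy as np

def remesh_direct(x_particles, tracer, x_uniform):
    """Direct tracer interpolation remeshing, 1D analogy: resample the
    tracer carried by disordered particles onto a fresh uniform set."""
    order = np.argsort(x_particles)   # np.interp requires sorted abscissae
    return np.interp(x_uniform, x_particles[order], tracer[order])
```

After the resample, time integration continues with the uniform particles, so particle disorder does not accumulate; the trade-off is the interpolation error introduced at each remesh.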
Comparison of Machine Learning methods for incipient motion in gravel bed rivers
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos
2013-04-01
Soil erosion and sediment transport in natural gravel bed streams are important processes that affect both the morphology and the ecology of earth's surface. For gravel bed rivers at near-incipient flow conditions, particle entrainment dynamics are highly intermittent. This contribution reviews the use of modern Machine Learning (ML) methods for short-term prediction of entrainment instances of individual grains exposed in fully developed near-boundary turbulent flows. Results obtained by network architectures of variable complexity based on two different ML methods, namely the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are compared in terms of different error and performance indices, computational efficiency and complexity, as well as predictive accuracy and forecast ability. Different model architectures are trained and tested with experimental time series obtained from mobile particle flume experiments. The experimental setup consists of a Laser Doppler Velocimeter (LDV) and a laser optics system, which synchronously acquire data on the instantaneous flow and the particle response, respectively. The former records the flow velocity components directly upstream of the test particle, while the latter tracks the particle's displacements. The lengthy experimental data sets (millions of data points) are split into training and validation subsets used for learning and testing of the models. It is demonstrated that the ANFIS hybrid model, which is based on neural learning and fuzzy inference principles, better predicts the critical flow conditions above which sediment transport is initiated. In addition, it is illustrated that empirical knowledge can be extracted, validating the theoretical assumption that particle ejections occur due to energetic turbulent flow events.
Such a tool may find application in management and regulation of stream flows downstream of dams for stream restoration, implementation of sustainable practices in river and estuarine ecosystems and design of stable river bed and banks.
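The validated hypothesis that grains are ejected by sufficiently energetic and sustained turbulent events can be expressed as a simple detector on the velocity time series. A hypothetical sketch (the critical power `p_crit` and minimum duration `t_min` are illustrative parameters, not values from the experiments):

```python
import numpy as np

def energetic_events(u, p_crit, t_min, dt):
    """Detect 'energetic events' in a streamwise velocity record u:
    runs where the instantaneous flow power (~ u**3) stays above p_crit
    for at least t_min seconds. Returns (start, end) sample indices."""
    above = u**3 > p_crit
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                          # event candidate opens
        elif not flag and start is not None:
            if (i - start) * dt >= t_min:      # sustained long enough
                events.append((start, i))
            start = None
    if start is not None and (len(above) - start) * dt >= t_min:
        events.append((start, len(above)))     # event runs to end of record
    return events
```

A detector like this provides the labels against which the ANN and ANFIS predictions of imminent entrainment can be scored.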
NASA Astrophysics Data System (ADS)
Liu, D.; Fu, X.; Liu, X.
2016-12-01
In nature, granular materials exist widely in water bodies. Understanding the fundamentals of solid-liquid two-phase flow, such as turbulent sediment-laden flow, is important for a wide range of applications. A coupling method combining computational fluid dynamics (CFD) and the discrete element method (DEM) is now widely used for modeling such flows. In this method, when particles are significantly larger than the CFD cells, the fluid field around each particle should be fully resolved. On the other hand, the "unresolved" model is designed for the situation where particles are significantly smaller than the mesh cells; with the "unresolved" model, large numbers of particles can be simulated simultaneously. However, there is a gap between these two situations when the DEM particle size and the CFD cell size are of the same order of magnitude. In this work, the most commonly used void fraction models are tested with numerical sedimentation experiments, and the range of applicability of each model is presented. Based on this, a new void fraction model, a modified version of the "tri-linear" model, is proposed. Particular attention is paid to smoothing the void fraction function in order to avoid numerical instability. The results show good agreement with the experimental data and analytical solutions for both single-particle and group-particle motion, indicating great potential for the new void fraction model.
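The role of the smoothing function can be illustrated with a 1D void-fraction sketch in which each particle's volume is spread over a finite width with a cosine kernel, so the cell void fraction varies continuously as particles cross cell faces. This is an illustrative stand-in, not the paper's modified tri-linear model:

```python
import numpy as np

def void_fraction_1d(x_particles, v_particle, edges, h):
    """Void fraction per 1D cell. Each particle's volume v_particle is
    spread over the cell centers within a smoothing width h using a
    cosine kernel, so void fraction changes smoothly as particles move."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    solid = np.zeros(len(centers))
    for xp in x_particles:
        d = np.abs(centers - xp)
        w = np.where(d < h, 0.5 * (1.0 + np.cos(np.pi * d / h)), 0.0)
        if w.sum() > 0.0:
            solid += v_particle * w / w.sum()   # normalization conserves volume
    return 1.0 - solid / dx                     # 1D cell "volume" is dx
```

Because the kernel weights are renormalized per particle, total solid volume is conserved exactly, while the smooth profile avoids the step changes that destabilize the CFD pressure solve.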
Direct numerical simulation of particulate flows with an overset grid method
NASA Astrophysics Data System (ADS)
Koblitz, A. R.; Lovett, S.; Nikiforakis, N.; Henshaw, W. D.
2017-08-01
We evaluate an efficient overset grid method for two-dimensional and three-dimensional particulate flows for small numbers of particles at finite Reynolds number. The rigid particles are discretised using moving overset grids overlaid on a Cartesian background grid. This allows for strongly-enforced boundary conditions and local grid refinement at particle surfaces, thereby accurately capturing the viscous boundary layer at modest computational cost. The incompressible Navier-Stokes equations are solved with a fractional-step scheme which is second-order-accurate in space and time, while the fluid-solid coupling is achieved with a partitioned approach including multiple sub-iterations to increase stability for light, rigid bodies. Through a series of benchmark studies we demonstrate the accuracy and efficiency of this approach compared to other boundary conformal and static grid methods in the literature. In particular, we find that fully resolving boundary layers at particle surfaces is crucial to obtain accurate solutions to many common test cases. With our approach we are able to compute accurate solutions using as little as one third the number of grid points as uniform grid computations in the literature. A detailed convergence study shows a 13-fold decrease in CPU time over a uniform grid test case whilst maintaining comparable solution accuracy.
Restrepo, John F; Garcia-Sucerquia, Jorge
2012-02-15
We present an automatic procedure for 3D tracking of micrometer-sized particles with high-NA digital lensless holographic microscopy. The method uses a two-feature approach to search for the best focal planes and to distinguish particles from artifacts or other elements in the reconstructed hologram stream. A set of reconstructed images is axially projected onto a single image, and from the projected image the centers of mass of all reconstructed elements are identified. Starting from the centers of mass, the morphology of the maximum-intensity profile along the reconstruction direction distinguishes particles from other elements. The method is tested with modeled holograms and applied to automatically track micrometer-sized bubbles in a 4 mm³ sample of soda.
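The projection-plus-axial-profile idea can be sketched for a single particle: project the reconstructed volume axially, take the center of mass of the thresholded projection, and locate the focal plane at the peak of the axial intensity profile through that center. A simplified single-particle sketch (multi-particle labeling and the two-feature discrimination step are omitted):

```python
import numpy as np

def track_particle(stack, thresh):
    """Locate one particle in a reconstructed volume stack (nz, ny, nx).
    Axial max-projection -> center of mass of the thresholded blob ->
    focal depth at the peak of the axial intensity profile."""
    proj = stack.max(axis=0)                   # axial projection
    ys, xs = np.nonzero(proj > thresh)         # thresholded blob pixels
    cy, cx = ys.mean(), xs.mean()              # center of mass (binary weights)
    z = int(stack[:, int(round(cy)), int(round(cx))].argmax())
    return cy, cx, z
```

In the full procedure the shape of this axial profile (sharply peaked for real particles, flat or multi-peaked for artifacts) is what rejects spurious detections.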
Measuring the light scattering and orientation of a spheroidal particle using in-line holography.
Seo, Kyung Won; Byeon, Hyeok Jun; Lee, Sang Joon
2014-07-01
The light scattering properties of a horizontally and vertically oriented spheroidal particle under laser illumination are experimentally investigated using digital in-line holography. The reconstructed wave field shows the bright singular points as a result of the condensed beam formed by a transparent spheroidal particle acting as a lens. The in-plane (θ) and out-of-plane (ϕ) rotating angles of an arbitrarily oriented spheroidal particle are measured by using these scattering properties. As a feasibility test, the 3D orientation of a transparent spheroidal particle suspended in a microscale pipe flow is successfully reconstructed by adapting the proposed method.
Tailored Core Shell Cathode Powders for Solid Oxide Fuel Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swartz, Scott
2015-03-23
In this Phase I SBIR project, a “core-shell” composite cathode approach was evaluated for improving SOFC performance and reducing degradation of lanthanum strontium cobalt ferrite (LSCF) cathode materials, following previous successful demonstrations of infiltration approaches for achieving the same goals. The intent was to establish core-shell cathode powders that enabled high performance to be obtained with “drop-in” process capability for SOFC manufacturing (i.e., rather than adding an infiltration step to the SOFC manufacturing process). Milling, precipitation and hetero-coagulation methods were evaluated for making core-shell composite cathode powders comprised of coarse LSCF “core” particles and nanoscale “shell” particles of lanthanum strontium manganite (LSM) or praseodymium strontium manganite (PSM). Precipitation and hetero-coagulation methods were successful for obtaining the targeted core-shell morphology, although perfect coverage of the LSCF core particles by the LSM and PSM particles was not obtained. Electrochemical characterization of core-shell cathode powders and conventional (baseline) cathode powders was performed via electrochemical impedance spectroscopy (EIS) half-cell measurements and single-cell SOFC testing. Reliable EIS testing methods were established, which enabled comparative area-specific resistance measurements to be obtained. A single-cell SOFC testing approach also was established that enabled cathode resistance to be separated from overall cell resistance, and for cathode degradation to be separated from overall cell degradation. The results of these EIS and SOFC tests conclusively determined that the core-shell cathode powders resulted in significant lowering of performance, compared to the baseline cathodes. Based on the results of this project, it was concluded that the core-shell cathode approach did not warrant further investigation.
Novel Method of Aluminum to Copper Bonding by Cold Spray
NASA Astrophysics Data System (ADS)
Fu, Si-Lin; Li, Cheng-Xin; Wei, Ying-Kang; Luo, Xiao-Tao; Yang, Guan-Jun; Li, Chang-Jiu; Li, Jing-Long
2018-04-01
Cold spray bonding (CSB) has been proposed as a new method for joining aluminum and copper. At high speeds, solid Al particles impacted the groove between the two substrates to form a bond between Al and Cu. Compared to traditional welding technologies, CSB does not form distinct intermetallic compounds. Large stainless steel particles were introduced into the spray powders as in situ shot peen particles to create a dense Al deposit and to improve the bond strength of joints. It was discovered that introducing shot peen particles significantly improved the flattening ratio of the deposited Al particles. Increasing the proportion of shot peen particles from 0 to 70 vol.% decreased the porosity of the deposits from 12.4 to 0.2%, while the shear strength of joints significantly increased. The tensile test results of the Al-Cu joints demonstrated that cracks were initiated at the interface between the Al and the deposit. The average tensile strength was 71.4 MPa and could reach 81% of the tensile strength of pure Al.
Basic problems and new potentials in monitoring sediment transport using Japanese pipe type geophone
NASA Astrophysics Data System (ADS)
Sakajo, Saiichi
2016-04-01
The authors conducted extensive series of sediment-transport monitoring with a pipe-type geophone in a model hydraulic channel, varying the gradient and water discharge and using particles from 2 to 21 mm in diameter. For the case of casting soil particles one by one into the channel, 1,000 test cases were conducted; for the case of casting all the soil at once, 100 test cases were conducted. All test results were analyzed by the conventional method, with visual verification from video recordings. Several important basic problems were then found in estimating volumes and particle size distributions with the conventional method, problems not identified in past studies because those studies did not consider the types of collision between sediment particles and the pipe. Based on these experiments, the authors first incorporated this idea into the old formula for estimating the amount of sediment transport. Two factors were explicitly considered in the formula: 1) the rate of sensing in a single collision, and 2) the ratio of collided particles to all cast soil particles. The parameters of these factors could be determined from the experimental results, and the resulting formula was found to estimate the grain size distribution. This paper presents the prototype formula for estimating both the volume and the size distribution of transported sediment. Another finding of this study is the proposal of the single collision as a river index for characterizing sediment transport, which could support risk ranking of sediment transport in rivers and of mudflow in mountainous rivers. Furthermore, the paper explains how the sensitivity of the pipe geophone to smaller sediment particles, which it has so far been unable to detect, can be improved.
NASA Astrophysics Data System (ADS)
V. R., Arun prakash; Rajadurai, A.
2016-10-01
In the present work, a hybrid polymer (epoxy) matrix composite was strengthened with surface-modified E-glass fiber and iron(III) oxide particles of varying size. Particles of 200 nm and <100 nm were prepared by high-energy ball milling and sol-gel methods, respectively. To promote better dispersion of the particles and improve the adhesion of fibers and fillers to the epoxy matrix, both fiber and filler were surface-modified with the amino-functional silane 3-aminopropyltrimethoxysilane (APTMS). The crystalline phases and functional groups of the silanized iron(III) oxide particles were characterized by XRD and FTIR spectroscopy. A fixed quantity (15 vol%) of surface-treated E-glass fiber was laid up along with 0.5 or 1.0 vol% of iron(III) oxide particles in the matrix to fabricate hybrid composites, which were cured with the aliphatic hardener triethylenetetramine (TETA). The effectiveness of adding surface-modified particles and fibers to the resin matrix was assessed by mechanical testing: tensile, flexural, impact, interlaminar shear strength, and hardness tests. The thermal behavior of the composites was evaluated by TGA, DSC, and thermal conductivity (Lee's disc) measurements. Scanning electron microscopy was employed to determine the shape and size of the iron(III) oxide particles and the quality of fiber adhesion to the epoxy matrix. Good dispersion of the fillers in the matrix was achieved with the surface modifier APTMS. The tensile, flexural, impact, and interlaminar shear strengths of the composites improved with the reinforcement of surface-modified fiber and filler. The thermal stability of the epoxy resin improved when surface-modified fiber was reinforced along with hard hematite particles, and the thermal conductivity of the epoxy increased with increasing hematite content.
Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions
NASA Technical Reports Server (NTRS)
Mcguire, Stephen C.
1987-01-01
The use of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon is described. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
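The composite-unit-vector test for azimuthal asymmetry is closely related to the Rayleigh test: sum unit vectors over the emission angles and compare the resultant length to the expectation for a uniform random distribution. A minimal sketch (the large-n p-value approximation is the standard one, not specific to this work):

```python
import math

def rayleigh_test(angles):
    """Mean resultant length R of unit vectors summed over emission
    angles (radians), plus the large-n approximation of the Rayleigh
    p-value. A small p flags non-random azimuthal structure."""
    n = len(angles)
    c = sum(math.cos(a) for a in angles)
    s = sum(math.sin(a) for a in angles)
    r = math.sqrt(c * c + s * s) / n     # mean resultant length
    p = math.exp(-n * r * r)             # p-value, valid for large n
    return r, p
```

For a single high-multiplicity event, a small p-value indicates azimuthal clustering of the emitted secondaries beyond statistical fluctuation.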
Efficiency tests of samplers for microbiological aerosols, a review
NASA Technical Reports Server (NTRS)
Henningson, E.; Faengmark, I.
1984-01-01
To obtain comparable results from studies using a variety of samplers of microbiological aerosols with different collection performances for various particle sizes, methods reported in the literature for testing sampler efficiency were surveyed, evaluated, and tabulated. It is concluded that these samplers were not thoroughly tested using reliable methods. Tests were conducted in static air chambers and in various outdoor and work environments; such results are not reliable, as it is difficult to achieve stable and reproducible conditions in these test systems. Testing in a wind tunnel is recommended.
Lowers, Heather; Breit, George N.; Strand, Matthew; Pillers, Renee M.; Meeker, Gregory P.; Todorov, Todor I.; Plumlee, Geoffrey S.; Wolf, Ruth E.; Robinson, Maura; Parr, Jane; Miller, Robert J.; Groshong, Steve; Green, Francis; Rose, Cecile
2018-01-01
Humans accumulate large numbers of inorganic particles in their lungs over a lifetime. Whether this causes or contributes to debilitating disease over a normal lifespan depends on the type and concentration of the particles. We developed and tested a protocol for in situ characterization of the types and distribution of inorganic particles in biopsied lung tissue from three human groups using field emission scanning electron microscopy (FE-SEM) combined with energy dispersive spectroscopy (EDS). Many distinct particle types were recognized among the 13 000 particles analyzed. Silica, feldspars, clays, titanium dioxides, iron oxides and phosphates were the most common constituents in all samples. Particles were classified into three general groups: endogenous, which form naturally in the body; exogenic particles, natural earth materials; and anthropogenic particles, attributed to industrial sources. These in situ results were compared with those using conventional sodium hypochlorite tissue digestion and particle filtration. With the exception of clays and phosphates, the relative abundances of most common particle types were similar in both approaches. Nonetheless, the digestion/filtration method was determined to alter the texture and relative abundances of some particle types. SEM/EDS analysis of digestion filters could be automated in contrast to the more time intensive in situ analyses.
Mohr, Martin; Forss, Anna-Maria; Lehmann, Urs
2006-04-01
Tail pipe particle emissions of passenger cars, with different engine and aftertreatment technologies, were determined with special focus on diesel engines equipped with a particle filter. The particle number measurements were performed, during transient tests, using a condensation particle counter. The measurement procedure complied with the draft Swiss ordinance, which is based on the findings of the UN/ECE particulate measurement program. In addition, particle mass emissions were measured by the legislated and a modified filter method. The results demonstrate the high efficiency of diesel particle filters (DPFs) in curtailing nonvolatile particle emissions over the entire size range. Higher emissions were observed during short periods of DPF regeneration and immediately afterward, when a soot cake has not yet formed on the filter surface. The gasoline vehicles exhibited higher emissions than the DPF-equipped diesel vehicles but with a large variation depending on the technology and driving conditions. Although particle measurements were carried out during DPF regeneration, it was impossible to quantify their contribution to the overall emissions, due to the wide variation in intensity and frequency of regeneration. The number counting method demonstrated clearly superior sensitivity compared with the mass measurement. The results strongly support the application of particle number counting to quantify future low tailpipe emissions.
Dynamic Monitoring of Cleanroom Fallout Using an Air Particle Counter
NASA Technical Reports Server (NTRS)
Perry, Radford
2011-01-01
The particle fallout limitations and periodic allocations for the James Webb Space Telescope are very stringent. Standard prediction methods are complicated by non-linearity and monitoring methods that are insufficiently responsive. A method for dynamically predicting the particle fallout in a cleanroom using air particle counter data was determined by numerical correlation. This method provides a simple linear correlation to both time and air quality, which can be monitored in real time. The summation of effects provides the program better understanding of the cleanliness and assists in the planning of future activities. Definition of fallout rates within a cleanroom during assembly and integration of contamination-sensitive hardware, such as the James Webb Space Telescope, is essential for budgeting purposes. Balancing the activity levels for assembly and test with the particle accumulation rate is paramount. The current approach to predicting particle fallout in a cleanroom assumes a constant air quality based on the rated class of a cleanroom, with adjustments for projected work or exposure times. Actual cleanroom class can also depend on the number of personnel present and the type of activities. A linear correlation of air quality and normalized particle fallout was determined numerically. An air particle counter (standard cleanroom equipment) can be used to monitor the air quality on a real-time basis and determine the "class" of the cleanroom (per FED-STD-209 or ISO-14644). The correlation function provides an area coverage coefficient per class-hour of exposure. The prediction of particle accumulations provides scheduling inputs for activity levels and cleanroom class requirements.
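The correlation reduces particle-fallout prediction to a sum of linear class-hour contributions read off the air particle counter. A hypothetical sketch (the coefficient `k` is illustrative, not the numerically determined JWST value):

```python
def accumulated_coverage(intervals, k=1e-6):
    """Total percent-area particle coverage accumulated over a series of
    (measured_air_class, hours) intervals from an air particle counter,
    summing the linear per-interval contributions. The area-coverage
    coefficient k per class-hour is illustrative, not the calibrated value."""
    return sum(k * air_class * hours for air_class, hours in intervals)
```

Because each interval uses the air class actually measured at the time, the running sum tracks real activity levels rather than assuming the cleanroom's rated class throughout.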
NASA Astrophysics Data System (ADS)
Jernström, J.; Eriksson, M.; Simon, R.; Tamborini, G.; Bildstein, O.; Marquez, R. Carlos; Kehl, S. R.; Hamilton, T. F.; Ranebo, Y.; Betti, M.
2006-08-01
Six plutonium-containing particles stemming from Runit Island soil (Marshall Islands) were characterized by non-destructive analytical and microanalytical methods. Composition and elemental distribution in the particles were studied with synchrotron radiation based micro X-ray fluorescence spectrometry. A scanning electron microscope equipped with an energy dispersive X-ray detector and a wavelength dispersive system, as well as a secondary ion mass spectrometer, were used to examine the particle surfaces. Based on the elemental composition, the particles were divided into two groups: particles with a pure Pu matrix, and particles in which the plutonium is included in a Si/O-rich matrix and more heterogeneously distributed. All of the particles were identified as nuclear fuel fragments of exploded weapon components. Because the particles contained plutonium with a low ²⁴⁰Pu/²³⁹Pu atomic ratio (less than 0.065), corresponding to weapons-grade plutonium or a detonation with a low fission yield, they were identified as originating from the safety test and low-yield tests conducted in the history of Runit Island. The Si/O-rich particles contained traces of ¹³⁷Cs (²³⁹⁺²⁴⁰Pu/¹³⁷Cs activity ratio higher than 2500), which indicated that a minor fission process occurred during the explosion. The average ²⁴¹Am/²³⁹Pu atomic ratio in the six particles was 3.7 × 10⁻³ ± 0.2 × 10⁻³ (February 2006), which indicated that the plutonium in the different particles had a similar age.
A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene Marie
1992-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. The Lagrangian correction technique does not require any a priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and constant-stagnation-pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing, with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid-resolution requirements than the standard Eulerian scheme for a given solution accuracy.
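The marker-advection half of such a scheme can be sketched as follows. The uniform mesh, bilinear interpolation, and midpoint time stepping are illustrative simplifications, not the paper's exact discretization:

```python
import numpy as np

# Minimal sketch of the Lagrangian marker step layered on an Eulerian
# field: markers are advected through the grid velocity with a midpoint
# (RK2) rule. Unit grid spacing and bilinear interpolation are
# illustrative assumptions.
def interp(u, x, y, h=1.0):
    """Bilinear interpolation of grid field u at point (x, y)."""
    i, j = int(x // h), int(y // h)
    fx, fy = x / h - i, y / h - j
    return ((1 - fx) * (1 - fy) * u[i, j] + fx * (1 - fy) * u[i + 1, j]
            + (1 - fx) * fy * u[i, j + 1] + fx * fy * u[i + 1, j + 1])

def advect_marker(u, v, x, y, dt):
    """One RK2 (midpoint) step of a particle marker through (u, v)."""
    xm = x + 0.5 * dt * interp(u, x, y)
    ym = y + 0.5 * dt * interp(v, x, y)
    return x + dt * interp(u, xm, ym), y + dt * interp(v, xm, ym)
```

In the combined scheme, the velocity arrays would come from the Eulerian solver at each step, and the markers' carried state would in turn correct the Eulerian vorticity and entropy fields.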
Egido, J M; Viñuelas, J
1997-01-01
We report a rapid method for the flow cytometric quantitation of phagocytosis in heparinized complete peripheral blood (HCPB), using commercially available phycoerythrin-conjugated latex particles of 1 micron diameter. The method is faster and shows greater reproducibility than Bjerknes' (1984) standard technique using propidium iodide-stained Candida albicans, conventionally applied to the leukocytic layer of peripheral blood but here modified for HCPB. We also report a modification of Bjerknes' Intracellular Killing Test to allow its application to HCPB.
NASA Technical Reports Server (NTRS)
Johnson, Paul E.; Smith, Milton O.; Adams, John B.
1992-01-01
Algorithms were developed, based on Hapke's (1981) equations, for remote determination of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although more sophisticated models exist, the present algorithms are particularly suited to remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
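The abundance-retrieval step can be illustrated with a least-squares unmixing sketch, assuming as a simplification that the spectra have already been converted to single-scattering albedo (the Hapke step, omitted here), which mixes approximately linearly in abundance. The end-member matrix below is synthetic:

```python
import numpy as np

# Sketch of the linear-mixing step only: single-scattering albedo is
# modeled as a linear combination of end-member albedo spectra, and
# abundances are recovered by least squares. The spectra are synthetic.
def unmix(albedo_mix, endmembers):
    """Least-squares end-member abundances, normalized to sum to 1.

    albedo_mix : (n_bands,) mixed single-scattering albedo spectrum
    endmembers : (n_bands, n_members) end-member albedo spectra
    """
    a, *_ = np.linalg.lstsq(endmembers, albedo_mix, rcond=None)
    a = np.clip(a, 0.0, None)          # crude nonnegativity constraint
    return a / a.sum()

E = np.array([[0.9, 0.2], [0.8, 0.3], [0.7, 0.1]])   # two end-members
mix = 0.6 * E[:, 0] + 0.4 * E[:, 1]                  # known 60/40 blend
abund = unmix(mix, E)
```

A full implementation would also fold in the illumination/viewing geometry and invert the albedo-to-reflectance relation, which this sketch deliberately leaves out.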
Niskanen, Ilpo; Räty, Jukka; Peiponen, Kai-Erik
2017-07-01
This is a feasibility study of a modified immersion liquid technique for determining the refractive index of micro-sized particles. The practical challenge of the traditional liquid immersion method is finding or producing a suitable host liquid whose refractive index equals that of the solid particle. Usually, the immersion liquid method uses a set of immersion liquids with different refractive indices or continuously mixes two liquids with different refractive indices, e.g., using a pumping system. Here, the phenomenon of liquid evaporation has been utilized to define the time-dependent refractive index variation of the host liquid. From the spectral transmittance data measured during the evaporation process, the refractive index of a solid particle in the host liquid can be determined as a function of wavelength. The method was tested using calcium fluoride (CaF2) particles with an immersion liquid mixed from diethyl ether and diffusion pump fluid. The dispersion data obtained were consistent with literature values, indicating the proper functioning of the proposed procedure.
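The index-matching readout can be sketched as follows, assuming the host-liquid index versus time has already been calibrated for the evaporating mixture; scattering (and hence extinction) vanishes at the index match, so transmittance peaks there. All data values below are synthetic:

```python
# Sketch of the matching step: as the more volatile component
# evaporates, the host-liquid index drifts; the particle index is read
# off at the transmittance maximum. The arrays are synthetic
# illustrations, not measured data from the study.
def particle_index(host_n, transmittance):
    """Return the host refractive index at the transmittance maximum."""
    i_best = max(range(len(transmittance)), key=lambda i: transmittance[i])
    return host_n[i_best]

host_n = [1.38, 1.40, 1.42, 1.44, 1.46]      # host index vs. time (calibrated)
trans = [0.62, 0.81, 0.97, 0.84, 0.66]       # measured transmittance
n_p = particle_index(host_n, trans)
```

Repeating this readout per wavelength channel of the spectral transmittance data would yield the dispersion curve of the particle material.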
NASA Astrophysics Data System (ADS)
Hamran, Noramirah; Rashid, Azura A.
2017-07-01
Commercial fillers such as silica and carbon black generally impart reinforcing effects in dry rubber compounds but have an adverse effect on natural rubber (NR) latex compounds. The addition of commercial fillers to NR latex reduces the mechanical properties of NR latex films owing to a destabilization effect in the latex compounds, which is governed by the dispersion quality, the particle size, and the pH of the dispersion itself. Ball milling is the conventional method of preparing dispersions, and ultrasonication has been used successfully to prepare nanofillers such as carbon nanotubes (CNTs). In this study, the conventional ball-milling method and the ultrasonic method were combined to prepare the silica and carbon black dispersions. Different durations of ball milling (24, 48, and 72 hours) were compared with the ultrasonic method (30, 60, 90, and 120 minutes), and the combination of ball milling and ultrasonication at the optimum individual settings was used to investigate the reduction of filler particle size. Particle size analysis, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) were carried out to determine the obtained particle sizes, and tensile and tear tests were carried out to investigate the mechanical properties of the NR latex films. The reduction of filler particle size is expected to improve the properties of the NR latex films.
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, Dynamic Varying Search Area (DVSA), limits the range of the particles' activity to a reduced area. The second uses Lévy flights to generate stochastic disturbances in the movement of particles, helping them escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments were conducted to compare the proposed algorithm with other PSO variants. The experimental results show that the proposed method outperforms the other variants on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
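The Lévy-flight disturbance can be sketched with Mantegna's algorithm for drawing Lévy-stable step lengths; the exponent beta, the step scale, and the way the kick is applied to a particle are illustrative choices, not the paper's exact settings:

```python
import math
import random

# Sketch of a Levy-flight disturbance for stagnating particles, using
# Mantegna's algorithm for Levy-stable steps. beta and the step scale
# are illustrative choices, not the paper's tuned parameters.
def levy_step(beta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def disturb(position, scale=0.01):
    """Perturb a particle's position with independent Levy steps."""
    return [x + scale * levy_step() for x in position]
```

Because the heavy-tailed distribution occasionally produces very long jumps, a swarm perturbed this way mixes many small local moves with rare large escapes from a basin of attraction.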
Three-dimensional particle tracking velocimetry algorithm based on tetrahedron vote
NASA Astrophysics Data System (ADS)
Cui, Yutong; Zhang, Yang; Jia, Pan; Wang, Yuan; Huang, Jingcong; Cui, Junlei; Lai, Wing T.
2018-02-01
A particle tracking velocimetry algorithm based on tetrahedron vote, named TV-PTV, is proposed to address the limited selection of effective algorithms for 3D flow visualisation. In this cluster-matching algorithm, tetrahedrons produced by Delaunay tessellation are used as the basic units for inter-frame matching, resulting in a simple algorithmic structure with only two independent preset parameters. Test results obtained using synthetic image data from the Visualisation Society of Japan show that TV-PTV achieves accuracy comparable to that of the classical algorithm based on the new relaxation method (NRX). Compared with NRX, TV-PTV requires fewer programming loops and thus a shorter computing time, especially for large particle displacements and high particle concentrations. TV-PTV was confirmed to be practically effective on an actual 3D wake flow.
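One plausible per-tetrahedron matching cost can be sketched as follows; the sorted edge-length discrepancy used here is an illustrative stand-in for the paper's actual voting criterion, chosen because tetrahedra that deform little between frames should score low:

```python
import math
from itertools import combinations

# Sketch of a matching cost between tetrahedron units from two frames.
# The edge-length discrepancy measure is an illustrative assumption,
# not the published TV-PTV voting rule.
def edge_lengths(tet):
    """Sorted lengths of the six edges of a 4-point tetrahedron."""
    return sorted(math.dist(p, q) for p, q in combinations(tet, 2))

def match_cost(tet_a, tet_b):
    """Sum of absolute differences between sorted edge lengths."""
    return sum(abs(a - b) for a, b in zip(edge_lengths(tet_a),
                                          edge_lengths(tet_b)))

frame1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
frame2 = [(0.1, 0, 0), (1.1, 0, 0), (0.1, 1, 0), (0.1, 0, 1)]  # translated
cost = match_cost(frame1, frame2)   # pure translation -> cost ~ 0
```

In a full tracker, each low-cost tetrahedron pair would cast votes for its four constituent particle correspondences, and particles would be matched by accumulated votes.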
NASA Astrophysics Data System (ADS)
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information such as the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated by complex communication patterns and the frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish an efficient implementation by designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid with respect to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility in the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle transfer between parallel subdomains utilizing existing communication patterns from the finite-element mesh, and the use of established parallel output algorithms like the HDF5 library.
Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions which method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular, and segregated from the PDE solver, and can thus be easily transferred to other programs, or adapted for various application cases.
Preparation for Scaling Studies of Ice-Crystal Icing at the NRC Research Altitude Test Facility
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Bencic, Timothy J.; Tsao, Jen-Ching; Fuleki, Dan; Knezevici, Daniel C.
2013-01-01
This paper describes experiments conducted at the National Research Council (NRC) of Canada's Research Altitude Test Facility between March 26 and April 11, 2012. The tests, conducted collaboratively between NASA and NRC, focus on three key aspects in preparation for later scaling work to be conducted with a NACA 0012 airfoil model in the NRC Cascade rig: (1) cloud characterization, (2) scaling model development, and (3) ice-shape profile measurements. Regarding cloud characterization, the experiments focus on particle spectra measurements using two shadowgraphy methods, cloud uniformity via particle scattering from a laser sheet, and characterization of the SEA Multi-Element probe. Overviews of each aspect as well as detailed information on the diagnostic methods are presented. Select results from the measurements and their interpretation are presented, which will help guide future work.
NASA Astrophysics Data System (ADS)
Lu, Zheng; Chen, Xiaoyi; Zhou, Ying
2018-04-01
A particle tuned mass damper (PTMD) is a creative combination of the widely used tuned mass damper (TMD) and the efficient particle damper (PD) from the vibration control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
Modeling compressible multiphase flows with dispersed particles in both dense and dilute regimes
NASA Astrophysics Data System (ADS)
McGrath, T.; St. Clair, J.; Balachandar, S.
2018-05-01
Many important explosives and energetics applications involve multiphase formulations employing dispersed particles. While considerable progress has been made toward developing mathematical models and computational methodologies for these flows, significant challenges remain. In this work, we apply a mathematical model for compressible multiphase flows with dispersed particles to existing shock and explosive dispersal problems from the literature. The model is cast in an Eulerian framework, treats all phases as compressible, is hyperbolic, and satisfies the second law of thermodynamics. It directly applies the continuous-phase pressure gradient as a forcing function for particle acceleration and thereby retains relaxed characteristics for the dispersed particle phase that remove the constituent material sound velocity from the eigenvalues. This is consistent with the expected characteristics of dispersed particle phases and can significantly improve the stable time-step size for explicit methods. The model is applied to test cases involving the shock and explosive dispersal of solid particles and compared to data from the literature. Computed results compare well with experimental measurements, providing confidence in the model and computational methods applied.
Development of RWHet to Simulate Contaminant Transport in Fractured Porous Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yong; LaBolle, Eric; Reeves, Donald M
2012-07-01
Accurate simulation of matrix diffusion in regional-scale dual-porosity and dual-permeability media is a critical issue for the DOE Underground Test Area (UGTA) program, given the prevalence of fractured geologic media on the Nevada National Security Site (NNSS). Contaminant transport through regional-scale fractured media is typically quantified by particle-tracking based Lagrangian solvers through the inclusion of dual-domain mass transfer algorithms that probabilistically determine particle transfer between fractures and unfractured matrix blocks. UGTA applications include a wide variety of fracture apertures and spacings, effective diffusion coefficients ranging over four orders of magnitude, and extreme end-member retardation values. This report incorporates the current dual-domain mass transfer algorithms into the well-known particle tracking code RWHet [LaBolle, 2006], and then tests and evaluates the updated code. We also develop and test a direct numerical simulation (DNS) approach to replace the classical transfer probability method in characterizing particle dynamics across the fracture/matrix interface. The final goal of this work is to implement the algorithm identified as most efficient and effective into RWHet, so that an accurate and computationally efficient software suite can be built for dual-porosity/dual-permeability applications. RWHet is a mature Lagrangian transport simulator with a substantial user base that has undergone significant development and model validation. In this report, we also substantially tested the capability of RWHet in simulating passive and reactive tracer transport through regional-scale, heterogeneous media. Four dual-domain mass transfer methodologies were considered in this work. We first developed the empirical transfer probability approach proposed by Liu et al. [2000], and coded it into RWHet.
The particle transfer probability from one continuum to the other is proportional to the ratio of the mass entering the other continuum to the mass in the current continuum. Numerical examples show that this method is limited to certain ranges of parameters, due to an intrinsic assumption of an equilibrium concentration profile in the matrix blocks in building the transfer probability. Consequently, this method fails to describe mass transfer for parameter combinations that violate this assumption, including small diffusion coefficients (i.e., the free-water molecular diffusion coefficient 1×10⁻¹¹ m²/s), relatively large fracture spacings (such as meter), and/or relatively large matrix retardation coefficients (i.e., ). These “outliers” in parameter range are common in UGTA applications. To address the above limitations, we then developed a Direct Numerical Simulation (DNS)-Reflective method. The novel DNS-Reflective method can directly track particle dynamics across the fracture/matrix interface using a random walk, without any empirical assumptions. This advantage should make the DNS-Reflective method feasible for a wide range of parameters. Numerical tests of the DNS-Reflective method, however, show that it is computationally very demanding, since the time step must be very small to resolve particle transfer between fractures and matrix blocks. To improve the computational efficiency of the DNS approach, we then adopted Roubinet et al.'s method [2009], which uses first passage time distributions to simulate dual-domain mass transfer. The DNS-Roubinet method was found to be computationally more efficient than the DNS-Reflective method. It matches the analytical solution over the whole range of major parameters (including diffusion coefficient and fracture aperture values that are considered “outliers” for Liu et al.'s transfer probability method [2000]) for a single-fracture system.
The DNS-Roubinet method, however, has its own disadvantage: for a parallel fracture system, the truncation of the first passage time distribution creates apparent errors when the fracture spacing is small, and it thus tends to erroneously predict breakthrough curves (BTCs) for the parallel fracture system. Finally, we adopted in RWHet the transient range approach proposed by Pan and Bodvarsson [2002]. In this method, particle transfer between fractures and matrix blocks can be resolved without using very small time steps, and no truncation of the first passage time distribution is needed, so it avoids the limitations identified above for the DNS-Reflective and DNS-Roubinet methods. Numerical results were checked against analytical solutions and also compared to DCPTV2.0 [Pan, 2002]. This version of RWHet (called RWHet-Pan&Bodvarsson in this report) can accurately capture contaminant transport in fractured porous media for a full range of parameters without any practical or theoretical limitations.
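The transfer-probability idea that the report builds on can be caricatured as a two-state random walk over domain labels; the transfer probabilities below are free parameters for illustration, not the mass-ratio expressions of Liu et al. [2000]:

```python
import random

# Highly simplified sketch of dual-domain mass transfer as a two-state
# Markov chain: a fracture particle jumps to the matrix with
# probability p_fm per step, and a matrix particle returns with
# probability p_mf. Real codes derive these from physical parameters;
# here they are illustrative free parameters.
def step_domain(domain, p_fm, p_mf):
    """Advance one particle's domain label for one time step."""
    r = random.random()
    if domain == "fracture":
        return "matrix" if r < p_fm else "fracture"
    return "fracture" if r < p_mf else "matrix"

def fraction_in_fracture(n=10000, steps=50, p_fm=0.05, p_mf=0.02):
    """Monte Carlo estimate of the mobile (fracture) fraction."""
    count = 0
    for _ in range(n):
        d = "fracture"
        for _ in range(steps):
            d = step_domain(d, p_fm, p_mf)
        count += d == "fracture"
    return count / n
```

The caricature makes the report's central limitation visible: everything hinges on how the per-step probabilities are derived, which is exactly where the Liu et al. approach breaks down outside its assumed parameter range.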
Lanthanide-labeled clay: A new method for tracing sediment transport in Karst
Mahler, B.J.; Bennett, P.C.; Zimmerman, M.
1998-01-01
Mobile sediment is a fundamental yet poorly characterized aspect of mass transport through karst aquifers. Here the development and field testing of an extremely sensitive particle tracer that may be used to characterize sediment transport in karst aquifers is described. The tracer consists of micron-size montmorillonite particles homoionized to the lanthanide form; after injection and retrieval from a ground water system, the lanthanide ions are chemically stripped from the clay and quantified by high performance liquid chromatography. The tracer meets the following desired criteria: low detection limit; a number of differentiable signatures; inexpensive production and quantification using standard methods; no environmental risks; and hydrodynamic properties similar to the in situ sediment it is designed to trace. The tracer was tested in laboratory batch experiments and field tested in both surface water and ground water systems. In surface water, arrival times of the tracer were similar to those of a conservative water tracer, although a significant amount of material was lost due to settling. Two tracer tests were undertaken in a karst aquifer under different flow conditions. Under normal flow conditions, the time of arrival and peak concentration of the tracer were similar to or preceded those of a conservative water tracer. Under low flow conditions, the particle tracer was not detected, suggesting that in low flow the sediment settles out of suspension and goes into storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favata, Marc
2011-01-15
Barack and Sago [Phys. Rev. Lett. 102, 191101 (2009)] have recently computed the shift of the innermost stable circular orbit (ISCO) of the Schwarzschild spacetime due to the conservative self-force that arises from the finite mass of an orbiting test particle. This calculation of the ISCO shift is one of the first concrete results of the self-force program, and provides an exact (fully relativistic) point of comparison with approximate post-Newtonian (PN) computations of the ISCO. Here this exact ISCO shift is compared with nearly all known PN-based methods. These include both 'nonresummed' and 'resummed' approaches (the latter reproduce the test-particle limit by construction). The best agreement with the exact (Barack-Sago) result is found when the pseudo-4PN coefficient of the effective-one-body (EOB) metric is fit to numerical relativity simulations. However, if one considers uncalibrated methods based only on the currently known 3PN-order conservative dynamics, the best agreement is found from the gauge-invariant ISCO condition of Blanchet and Iyer [Classical Quantum Gravity 20, 755 (2003)], which relies only on the (nonresummed) 3PN equations of motion. This method reproduces the exact test-particle limit without any resummation. A comparison of PN methods with the ISCO in the equal-mass case (computed via sequences of numerical relativity initial-data sets) is also performed. Here a (different) nonresummed method also performs very well (as was previously shown). These results suggest that the EOB approach - while exactly incorporating the conservative test-particle dynamics and having several other important advantages - does not (in the absence of calibration) incorporate conservative self-force effects more accurately than standard PN methods.
I also consider how the conservative self-force ISCO shift, combined in some cases with numerical relativity computations of the ISCO, can be used to constrain our knowledge of (1) the EOB effective metric, (2) phenomenological inspiral-merger-ringdown templates, and (3) 4PN- and 5PN-order terms in the PN orbital energy. These constraints could help in constructing better gravitational-wave templates. Lastly, I suggest a new method to calibrate unknown PN terms in inspiral templates using numerical-relativity calculations.
Neutron Radiography of Fluid Flow for Geothermal Energy Research
NASA Astrophysics Data System (ADS)
Bingham, P.; Polsky, Y.; Anovitz, L.; Carmichael, J.; Bilheux, H.; Jacobsen, D.; Hussey, D.
Enhanced geothermal systems seek to expand the potential for geothermal energy by engineering heat exchange systems within the earth. A neutron radiography imaging method has been developed for the study of fluid flow through rock under environmental conditions found in enhanced geothermal energy systems. For this method, a pressure vessel suitable for neutron radiography was designed and fabricated, modifications to imaging instrument setups were tested, multiple contrast agents were tested, and algorithms were developed for tracking of flow. The method has shown success in tracking single-phase flow through a manufactured crack in a 3.81 cm (1.5 inch) diameter core within a pressure vessel capable of confinement up to 69 MPa (10,000 psi), using a particle tracking approach with bubbles of fluorocarbon-based fluid as the "particles" and imaging with 10 ms exposures.
An estimation method for measurement of ultraviolet radiation during nondestructive testing
NASA Astrophysics Data System (ADS)
Hosseinipanah, M.; Movafeghi, A.; Farvadin, D.
2018-04-01
Dye penetrant testing and magnetic particle testing are among the conventional NDT methods. For increased sensitivity, fluorescent dyes and particles can be used with ultraviolet (black) lights. UV flaw-detection lights have different spectra; with the help of photo-filters, the output light is confined to the UV-A and visible zones. UV-A light can be harmful to human eyes under some conditions. In this research, the UV intensity and spectrum of two different UV flaw-detector lighting systems were obtained with a radio-spectrometer. According to standards such as ASTM E709, the UV intensity must be at least 10 W/m2 at a distance of 30 cm; based on our measurements, some lamps did not achieve this. On the other hand, the intensity and effective intensity of UV lights must remain below certain limits to prevent damage to unprotected eyes. NDT centers usually use some type of UV measuring device. A method for estimating the effective intensity of UV light is proposed in this research.
Isolation of genomic DNA using magnetic cobalt ferrite and silica particles.
Prodelalová, Jana; Rittich, Bohuslav; Spanová, Alena; Petrová, Katerina; Benes, Milan J
2004-11-12
Adsorption separation techniques as an alternative to laborious traditional methods (e.g., based on phenol extraction procedure) have been applied for DNA purification. In this work we used two types of particles: silica and cobalt ferrite (unmodified or modified with a reagent containing weakly basic aminoethyl groups, aminophenyl groups, or alginic acid). DNA from chicken erythrocytes and DNA isolated from bacteria Lactococcus lactis were used for testing of adsorption/desorption properties of particles. The cobalt ferrite particles modified with different reagents were used for isolation of PCR-ready bacterial DNA from different dairy products.
Study on EM-parameters and EM-wave absorption properties of materials with bio-flaky particles added
NASA Astrophysics Data System (ADS)
Zhang, Wenqiang; Zhang, Deyuan; Xu, Yonggang; McNaughton, Ryan
2016-01-01
Bio-flaky particles, fabricated by depositing carbonyl iron on the surface of disk-shaped diatomite, demonstrated beneficial effects on electromagnetic parameters. This paper details the improvements to the electromagnetic parameters and absorbing properties of a traditional absorbing material generated by the addition of bio-flaky particles. The composites' electromagnetic parameters were measured using the transmission method. Test results confirmed that when bio-flaky particles were added, the composites' permittivity increased owing to the high permeability of the bio-flaky particles, and the permeability of the composites increased as a result of the increased volume content of iron particles. Composites with added bio-flaky particles exhibited the best absorption properties at 0.5 mm thickness, with a maximum reflection loss of approximately -5.1 dB at 14.4 GHz.
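Reflection loss of a single metal-backed layer is conventionally computed from the measured permittivity and permeability via the standard transmission-line model; the material values below are placeholders, not the paper's measured data:

```python
import cmath
import math

C = 2.998e8  # speed of light, m/s

# Standard single-layer, metal-backed transmission-line model for
# reflection loss. The eps_r/mu_r values used in the example call are
# illustrative placeholders, not the composites measured in the paper.
def reflection_loss_db(eps_r, mu_r, thickness_m, freq_hz):
    """Reflection loss in dB (more negative = more absorbing)."""
    n = cmath.sqrt(mu_r * eps_r)
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * freq_hz * thickness_m * n / C)
    gamma = (z_in - 1) / (z_in + 1)   # impedance normalized to free space
    return 20 * math.log10(abs(gamma))

rl = reflection_loss_db(8 - 2j, 1.5 - 0.6j, 0.5e-3, 14.4e9)
```

Sweeping `freq_hz` with measured complex parameters is how a curve such as the reported -5.1 dB minimum at 14.4 GHz for a 0.5 mm layer would be produced.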
2016-04-01
characterized by different methods such as Scanning Electron Microscopy (SEM) or Transmission Electron Microscopy (TEM) and other methods. ERDC SR-16...the surface coating and substrate material used. Adaptations to this test method can be used with a range of nanomaterial/polymer products in which...material rather than the presence of nanomaterial (Golanski et al. 2011). After particles are released, proper characterization is essential to
40 CFR 798.4350 - Inhalation developmental toxicity study.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of the test substance. It is used to compare particles of different sizes, shapes, and densities and... substance given daily per unit volume of air. (c) Principle of the test method. The test substance is...) The temperature at which the test is performed should be maintained at 22 °C (±2°) for rodents or 20...
2014-01-24
8, Automatic Particle Counter, cleanliness, free water, Diesel...aircraft, or up to 10 mg/L for product used as a diesel product for ground use (1). Free water contamination (droplets) may appear as fine droplets or...published several methods and test procedures for the calibration and use of automatic particle counters. The transition of this technology to the fuel
A regularized vortex-particle mesh method for large eddy simulation
NASA Astrophysics Data System (ADS)
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and we have recently derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000, and the obtained results are compared to results from the literature.
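The FFT Poisson solve at the core of such a method can be sketched for a fully periodic 2-D domain; the paper's solver additionally handles mixed open/periodic domains and regularized Green's functions, which this sketch omits:

```python
import numpy as np

# Minimal periodic-domain sketch of the FFT-based Poisson solve used in
# vortex particle-mesh methods: solve laplacian(psi) = -omega for the
# streamfunction psi on a uniform [0, L)^2 grid.
def solve_poisson_periodic(omega, L=2 * np.pi):
    """Spectral solution of laplacian(psi) = -omega, periodic in x and y."""
    n = omega.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                    # avoid division by zero (mean mode)
    psi_hat = np.fft.fft2(omega) / k2
    psi_hat[0, 0] = 0.0               # pick the zero-mean solution
    return np.real(np.fft.ifft2(psi_hat))
```

Differentiating the resulting streamfunction spectrally then recovers the velocity field that advects the vortex particles before they are remeshed.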
NASA Technical Reports Server (NTRS)
Hughes, David; Dazzo, Tony
2007-01-01
This viewgraph presentation reviews the use of particle analysis in preparing for the fourth Hubble Space Telescope (HST) servicing mission, during which the Space Telescope Imaging Spectrograph (STIS) will be repaired. The particle analysis consisted of finite element mesh creation; black-body view factors generated using I-DEAS TMG thermal analysis; grey-body view factors calculated using the Markov method; particle distributions modeled using an iterative (and time-consuming) Monte Carlo process in the in-house software MASTRAM; differential analysis performed in Excel; and visualization provided by Tecplot and I-DEAS. Several tests were performed and are reviewed: a conformal coat particle study, a card extraction study, a cover fastener removal particle generation study, and an E-Graf vibration particulate study. The lessons learned during this analysis are also reviewed.
NASA Astrophysics Data System (ADS)
Ježek, I.; Drinovec, L.; Ferrero, L.; Carriero, M.; Močnik, G.
2015-01-01
We have used two methods for measuring emission factors (EFs) in real driving conditions on five cars in a controlled environment: the stationary method, where the investigated vehicle drives by the stationary measurement platform and the composition of the plume is measured, and the chasing method, where a mobile measurement platform drives behind the investigated vehicle. We measured EFs of black carbon and particle number concentration. The stationary method was tested for repeatability at different speeds and on a slope. The chasing method was tested on a test track and compared to the portable emission measurement system. We further developed the data processing algorithm for both methods, trying to improve consistency, determine the plume duration, limit the background influence and facilitate automatic processing of measurements. The comparison of emission factors determined by the two methods showed good agreement. EFs of a single car measured with either method have a specific distribution with a characteristic value and a long tail of super emissions. Measuring EFs at different speeds or slopes did not significantly influence the EFs of different cars; hence, we propose a new description of vehicle emissions that is not related to kinematic or engine parameters, and we rather describe the vehicle EF with a characteristic value and a super emission tail.
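The fuel-based emission factor calculation behind both plume methods can be sketched as follows. This is a minimal illustration, not the authors' processing algorithm: the function name, the simple summation over plume samples, and the fuel carbon mass fraction are all assumptions.

```python
# Hypothetical sketch: fuel-based BC emission factor from plume-integrated
# excess concentrations. All names and the fuel carbon fraction (0.86 g C
# per g fuel, typical for diesel) are illustrative assumptions.

def emission_factor_bc(bc_ug_m3, co2_mg_m3, bc_bg=0.0, co2_bg=0.0,
                       fuel_c_fraction=0.86):
    """Return g BC per kg fuel from background-subtracted plume samples."""
    d_bc = sum(max(c - bc_bg, 0.0) for c in bc_ug_m3)     # ug/m3, integrated
    d_co2 = sum(max(c - co2_bg, 0.0) for c in co2_mg_m3)  # mg/m3, integrated
    if d_co2 <= 0.0:
        raise ValueError("no detectable plume above background")
    d_c = d_co2 * 12.0 / 44.0          # mg of carbon emitted as CO2
    # (mg BC / mg C) * (g C / g fuel) * 1000 -> g BC per kg fuel
    return (d_bc / 1000.0) / d_c * fuel_c_fraction * 1000.0
```

The ratio form makes the result independent of the (unknown) dilution, which is what lets both the stationary and the chasing method recover comparable EFs.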
Novel method for on-road emission factor measurements using a plume capture trailer.
Morawska, L; Ristovski, Z D; Johnson, G R; Jayaratne, E R; Mengersen, K
2007-01-15
The method outlined provides for emission factor measurements to be made for unmodified vehicles driving under real world conditions at minimal cost. The method consists of a plume capture trailer towed behind a test vehicle. The trailer collects a sample of the naturally diluted plume in a 200 L conductive bag and this is delivered immediately to a mobile laboratory for subsequent analysis of particulate and gaseous emissions. The method offers low test turnaround times with the potential to complete much larger numbers of emission factor measurements than have been possible using dynamometer testing. Samples can be collected at distances up to 3 m from the exhaust pipe allowing investigation of early dilution processes. Particle size distribution measurements, as well as particle number and mass emission factor measurements, based on naturally diluted plumes are presented. A dilution profile relating the plume dilution ratio to distance from the vehicle tail pipe for a diesel passenger vehicle is also presented. Such profiles are an essential input for new mechanistic roadway air quality models.
Generation of segmental chips in metal cutting modeled with the PFEM
NASA Astrophysics Data System (ADS)
Rodriguez Prieto, J. M.; Carbonell, J. M.; Cante, J. C.; Oliver, J.; Jonsén, P.
2018-06-01
The Particle Finite Element Method (PFEM), a Lagrangian finite element method based on continuous Delaunay re-triangulation of the domain, is used to study machining of Ti6Al4V. In this work the method is revised and applied to study the influence of the cutting speed on the cutting force and the chip formation process. A parametric methodology for the detection and treatment of the rigid tool contact is presented. Adaptive insertion and removal of particles are developed and employed in order to sidestep the difficulties associated with mesh distortion and shear localization, as well as to resolve the fine-scale features of the solution. The performance of the PFEM is studied with a set of two-dimensional orthogonal cutting tests. It is shown that, despite its Lagrangian nature, the proposed combined finite element-particle method is well suited for large-deformation metal cutting problems with continuous and serrated chip formation.
Isik, Nimet
2016-04-01
Multi-element electrostatic aperture lens systems are widely used to control electron or charged-particle beams in many scientific instruments. By varying the applied voltages, these lens systems can be operated for different purposes, and numerous methods have been developed to calculate their focal properties. In this study, an artificial neural network (ANN) classification method is utilized to determine whether the charged-particle beam is focused or unfocused at the image point as a function of lens voltages for multi-element electrostatic aperture lenses. The data set for training and testing the ANN is taken from the SIMION 8.1 simulation program, a well-known program of proven accuracy in charged-particle optics. The mean squared error results of this study indicate that the ANN classification method provides notable performance characteristics for electrostatic aperture zoom lenses.
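As a rough illustration of this kind of voltage-to-focus classification, the sketch below trains a single sigmoid neuron, a drastic simplification of the paper's ANN, on synthetic lens-voltage data. The voltage ranges and the "focused" criterion are invented stand-ins for the SIMION-generated data set.

```python
import math
import random

# Toy stand-in for the ANN focus classifier: one sigmoid neuron trained by
# stochastic gradient descent on synthetic data. The voltage ranges and the
# "focused" rule below are invented assumptions, not SIMION physics.
random.seed(1)

def make_sample():
    v1 = random.uniform(0.1, 1.0)   # hypothetical focus-electrode voltage
    v2 = random.uniform(0.0, 1.0)   # hypothetical image-side voltage
    return v1, v2, (1.0 if v2 / v1 > 0.5 else 0.0)  # invented focus criterion

data = [make_sample() for _ in range(400)]

w1 = w2 = b = 0.0
lr = 0.5
for _ in range(300):                # epochs of plain SGD on the log-loss
    for v1, v2, y in data:
        p = 1.0 / (1.0 + math.exp(-(w1 * v1 + w2 * v2 + b)))
        g = p - y                   # d(log-loss)/d(pre-activation)
        w1, w2, b = w1 - lr * g * v1, w2 - lr * g * v2, b - lr * g

accuracy = sum(
    ((w1 * v1 + w2 * v2 + b > 0.0) == (y > 0.5)) for v1, v2, y in data
) / len(data)
```

A real replication would use a multi-layer network and SIMION ray-tracing output, but the training-loss mechanics are the same.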
Laser-plasma interactions with a Fourier-Bessel particle-in-cell method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andriyash, Igor A., E-mail: igor.andriyash@gmail.com; LOA, ENSTA ParisTech, CNRS, Ecole polytechnique, Université Paris-Saclay, 828 bd des Maréchaux, 91762 Palaiseau cedex; Lehe, Remi
A new spectral particle-in-cell (PIC) method for plasma modeling is presented and discussed. In the proposed scheme, the Fourier-Bessel transform is used to translate the Maxwell equations to the quasi-cylindrical spectral domain. In this domain, the equations are solved analytically in time, and the spatial derivatives are approximated with high accuracy. In contrast to the finite-difference time-domain (FDTD) methods commonly used in PIC, the developed method produces no numerical dispersion and does not involve grid staggering for the electric and magnetic fields. These features are especially valuable in modeling the wakefield acceleration of particles in plasmas. The proposed algorithm is implemented in the code PLARES-PIC, and test simulations of laser-plasma interactions are compared to ones performed with the quasi-cylindrical FDTD PIC code CALDER-CIRC.
Czajkowski, Robert; Ozymko, Zofia; Lojkowska, Ewa
2016-01-01
This is the first report describing precipitation of bacteriophage particles with zinc chloride as a method of choice for isolating infectious lytic bacteriophages against Pectobacterium spp. and Dickeya spp. from environmental samples. The isolated bacteriophages are ready for use in studying various (ecological) aspects of bacteria-bacteriophage interactions. The method comprises the well-known precipitation of phages from aqueous extracts of the test material by addition of ZnCl2, resuscitation of bacteriophage particles in Ringer's buffer to remove the excess ZnCl2, and a soft agar overlay assay with the host bacterium to isolate individual infectious phage plaques. The method requires neither an enrichment step nor other steps (e.g., PEG precipitation, ultrafiltration, or ultracentrifugation) commonly used in other procedures, and it results in isolation of active, viable bacteriophage particles.
Passive particle dosimetry. [silver halide crystal growth
NASA Technical Reports Server (NTRS)
Childs, C. B.
1977-01-01
Present methods of dosimetry are reviewed, with emphasis on processes using silver chloride crystals for ionizing-particle dosimetry. Differences in the ability of various crystals to record ionizing particle paths are directly related to impurities in the range of a few ppm (parts per million). To understand the roles of these impurities in the process, a method for consistent production of high-purity silver chloride and silver bromide was developed, which yields silver halides with a detectable impurity content of less than 1 ppm. This high-purity silver chloride was used in growing crystals with controlled doping. Crystals were grown by both the Czochralski method and the Bridgman method, and the Bridgman-grown crystals were used for the experiments discussed. The distribution coefficients of ten divalent cations were determined for the Bridgman crystals. The best dosimeters were made with silver chloride crystals containing 5 to 10 ppm of lead; the other impurities tested did not produce proper dosimeters.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as smoothed particle hydrodynamics (SPH) and the discrete element method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, which is useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods enable effective numerical applications to geophysical flow and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. Realistic simulations of such geodynamical problems with particle-based methods require millions to billions of particles, so parallel computing is essential for handling the computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures: because particles move around and workloads change during the simulation, Lagrangian methods inherently suffer from workload imbalance when parallelized over domains fixed in space. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods that utilizes dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in the execution time of each MPI process as the nonlinear residual of the parallel domain decomposition and minimizes it with a Newton-like iteration method. To allow flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (the K computer, Earth Simulator 3, etc.).
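The slice-grid rebalancing idea can be caricatured in one dimension as follows. This is an illustrative sketch only: the relaxation factor and the local cost-density estimate used as a derivative are assumptions, whereas the paper's Newton-like iteration operates on measured MPI execution times.

```python
# Illustrative 1-D slice-grid rebalancing: shift each slice boundary so the
# measured per-slice cost approaches the mean. The update rule and the
# relaxation factor are assumptions for demonstration.

def rebalance(bounds, costs, relax=0.5):
    """bounds: sorted slice edges [x0..xN]; costs: per-slice measured time."""
    n = len(costs)
    target = sum(costs) / n
    new = list(bounds)
    for i in range(1, n):
        # cost density of the slice left of the boundary: a crude local
        # derivative of cost with respect to boundary position
        left_density = costs[i - 1] / (bounds[i] - bounds[i - 1])
        # Newton-like step: move the boundary to shed or absorb work
        new[i] = bounds[i] - relax * (costs[i - 1] - target) / left_density
    for i in range(1, n):               # keep the edges monotone
        new[i] = max(new[i], new[i - 1] + 1e-9)
    return new
```

Iterating this update as workloads evolve keeps per-process times close to the mean without a global repartition.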
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Kathrin, E-mail: k.mueller@fz-juelich.de; Fedosov, Dmitry A., E-mail: d.fedosov@fz-juelich.de; Gompper, Gerhard, E-mail: g.gompper@fz-juelich.de
Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered as an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9] lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier-Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor-Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to capture properly the physical characteristics of the systems due to violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.
A manual and an automatic TERS based virus discrimination
NASA Astrophysics Data System (ADS)
Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen
2015-02-01
Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was realised with a classification accuracy of 91%. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr07033j
Biomarker detection of global infectious diseases based on magnetic particles.
Carinelli, Soledad; Martí, Mercè; Alegret, Salvador; Pividori, María Isabel
2015-09-25
Infectious diseases affect the daily lives of millions of people all around the world and are responsible for hundreds of thousands of deaths, mostly in the developing world. Although most of these major infectious diseases are treatable, the early identification of individuals requiring treatment remains a major issue. The incidence of these diseases would be reduced if rapid diagnostic tests were widely available at the community and primary-care level in low-resource settings. Strong research efforts are thus being focused on replacing standard clinical diagnostic methods, such as invasive detection techniques (biopsy or endoscopy) or expensive diagnostic and monitoring methods, with affordable and sensitive tests based on novel biomarkers. The new methods needed include solid-phase separation techniques. In this context, the integration of magnetic particles within bioassays and biosensing devices is very promising, since they greatly improve the performance of a biological reaction. The diagnosis of clinical samples with magnetic particles can be achieved without the pre-enrichment, purification, or pretreatment steps often required by standard methods, simplifying the analytical procedures. The biomarkers can be specifically isolated and preconcentrated from complex biological matrices by magnetic actuation, increasing the specificity and sensitivity of the assay. This review addresses these promising features of magnetic particles for the detection of biomarkers in emerging technologies related to infectious diseases affecting global health, such as malaria, influenza, dengue, tuberculosis, and HIV.
Flight prototype regenerative particulate filter system development
NASA Technical Reports Server (NTRS)
Green, D. C.; Garber, P. J.
1974-01-01
The effort to design, fabricate, and test a flight prototype Filter Regeneration Unit used to regenerate (clean) fluid particulate filter elements is reported. The design of the filter regeneration unit and the results of tests performed in both one-gravity and zero-gravity are discussed. The filter regeneration unit uses a backflush/jet impingement method of regenerating fluid filter elements that is highly efficient. A vortex particle separator and particle trap were designed for zero-gravity use, and the zero-gravity test results are discussed. The filter regeneration unit was designed for both inflight maintenance and ground refurbishment use on space shuttle and future space missions.
Design of a Uranium Dioxide Spheroidization System
NASA Technical Reports Server (NTRS)
Cavender, Daniel P.; Mireles, Omar R.; Frendi, Abdelkader
2013-01-01
The plasma spheroidization system (PSS) is the first process in the development of tungsten-uranium dioxide (W-UO2) cermet fuels. The PSS process improves particle sphericity and surface morphology for subsequent coating by the chemical vapor deposition (CVD) process. Angular, fully dense particles melt in an argon-hydrogen plasma jet at 32-36 kW and become spherical due to surface tension. Surrogate CeO2 powder was used in place of UO2 for system and process parameter development. Particles ranged in size from 50 to 100 microns in diameter. Student's t-test and two-proportion hypothesis-testing statistical methods were applied to characterize and compare the sphericity of pre- and post-process powders. Particle sphericity was determined by an irregularity parameter. Processed powders show a greater than 800% increase in the number of spherical particles over the stock powder, with the mean sphericity only mildly improved. It is recommended that powders be processed two to three times to reach the desired sphericity, and that process parameters be optimized for a narrower particle size range. Keywords: sphericity, spheroidization, plasma, uranium dioxide, cermet, nuclear, propulsion
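The two-proportion comparison described above can be sketched as a pooled z-test. The particle counts below are invented for illustration (roughly a 4% to 36% spherical fraction, i.e. an increase of about 800%); the paper's actual counts are not given in the abstract.

```python
import math

# Pooled two-proportion z-test, as commonly used to compare the fraction of
# spherical particles before and after plasma processing. The counts passed
# in at the bottom are invented illustration values.

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for H0: p1 == p2, with x successes out of n trials each."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_proportion_z(40, 1000, 360, 1000)   # e.g. 4% -> 36% spherical
```

A |z| far above the usual 1.96 threshold would confirm the increase in spherical-particle count is statistically significant even though the mean sphericity changes little.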
Performance Test of Laser Velocimeter System for the Langley 16-foot Transonic Tunnel
NASA Technical Reports Server (NTRS)
Meyers, J. F.; Hunter, W. W., Jr.; Reubush, D. E.; Nichols, C. E., Jr.; Hepner, T. E.; Lee, J. W.
1985-01-01
An investigation in the Langley 16-Foot Transonic Tunnel has been conducted in which a laser velocimeter was used to measure free-stream velocities from Mach 0.1 to 1.0 and the flow velocities along the stagnating streamline of a hemisphere-cylinder model at Mach 0.8 and 1.0. The flow velocity was also measured at Mach 1.0 along the line 0.533 model diameters below the model. These tests determined the performance characteristics of the dedicated two-component laser velocimeter at flow velocities up to Mach 1.0 and the effects of the wind tunnel environment on the particle-generating system and on the resulting size of the generated particles. To determine these characteristics, the measured particle velocities along the stagnating streamline at the two Mach numbers were compared with the theoretically predicted gas and particle velocities calculated using a transonic potential flow method. Through this comparison the mean detectable particle size (2.1 micron) along with the standard deviation of the detectable particles (0.76 micron) was determined; thus the performance characteristics of the laser velocimeter were established.
One-step synthesis of bioactive glass by spray pyrolysis
NASA Astrophysics Data System (ADS)
Shih, Shao-Ju; Chou, Yu-Jen; Chien, I.-Chen
2012-12-01
Bioactive glasses (BGs) have recently received increased attention from biologists and engineers because of their potential applications in bone implants. The sol-gel process is one of the most popular methods for fabricating BGs and has been used for years; however, it has the disadvantages of discontinuous processing and a long processing time. This study presents a one-step spray pyrolysis (SP) synthesis method to overcome these disadvantages. The SP method synthesized spherical bioactive glass (SBG) and mesoporous bioactive glass (MBG) particles using Si-, Ca-, and P-based precursors. Transmission electron microscopy, selected-area electron diffraction, and energy-dispersive X-ray spectroscopy were used to characterize the microstructure, crystallographic structure, and chemical composition of the BG particles. In addition, in vitro bioactivity tests showed the formation of hydroxyapatite layers on SBG and MBG particles after immersion in simulated body fluid for 5 h. The experimental results reveal the SP formation mechanisms of SBG and MBG particles.
Boundary based on exchange symmetry theory for multilevel simulations. I. Basic theory.
Shiga, Motoyuki; Masia, Marco
2013-07-28
In this paper, we lay the foundations for a new method that allows multilevel simulations of a diffusive system, i.e., a system where a flux of particles through the boundaries might disrupt the primary region. The method is based on the use of flexible restraints that maintain the separation between inner and outer particles. It is shown that, by introducing a bias potential that accounts for the exchange symmetry of the system, the correct statistical distribution is preserved. Using a toy model consisting of non-interacting particles in an asymmetric potential well, we prove that the method is formally exact, and that it could be simplified by considering only up to a couple of particle exchanges without a loss of accuracy. A real-world test is then made by considering a hybrid MM(∗)/MM calculation of cesium ion in water. In this case, the single exchange approximation is sound enough that the results superimpose to the exact solutions. Potential applications of this method to many different hybrid QM/MM systems are discussed, as well as its limitations and strengths in comparison to existing approaches.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
Christopher, David; Adams, Wallace P; Lee, Douglas S; Morgan, Beth; Pan, Ziqing; Singh, Gur Jai Pal; Tsong, Yi; Lyapustina, Svetlana
2007-01-19
The purpose of this article is to present the thought process, methods, and interim results of a PQRI Working Group, which was charged with evaluating the chi-square ratio test as a potential method for determining in vitro equivalence of aerodynamic particle size distribution (APSD) profiles obtained from cascade impactor measurements. Because this test was designed with the intention of being used as a tool in regulatory review of drug applications, the capability of the test to detect differences in APSD profiles correctly and consistently was evaluated in a systematic way across a designed space of possible profiles. To establish a "base line," properties of the test in the simplest case of pairs of identical profiles were studied. Next, the test's performance was studied with pairs of profiles, where some difference was simulated in a systematic way on a single deposition site using realistic product profiles. The results obtained in these studies, which are presented in detail here, suggest that the chi-square ratio test in itself is not sufficient to determine equivalence of particle size distributions. This article, therefore, introduces the proposal to combine the chi-square ratio test with a test for impactor-sized mass based on Population Bioequivalence and describes methods for evaluating discrimination capabilities of the combined test. The approaches and results described in this article elucidate some of the capabilities and limitations of the original chi-square ratio test and provide rationale for development of additional tests capable of comparing APSD profiles of pharmaceutical aerosols.
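For concreteness, the per-pair chi-square distance at the heart of the ratio test can be sketched as below. The stage deposition values are illustrative; the full PQRI test additionally forms the ratio of test-to-reference distances over reference-to-reference distances, which is not shown here.

```python
# Sketch of a chi-square distance between two cascade-impactor deposition
# profiles, the building block of the chi-square ratio test. Profiles are
# renormalized so only the shape of the APSD is compared.

def chi_square_distance(profile_a, profile_b):
    """profile_*: deposited amounts per impactor stage (any common units)."""
    ta, tb = sum(profile_a), sum(profile_b)
    a = [x / ta for x in profile_a]          # normalized stage fractions
    b = [x / tb for x in profile_b]
    return sum((ai - bi) ** 2 / (ai + bi)
               for ai, bi in zip(a, b) if ai + bi > 0)
```

In the ratio test, distances between test and reference profiles are compared against the natural variability among reference profiles; the article's point is that this statistic alone cannot establish equivalence and needs to be paired with an impactor-sized-mass test.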
Test methods for determining the suitability of metal alloys for use in oxygen-enriched environments
NASA Technical Reports Server (NTRS)
Stoltzfus, Joel M.; Gunaji, Mohan V.
1991-01-01
Materials are more flammable in oxygen rich environments than in air. When the structural elements of a system containing oxygen ignite and burn, the results are often catastrophic, causing loss of equipment and perhaps even human lives. Therefore, selection of the proper metallic and non-metallic materials for use in oxygen systems is extremely important. While test methods for the selection of non-metallic materials have been available for years, test methods for the selection of alloys have not been available until recently. Presented here are several test methods that were developed recently at NASA's White Sands Test Facility (WSTF) to study the ignition and combustion of alloys, including the supersonic and subsonic speed particle impact tests, the frictional heating and coefficient of friction tests, and the promoted combustion test. These test methods are available for commercial use.
Han, Xue; Zhang, Ding-Kun; Zhang, Fang; Lin, Jun-Zhi; Jiang, Hong; Lan, Yang; Xiong, Xi; Han, Li; Yang, Ming; Fu, Chao-Mei
2017-01-01
Currently, acute upper respiratory tract infections (AURTIs) are an increasingly significant health burden. Gankeshuangqing dispersible tablets (GKSQDT), composed of baicalin and andrographolide, are effective in treating AURTIs. However, their severe bitterness limits patient acceptance, and because of the large amount of excipients, common taste-masking methods are unsuitable for GKSQDT; it was thus necessary to develop a new masking method. A previous study showed that baicalin is less bitter than andrographolide, so particle coating technology was adopted to prepare composite particles in which baicalin coats the surface of andrographolide to decrease bitterness. Initially, the particle size of baicalin and the coating time were investigated to prepare the composite. Scanning electron microscopy, wettability measurements, and infrared (IR) spectroscopy were then used to characterize the microstructure of the composite. Furthermore, an electronic tongue test, an animal preference experiment, and a human sensory test were applied to evaluate the masking effect. To produce the composite, baicalin was ground in a vibromill for 6 min; andrographolide fine powder was then added and ground together for a further 6 min. The contact angle of the composite was smaller than that of the simple mixture and more similar to that of baicalin. Other physical characterization, including microstructure, wettability, and IR, also suggested that andrographolide was successfully coated by superfine baicalin. Furthermore, the taste-masking tests indicated that the taste-masked tablets were less bitter than the original tablets. The study indicates that particle coating technology can be used for taste masking of GKSQDT without adding any other substance, providing a new taste-masking strategy for traditional medicine; the masking effect was confirmed by the electronic tongue test, animal preference experiment, and human sensory test.
Abbreviations used: AURTIs: acute upper respiratory tract infections; GKSQDT: Gankeshuangqing dispersible tablets; IR: infrared spectrogram; LHPC: low-substituted hydroxypropyl cellulose; CAs: contact angles; FTIR: Fourier transform infrared spectra.
Magnetic separation of carbon-encapsulated Fe nanoparticles from thermally-treated wood char
Sung Phil Mun; Zhiyong Cai; Jilei Zhang
2013-01-01
Wood char, a by-product of the fast-pyrolysis process of southern yellow pine wood for bio-oil production, was carbonized with Fe nanoparticles (FeNPs) as a catalyst to prepare carbon-encapsulated Fe nanoparticles. A magnetic separation method was tested to isolate the carbon-encapsulated Fe nanoparticles from the carbonized char. The X-ray diffraction pattern clearly shows...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heaney, Libby; Jaksch, Dieter; Centre for Quantum Technologies, National University of Singapore
Proposals for Bell-inequality tests on systems restricted by the particle-number superselection rule often require operations that are difficult to implement in practice. In this article, we derive a Bell inequality where measurements on pairs of states are used as a method to bypass this superselection rule. In particular, we focus on mode entanglement of an arbitrary number of massive particles and show that our Bell inequality detects the entanglement in an identical pair of states when other inequalities fail. However, as the number of particles in the system increases, the violation of our Bell inequality decreases due to the restriction in the measurement space caused by the superselection rule. This Bell test can be implemented using techniques that are routinely used in current experiments.
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
To solve the short-term scheduling model of a cascaded hydroelectric system, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses the water discharge as the decision variable combined with a death penalty function. Following the principle of maximum power generation, the proposed approach makes use of the ergodicity, symmetry, and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
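The chaotic-map idea above can be sketched as follows. Since the abstract does not specify the "improved" logistic map, this uses the standard logistic map x_{k+1} = μ·x_k·(1 − x_k), which many chaotic-PSO variants substitute for uniform random numbers in the velocity update:

```python
import numpy as np

def logistic_map_sequence(n, x0=0.4, mu=4.0):
    """Standard logistic map x_{k+1} = mu * x_k * (1 - x_k); mu = 4
    gives fully chaotic behaviour on (0, 1). A chaotic PSO typically
    replaces the uniform random numbers in its velocity update with
    such a sequence to exploit its ergodicity."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        xs[k] = x
    return xs

seq = logistic_map_sequence(1000)
print(seq.min(), seq.max())  # values stay within [0, 1]
```

The initial value x0 and length are arbitrary illustrative choices; the map is sensitive to x0, which is what gives the resulting PSO its stochastic character.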
Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems
NASA Astrophysics Data System (ADS)
Khane, Vaibhav; Al-Dahhan, Muthanna H.
2017-04-01
The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters of multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions from the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. The conventional RPT calibration method has major shortcomings that limit its applicability in practical applications. In this work, a novel dynamic RPT calibration technique was designed and developed to overcome these shortcomings. The dynamic RPT calibration technique was implemented around a test reactor 1 foot in diameter and 1 foot in height, using Cobalt-60 as an isotope tracer particle. Two sets of experiments were carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The reconstructed tracer particle positions were compared with the actual known positions and the reconstruction errors were estimated. The results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
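The counts-distance map idea can be illustrated with a deliberately simplified lookup: given a calibration table of detector counts recorded at known tracer positions, return the stored position whose count vector best matches a measurement. The detector counts and positions below are hypothetical; real RPT reconstruction interpolates between calibration points and weights by counting statistics:

```python
import numpy as np

def reconstruct_position(counts, calib_counts, calib_positions):
    # Nearest-neighbour lookup in the counts-distance map: return the
    # calibration position whose detector-count vector is closest (in
    # the least-squares sense) to the measured counts.
    d2 = np.sum((np.asarray(calib_counts) - np.asarray(counts)) ** 2, axis=1)
    return np.asarray(calib_positions)[np.argmin(d2)]

# Hypothetical calibration table: 3 detectors, 4 known tracer positions
calib_counts = [[900, 100, 100], [100, 900, 100], [100, 100, 900], [300, 300, 300]]
calib_positions = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.5, 0.5, 0.5]]
pos = reconstruct_position([850, 120, 130], calib_counts, calib_positions)
print(pos)  # nearest calibration point: [0. 0. 0.]
```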
Airborne particles released by crushing CNT composites
NASA Astrophysics Data System (ADS)
Ogura, I.; Okayama, C.; Kotake, M.; Ata, S.; Matsui, Y.; Gotoh, K.
2017-06-01
We investigated airborne particles released as a result of crushing carbon nanotube (CNT) composites using a laboratory scale crusher with rotor blades. For each crushing test, five pellets (approximately 0.1 g) of a polymer (polystyrene, polyamide, or polycarbonate) containing multiwall CNTs (Nanocyl NC7000 or CNano Flotube9000) or no CNTs were placed in the container of the crusher. The airborne particles released by the crushing of the samples were measured. The real-time aerosol measurements showed increases in the concentration of nanometer- and micrometer-sized particles, regardless of the sample type, even when CNT-free polymers were crushed. The masses of the airborne particles collected on filters were below the detection limit, which indicated that the mass ratios of the airborne particles to the crushed pellets were lower than 0.02%. In the electron microscopic analysis, particles with protruding CNTs were observed. However, free-standing CNTs were not found, except for a poorly dispersed CNT-polystyrene composite. This study demonstrated that the crushing test using a laboratory scale crusher is capable of evaluating the potential release of CNTs as a result of crushing CNT composites. The advantage of this method is that only a small amount of sample (several pieces of pellets) is required.
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, Dynamic Varying Search Area (DVSA), limits the range of particle activity to a reduced area. The second uses Lévy flights to generate stochastic disturbances in the movement of particles in order to escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other PSO variants on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem. PMID:24851085
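A common way to generate the Lévy-flight disturbances mentioned above is Mantegna's algorithm; the sketch below is a generic version (the index β, the sample size, and the seed are arbitrary illustrative choices, not parameters from the paper):

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(beta=1.5, size=10000, seed=0):
    # Mantegna's algorithm: step = u / |v|^(1/beta), with u and v
    # Gaussian and sigma_u chosen so that the resulting steps follow
    # a heavy-tailed Levy-stable law with index beta (1 < beta <= 2).
    rng = np.random.default_rng(seed)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

steps = levy_steps()
# Heavy tail: the occasional jump dwarfs the typical step length,
# which is what lets a particle escape a local optimum
print(np.median(np.abs(steps)), np.abs(steps).max())
```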
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamberaj, Hiqmet, E-mail: hkamberaj@ibu.edu.mk
In this paper, we present a new method based on swarm particle social intelligence for use in replica exchange molecular dynamics simulations. In this method, the replicas (representing the different system configurations) are allowed to communicate with each other through individual and social knowledge, in addition to being treated as a collection of real particles interacting through Newtonian forces. The new method is based on modifying the equations of motion in such a way that the replicas are driven towards the global energy minimum. The method was tested on Lennard-Jones clusters of N = 4, 5, and 6 atoms. Our results showed that the new method is more efficient than the conventional replica exchange method under the same practical conditions. In particular, the new method performed better at optimizing the distribution of the replicas among the thermostats over time and, in addition, ergodic convergence was observed to be faster. We also introduce a weighted histogram analysis method that allows the simulation data to be analyzed by combining data from all of the replicas while rigorously removing the inserted bias.
NASA Astrophysics Data System (ADS)
Kamberaj, Hiqmet
2015-09-01
In this paper, we present a new method based on swarm particle social intelligence for use in replica exchange molecular dynamics simulations. In this method, the replicas (representing the different system configurations) are allowed to communicate with each other through individual and social knowledge, in addition to being treated as a collection of real particles interacting through Newtonian forces. The new method is based on modifying the equations of motion in such a way that the replicas are driven towards the global energy minimum. The method was tested on Lennard-Jones clusters of N = 4, 5, and 6 atoms. Our results showed that the new method is more efficient than the conventional replica exchange method under the same practical conditions. In particular, the new method performed better at optimizing the distribution of the replicas among the thermostats over time and, in addition, ergodic convergence was observed to be faster. We also introduce a weighted histogram analysis method that allows the simulation data to be analyzed by combining data from all of the replicas while rigorously removing the inserted bias.
Determination of MIL-H-6083 Hydraulic Fluid In-Service Use Limits for Self-Propelled Artillery
1991-09-01
determined using the American Society for Testing and Materials (ASTM) D1744 Karl Fischer Reagent method. The specification limit is 0.05% (500 parts per...carefully controlled. TOTAL ACID NUMBER The acid number was determined by the ASTM D664 potentiometric titration test method. Unfortunately, data were...fluid condition test results with AOAP test data was found. The Navy Patch Kit method for particle contamination measurement was evaluated as a possible
Percent area coverage through image analysis
NASA Astrophysics Data System (ADS)
Wong, Chung M.; Hong, Sung M.; Liu, De-Ling
2016-09-01
The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. In the absence of detailed particle data, PAC has conventionally been calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving this set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and may therefore introduce error into the PAC calculation. Other errors may also be introduced by the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
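The coefficient-based PAC calculation can be illustrated with a toy version that treats each particle as a circle whose diameter equals its bin size; the bin sizes, counts, and the circular-area assumption are illustrative simplifications, not the actual MIL-STD-1246C coefficients:

```python
import numpy as np

def percent_area_coverage(bin_sizes_um, counts, surface_area_m2=0.1):
    # Sum the cross-sectional area covered by each size bin (circular
    # particles assumed) and express it as a percent of the surface.
    # The real MIL-STD-1246C coefficients also account for the shapes
    # of irregular particles.
    radii_m = np.asarray(bin_sizes_um, dtype=float) * 1e-6 / 2.0
    covered_m2 = np.sum(np.asarray(counts, dtype=float) * np.pi * radii_m ** 2)
    return 100.0 * covered_m2 / surface_area_m2

# Hypothetical particle counts on 0.1 m^2 in 5, 15, 50 and 100 um bins
pac = percent_area_coverage([5, 15, 50, 100], [1e6, 1e5, 1e4, 1e3])
print(f"{pac:.4f} %")  # well below a PAC of 0.1
```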
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface, together with the flame heat flux, were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standard was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the specimen surface temperature calculated from the convective heat transfer coefficient, surface emissivity, and flame heat flux estimated by the repulsive particle swarm optimization algorithm was consistent with the measured temperature. Considering the measurement errors in the specimen surface temperature, the applicability of the optimization method considered in this study was evaluated.
Test Particle Stability in Exoplanet Systems
NASA Astrophysics Data System (ADS)
Frewen, Shane; Hansen, B. M.
2011-01-01
Astronomy is currently going through a golden age of exoplanet discovery. Yet despite that, there is limited research on the evolution of exoplanet systems driven by stellar evolution. In this work we look at the stability of test particles in known exoplanet systems during the host star's main sequence and white dwarf stages. In particular, we compare the instability regions that develop before and after the star loses mass to form a white dwarf, a process which causes the semi-major axes of the outer planets to expand adiabatically. We investigate the possibility of secular and resonant perturbations producing these regions, as well as the mechanisms by which test particles are removed from the instability regions, such as ejection and collision with the central star. To run our simulations we used the MERCURY software package (Chambers, 1999) and evolved our systems for over 10^8 years using a hybrid symplectic/Bulirsch-Stoer integrator.
Dispersion of Co/CNTs via strong electrostatic adsorption method: Thermal treatment effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akbarzadeh, Omid, E-mail: omid.akbarzadeh63@gmail.com; Abdullah, Bawadi, E-mail: bawadi-abdullah@petronas.com.my; Subbarao, Duvvuri, E-mail: duvvuri-subbarao@petronas.com.my
The effect of different thermal treatment temperatures on the structure of multi-walled carbon nanotubes (MWCNTs) and on Co particle dispersion on the CNT support is studied using the strong electrostatic adsorption (SEA) method. The samples were tested by N₂ adsorption, field emission scanning electron microscopy (FE-SEM), and transmission electron microscopy (TEM). N₂ adsorption results showed that the BET surface area increased with thermal treatment, and TEM images showed that increasing the thermal treatment temperature led to flaky CNTs with defects introduced on the outer surface, and that Co particle dispersion increased.
Dobson, Ruaraidh; Semple, Sean
2018-06-18
Second-hand smoke (SHS) at home is a target for public health interventions, such as air quality feedback interventions using low-cost particle monitors. However, these monitors also detect fine particles generated from non-SHS sources. The Dylos DC1700 reports particle counts in the coarse and fine size ranges. As tobacco smoke produces far more fine particles than coarse ones, and tobacco is generally the greatest source of particulate pollution in a smoking home, the ratio of coarse to fine particles may provide a useful method to identify the presence of SHS in homes. An algorithm was developed to differentiate smoking from smoke-free homes. Particle concentration data from 116 smoking homes and 25 non-smoking homes were used to test this algorithm. The algorithm correctly classified the smoking status of 135 of the 141 homes (96%), comparing favourably with a test of mean mass concentration. Applying this algorithm to Dylos particle count measurements may help identify the presence of SHS in homes or other indoor environments. Future research should adapt it to detect individual smoking periods within a 24 h or longer measurement period. Copyright © 2018 Elsevier Inc. All rights reserved.
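The coarse-to-fine ratio idea can be sketched as a toy classifier; the threshold and the count values below are hypothetical, not the calibrated algorithm from the study:

```python
def classify_home(fine_counts, coarse_counts, ratio_threshold=50.0):
    # Tobacco smoke produces far more fine than coarse particles, so a
    # persistently high fine:coarse count ratio suggests second-hand
    # smoke. The threshold is a hypothetical value for illustration,
    # not the study's calibrated one.
    ratios = [f / c for f, c in zip(fine_counts, coarse_counts) if c > 0]
    if not ratios:
        return "unknown"
    mean_ratio = sum(ratios) / len(ratios)
    return "smoking" if mean_ratio > ratio_threshold else "smoke-free"

print(classify_home([120000, 90000], [500, 400]))  # prints "smoking"
print(classify_home([8000, 6000], [600, 500]))     # prints "smoke-free"
```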
Jung, Jae Hee; Lee, Jung Eun; Bae, Gwi Nam
2011-08-01
The ultraviolet aerodynamic particle sizer (UVAPS) is a novel commercially available aerosol spectrometer for real-time continuous monitoring of viable bioaerosols, based on fluorescence from living microorganisms. In a previous study, we developed an electrospray-assisted UVAPS using biological electrospray techniques, which have the advantage of generating non-agglomerated single particles through repulsive electrical forces. With this electrospraying of suspensions containing microorganisms, the analytical system can supply more accurate and quantitative information about living microorganisms than conventional aerosolization. Using the electrospray-assisted UVAPS, we investigated the characteristics of bacterial particles with various viabilities in real time. Escherichia coli was used as the test microorganism, and its initial viability was controlled by the degree of exposure to UV irradiation. In the stable cone-jet domain, the particle size distributions of the test bacterial particles remained almost uniform regardless of the degree of UV inactivation. However, the fluorescence spectra of the bacterial particles changed with the degree of UV inactivation. The fluorescence characteristics of UV-inactivated bacterial particles tended to decline with viability, as determined by the sampling and culture method, although the percentage showing fluorescence was higher than that showing viability. Copyright © 2011 Elsevier B.V. All rights reserved.
Sleeth, Darrah K; Balthaser, Susan A; Collingwood, Scott; Larson, Rodney R
2016-03-07
Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET₁) and the posterior nasal and oral passages (ET₂). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm-44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device.
Sleeth, Darrah K.; Balthaser, Susan A.; Collingwood, Scott; Larson, Rodney R.
2016-01-01
Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET1) and the posterior nasal and oral passages (ET2). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm–44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device. PMID:26959046
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.Y.; Tepikian, S.
1985-01-01
Nonlinear magnetic forces become more important for particles in modern large accelerators. These nonlinear elements are introduced either intentionally, to control beam dynamics, or by uncontrollable random errors. Equations of motion for the nonlinear Hamiltonian are usually non-integrable. Because of the nonlinear part of the Hamiltonian, the tune diagram of accelerators is a jungle. Nonlinear magnet multipoles are important in keeping the accelerator operating point in the safe quarter of the hostile jungle of resonant tunes. Indeed, all modern accelerator designs have taken advantage of nonlinear mechanics. On the other hand, the effect of the uncontrollable random multipoles should be evaluated carefully. A powerful method of studying the effect of these nonlinear multipoles is a particle tracking calculation, in which a group of test particles is traced through the magnetic multipoles in the accelerator for hundreds to millions of turns in order to test the dynamic aperture of the machine. These methods are extremely useful in the design of large accelerators such as the SSC, LEP, HERA, and RHIC. These calculations unfortunately take a tremendous amount of computing time. In this review, the method of determining chaotic orbits and its application to nonlinear problems in accelerator physics is discussed. We then discuss the scaling properties and the effect of random sextupoles.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
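A minimal sketch of a PSO update that adds the "best neighbor" attraction described above; the ring topology, coefficient values, and test function are conventional illustrative choices, not the paper's:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, seed=0):
    # Standard PSO velocity update (inertia + cognitive + social terms)
    # plus an extra pull toward the ring neighbour's personal best,
    # which stands in for the "best neighbour" information; the
    # coefficients are conventional choices, not the paper's.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    for _ in range(iters):
        g = pbest[pval.argmin()]              # best particle of the swarm
        nb = pbest[(np.arange(n) + 1) % n]    # ring neighbour's best
        r1, r2, r3 = rng.random((3, n, dim))
        v = (0.7 * v + 1.5 * r1 * (pbest - x)
             + 1.5 * r2 * (g - x) + 0.5 * r3 * (nb - x))
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[pval.argmin()], float(pval.min())

best_x, best_f = pso(lambda p: float(np.sum(p ** 2)))  # sphere function
print(best_f)  # should be near zero
```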
An analysis of the physiologic parameters of intraoral wear: a review
NASA Astrophysics Data System (ADS)
Lawson, Nathaniel C.; Janyavula, Sridhar; Cakir, Deniz; Burgess, John O.
2013-10-01
This paper reviews the conditions of in vivo mastication and describes a novel method of measuring in vitro wear. Methods: Parameters of intraoral wear reviewed in this analysis include chewing force, tooth sliding distance, food abrasivity, saliva lubrication, and antagonist properties. Results: Clinical measurement of mastication forces indicates a range of normal forces between 20 and 140 N for a single molar. During the sliding phase of mastication, horizontal movement has been measured between 0.9 and 2.86 mm. In vivo wear occurs by three-body abrasion when food particles are interposed between teeth and by two-body abrasion after food clearance. Analysis of food particles used in wear testing reveals that food particles are softer than enamel, are large enough to separate enamel and restoration surfaces, and act as a solid lubricant. In two-body wear, saliva acts as a boundary lubricant with a viscosity of 3 cP. Enamel is the most relevant antagonist material for wear testing. The shape of a palatal cusp has been estimated as a 0.6 mm diameter ball, and the hardest region of a tooth is its enamel surface. pH values and temperatures in intraoral fluids have been shown to range between 2-7 and 5-55 °C, respectively. These intraoral parameters have been used to modify the Alabama wear testing method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Arka; Dalal, Neal, E-mail: abanerj6@illinois.edu, E-mail: dalaln@illinois.edu
We present a new method for simulating cosmologies that contain massive particles with thermal free-streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.
Kim, Sunduk; Yang, Ji-Yeon; Kim, Ho-Hyun; Yeo, In-Young; Shin, Dong-Chun
2012-01-01
Objectives: The purpose of this study was to assess the risk of lead ingestion exposure by particle size of crumb rubber used as artificial turf infill, with consideration of bioavailability. Methods: This study estimated ingestion exposure by particle size (greater than or less than 250 µm), focusing on recyclable ethylene propylene diene monomer crumb rubber used as artificial turf infill. Analysis of the crumb rubber was conducted using a body ingestion exposure estimation method reflecting the total content test method, the acid extraction method, and the digestion extraction method. Bioavailability, a calibrating factor, was incorporated into the ingestion exposure estimation method and applied in the exposure assessment and risk assessment. The two methods, using acid extraction and digestion extraction concentrations, were compared and evaluated. Results: For ingestion exposure to the crumb rubber material, the average lead exposure based on the digestion extraction result was calculated to be 1.56×10⁻⁴ mg/kg-day for lower-grade elementary school students and 4.87×10⁻⁵ mg/kg-day for middle and high school students for the particle size of 250 µm or less, and exposure based on the acid extraction result was higher than that based on digestion extraction. Results of digestion extraction and acid extraction showed that the hazard quotient for particles smaller than 250 µm was estimated to be about 2 times higher than for particles larger than 250 µm. In one case, that of an elementary school student, the hazard quotient exceeded 0.1. Conclusions: The results of this study confirm that lead ingestion exposure and risk increase as the particle size of crumb rubber decreases. PMID:22355803
Trojan horse particle invariance: The impact on nuclear astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pizzone, R. G.; La Cognata, M.; Spitaleri, C.
In the current picture of nuclear astrophysics, indirect methods, and in particular the Trojan Horse Method, play a crucial role in the measurement of charged-particle-induced reaction cross sections of astrophysical interest in the energy range required by the astrophysical scenarios. To better understand its cornerstones and its applications to physical cases, many tests were performed to verify all its properties and possible future perspectives. The key to the method is the quasi-free break-up, and some of its properties are investigated in the present work. In particular, the Trojan Horse nucleus invariance is studied, and previous studies are extended to the cases of the binary d(d,p)t and ⁶Li(d,α)⁴He reactions, which were tested using different quasi-free break-ups, namely ⁶Li and ³He. The astrophysical S(E)-factors were then extracted with the Trojan Horse formalism applied to the two different break-up schemes and compared with direct data as well as with previous indirect investigations. The very good agreement confirms the independence of the binary indirect cross section on the chosen spectator particle for these reactions as well.
Enriched reproducing kernel particle method for fractional advection-diffusion equation
NASA Astrophysics Data System (ADS)
Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam
2018-06-01
The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients, and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method based on a moving least-squares approach. By enriching the traditional integer-order basis for RKPM with fractional-order power functions, the leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.
A novel rheometer design for yield stress fluids
Joseph R. Samaniuk; Timothy W. Shay; Thatcher W. Root; Daniel J. Klingenberg; C. Tim Scott
2014-01-01
An inexpensive, rapid method for measuring the rheological properties of yield stress fluids is described and tested. The method uses an auger that does not rotate during measurements, and avoids material and instrument-related difficulties, for example, wall slip and the presence of large particles, associated with yield stress fluids. The method can be used...
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
A generalized transport-velocity formulation for smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.
The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized to provide a solution to this long-standing problem. Rather than imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, this modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also compute the response of a horizontally layered anisotropic earth with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network
NASA Technical Reports Server (NTRS)
Wiesenberg, Brent
1999-01-01
Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.
FRACTIONAL AEROSOL FILTRATION EFFICIENCY OF IN-DUCT VENTILATION AIR CLEANERS
The filtration efficiency of ventilation air cleaners is highly particle-size dependent over the 0.01 to 3 μm diameter size range. Current standardized test methods, which determine only overall efficiencies for ambient aerosol or other test aerosols, provide data of limited util...
ERIC Educational Resources Information Center
Ziegler, Robert Edward
This study is concerned with determining the relative effectiveness of a static and dynamic theoretical model in teaching elementary school students to use the particle idea of matter when explaining certain physical phenomena. A clinical method of personal individual interview-testing, teaching, and retesting of a random sample population from…
Mabray, Marc C.; Lillaney, Prasheel; Sze, Chia-Hung; Losey, Aaron D.; Yang, Jeffrey; Kondapavulur, Sravani; Liu, Derek; Saeed, Maythem; Patel, Anand; Cooke, Daniel; Jun, Young-Wook; El-Sayed, Ivan; Wilson, Mark; Hetts, Steven W.
2015-01-01
Purpose To establish that a magnetic device designed for intravascular use can bind small iron particles in physiologic flow models. Materials and Methods Uncoated iron oxide particles 50–100 nm and 1–5 μm in size were tested in a water flow chamber over a period of 10 minutes without a magnet (ie, control) and with large and small prototype magnets. These same particles and 1-μm carboxylic acid–coated iron oxide beads were likewise tested in a serum flow chamber model without a magnet (ie, control) and with the small prototype magnet. Results Particles were successfully captured from solution. Particle concentrations in solution decreased in all experiments (P < .05 vs matched control runs). At 10 minutes, concentrations were 98% (50–100-nm particles in water with a large magnet), 97% (50–100-nm particles in water with a small magnet), 99% (1–5-μm particles in water with a large magnet), 99% (1–5-μm particles in water with a small magnet), 95% (50–100-nm particles in serum with a small magnet), 92% (1–5-μm particles in serum with a small magnet), and 75% (1-μm coated beads in serum with a small magnet) lower compared with matched control runs. Conclusions This study demonstrates the concept of magnetic capture of small iron oxide particles in physiologic flow models by using a small wire-mounted magnetic filter designed for intravascular use. PMID:26706187
NASA Astrophysics Data System (ADS)
Conny, Joseph M.; Ortiz-Montalvo, Diana L.
2017-09-01
We show the effect of composition heterogeneity and shape on the optical properties of urban dust particles based on the three-dimensional spatial and optical modeling of individual particles. Using scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDX) and focused ion beam (FIB) tomography, spatial models of particles collected in Los Angeles and Seattle accounted for surface features, inclusions, and voids, as well as overall composition and shape. Using voxel data from the spatial models and the discrete dipole approximation method, we report extinction efficiency, asymmetry parameter, and single-scattering albedo (SSA). Test models of the particles involved (1) the particle's actual morphology as a single homogeneous phase and (2) simple geometric shapes (spheres, cubes, and tetrahedra) depicting composition homogeneity or heterogeneity (with multiple spheres). Test models were compared with a reference model, which included the particle's actual morphology and heterogeneity based on SEM/EDX and FIB tomography. Results show particle shape to be a more important factor for determining extinction efficiency than accounting for individual phases in a particle, regardless of whether absorption or scattering dominated. In addition to homogeneous models with the particles' actual morphology, tetrahedral geometric models provided better extinction accuracy than spherical or cubic models. For iron-containing heterogeneous particles, the asymmetry parameter and SSA varied with the composition of the iron-containing phase, even if the phase was <10% of the particle volume. For particles containing loosely held phases with widely varying refractive indexes (i.e., exhibiting "severe" heterogeneity), only models that account for heterogeneity may sufficiently determine SSA.
Light scattering methods to test inorganic PCMs for application in buildings
NASA Astrophysics Data System (ADS)
De Paola, M. G.; Calabrò, V.; De Simone, M.
2017-10-01
Thermal performance and stability over time are key parameters for the characterization and application of PCMs in the building sector. Generally, inorganic PCMs are dispersions of hydrated salts and additives in water that counteract phase segregation phenomena and subcooling. Traditional or in-house methods can be used for evaluating thermal properties, while stability over time can be estimated using optical techniques. Following this double approach, in this work thermal and structural analyses of Glauber salt based composite PCMs are conducted by means of non-conventional equipment: the T-history method (thermal analysis) and the Turbiscan (stability analysis). Three samples with the same composition (Glauber salt with additives) were prepared using different sonication times, and their thermal performances were compared by testing both thermal cycling and thermal properties. The stability of the mixtures was verified by identifying destabilization phenomena, evaluating the migration velocities of particles, and estimating the variation in particle size.
NASA Astrophysics Data System (ADS)
Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques
2013-03-01
In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (the generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, and especially on the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation, and theoretical results are provided. Then we show, regardless of the interpolation method, the need for a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
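The PPH idea can be sketched, assuming the standard 4-point midpoint-prediction form in which the arithmetic mean of neighboring second differences is replaced by a sign-aware harmonic mean; this is an illustrative sketch of the published PPH rule, not the authors' exact particle-tracking implementation.

```python
def harmonic(x, y):
    # Sign-aware harmonic mean: zero when the second differences disagree
    # in sign, i.e. near a discontinuity.
    return 2.0 * x * y / (x + y) if x * y > 0 else 0.0

def midpoint_linear(f0, f1, f2, f3):
    # Classical 4-point rule: correction uses the arithmetic mean of the
    # two second differences, which overshoots across jumps (Gibbs).
    d1, d2 = f2 - 2.0 * f1 + f0, f3 - 2.0 * f2 + f1
    return (f1 + f2) / 2.0 - (d1 + d2) / 16.0

def midpoint_pph(f0, f1, f2, f3):
    # PPH rule: the harmonic mean damps the correction near a jump,
    # suppressing the oscillation while keeping accuracy on smooth data.
    d1, d2 = f2 - 2.0 * f1 + f0, f3 - 2.0 * f2 + f1
    return (f1 + f2) / 2.0 - harmonic(d1, d2) / 8.0
```

On smooth (here quadratic) data both rules agree, while across a step `[0, 1, 1, 1]` the linear rule overshoots above 1 and the PPH rule stays within the data range.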
Single particle analysis based on Zernike phase contrast transmission electron microscopy.
Danev, Radostin; Nagayama, Kuniaki
2008-02-01
We present the first application of Zernike phase-contrast transmission electron microscopy to single-particle 3D reconstruction of a protein, using GroEL chaperonin as the test specimen. We evaluated the performance of the technique by comparing 3D models derived from Zernike phase-contrast imaging with models from conventional underfocus phase-contrast imaging. The same resolution, about 12 Å, was achieved by both imaging methods. The reconstruction based on Zernike phase-contrast data required about 30% fewer particles. The advantages and prospects of each technique are discussed.
Amorphous Silica Micro Powder Additive Influence on Tensile Strength of One-Ply Particle Board
NASA Astrophysics Data System (ADS)
Pitukhin, A. V.; Kolesnikov, G. N.; Panov, N. G.; Vasilyev, S. B.
2018-03-01
The methods and results of an experimental investigation of the influence of an amorphous silica micro powder additive mixed into the glue for one-ply particle board are presented in this article. Wooden particles of coniferous and hardwood species, as well as a glue solution based on carbamide-formaldehyde resin, were used for board manufacturing. The amorphous silica micro powder used in the experiment contained particles with an average size of 8 μm and a specific surface area of 120-400 m²/g. The samples were tested to determine their physical-mechanical properties. It was found that a 1 % amorphous silica micro powder additive increases the breaking point of one-ply particle board under tensile stress by 143 %.
Onion-shell model of cosmic ray acceleration in supernova remnants
NASA Technical Reports Server (NTRS)
Bogdan, T. J.; Volk, H. J.
1983-01-01
A method is devised to approximate the spatially averaged momentum distribution function for the accelerated particles at the end of the active lifetime of a supernova remnant. The analysis is confined to the test particle approximation and adiabatic losses are oversimplified, but unsteady shock motion, evolving shock strength, and non-uniform gas flow effects on the accelerated particle spectrum are included. Monoenergetic protons are injected at the shock front. It is found that the dominant effect on the resultant accelerated particle spectrum is a changing spectral index with shock strength. High energy particles are produced in early phases, and the resultant distribution function is a slowly varying power law over several orders of magnitude, independent of the specific details of the supernova remnant.
Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods
NASA Astrophysics Data System (ADS)
Park, Brian T.; Petrosian, Vahe
1996-03-01
Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and its equivalent stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference methods.
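The Fokker-Planck/SDE correspondence behind the stochastic simulation method can be illustrated with a minimal Euler-Maruyama integrator: for an equation of the form ∂f/∂t = -∂x(μf) + (1/2)∂xx(σ²f), test particles follow dX = μ dt + σ dW. The Ornstein-Uhlenbeck coefficients below are a toy test case with a known stationary density N(0, 1), not the singular acceleration equations studied in the paper; too few particles would leave visible sampling noise in the recovered moments.

```python
import math
import random

def simulate(mu, sigma, x0, dt, steps, n_particles, seed=42):
    """Euler-Maruyama evolution of an ensemble of test particles for
    dX = mu(X) dt + sigma(X) dW."""
    rng = random.Random(seed)
    xs = [x0] * n_particles
    for _ in range(steps):
        xs = [x + mu(x) * dt + sigma(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs

# Toy test case: Ornstein-Uhlenbeck process, mu(x) = -x, sigma = sqrt(2),
# whose stationary density is the standard normal N(0, 1).
xs = simulate(mu=lambda x: -x, sigma=lambda x: math.sqrt(2.0),
              x0=0.0, dt=0.01, steps=400, n_particles=3000)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

After a few relaxation times the ensemble mean and variance approach 0 and 1; the residual scatter (of order 1/√N) is exactly the Poisson-type noise the abstract warns about when too few test particles are used.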
Weil, Mirco; Meißner, Tobias; Busch, Wibke; Springer, Armin; Kühnel, Dana; Schulz, Ralf; Duis, Karen
2015-10-15
For the degradation of halogenated chemicals in groundwater, Carbo-Iron®, a composite of activated carbon and nano-sized Fe(0), was developed (Mackenzie et al., 2012). Potential effects of this nanocomposite on fish were assessed. Beyond the contaminated zone, Fe(0) can be expected to have oxidized, and Carbo-Iron was therefore used in its oxidized form in the ecotoxicological tests. Potential effects of Carbo-Iron on zebrafish (Danio rerio) were investigated using a 48 h embryo toxicity test under static conditions, a 96 h acute test with adult fish under semi-static conditions, and a 34 d fish early life stage test (FELST) in a flow-through system. Particle diameters in test suspensions were determined via dynamic light scattering (DLS) and ranged from 266 to 497 nm. Particle concentrations were measured weekly in samples from the FELST using a method based on the DLS count rate. Additionally, uptake of particles into test organisms was investigated using microscopic methods. Furthermore, effects of Carbo-Iron on gene expression were investigated by microarray analysis in zebrafish embryos. In all tests performed, no significant lethal effects were observed. Furthermore, Carbo-Iron had no significant influence on the weight and length of fish as determined in the FELST. In the embryo test and the early life stage test, growth of fungi on the chorion was observed at Carbo-Iron concentrations between 6.3 and 25 mg/L. Fungal growth did not affect survival, hatching success, or growth. In the embryo test, no passage of Carbo-Iron particles into the perivitelline space or the embryo was observed. In juvenile and adult fish, Carbo-Iron was detected in the gut at the end of exposure. In juvenile fish exposed to Carbo-Iron for 29 d and subsequently kept for 5 d in control water, Carbo-Iron was no longer detectable in the gut. Global gene expression in zebrafish embryos was not significantly influenced by Carbo-Iron. Copyright © 2015 Elsevier B.V. All rights reserved.
Liu, Dandan; Pan, Hao; He, Fengwei; Wang, Xiaoyu; Li, Jinyu; Yang, Xinggang; Pan, Weisan
2015-01-01
The purpose of this work was to explore the particle size reduction effect of carvedilol on dissolution and absorption. Three suspensions containing different sized particles were prepared by antisolvent precipitation method or in combination with an ultrasonication process. The suspensions were characterized for particle size, surface morphology, and crystalline state. The crystalline form of carvedilol was changed into amorphous form after antisolvent precipitation. The dissolution rate of carvedilol was significantly accelerated by a reduction in particle size. The intestinal absorption of carvedilol nanosuspensions was greatly improved in comparison with microsuspensions and solution in the in situ single-pass perfusion experiment. The in vivo evaluation demonstrated that carvedilol nanosuspensions and microsuspensions exhibited markedly increased Cmax (2.09- and 1.48-fold) and AUC0−t (2.11- and 1.51-fold), and decreased Tmax (0.34- and 0.48-fold) in contrast with carvedilol coarse suspensions. Moreover, carvedilol nanosuspensions showed good biocompatibility with the rat gastric mucosa in in vivo gastrointestinal irritation test. The entire results implicated that the dissolution rate and the oral absorption of carvedilol were significantly affected by the particle size. Particle size reduction to form nanosized particles was found to be an efficient method for improving the oral bioavailability of carvedilol. PMID:26508852
NASA Astrophysics Data System (ADS)
Iwata, Ayumi; Matsuki, Atsushi
2018-02-01
In order to better characterize ice nucleating (IN) aerosol particles in the atmosphere, we investigated the chemical composition, mixing state, and morphology of atmospheric aerosols that nucleate ice under conditions relevant for mixed-phase clouds. Five standard mineral dust samples (quartz, K-feldspar, Na-feldspar, Arizona test dust, and Asian dust source particles) were compared with actual aerosol particles collected from the west coast of Japan (the city of Kanazawa) during Asian dust events in February and April 2016. Following droplet activation by particles deposited on a hydrophobic Si (silicon) wafer substrate under supersaturated air, individual IN particles were located using an optical microscope by gradually cooling the temperature to -30 °C. For the aerosol samples, both the IN active particles and non-active particles were analyzed individually by atomic force microscopy (AFM), micro-Raman spectroscopy, and scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDX). Heterogeneous ice nucleation in all standard mineral dust samples tested in this study was observed at consistently higher temperatures (e.g., -22.2 to -24.2 °C with K-feldspar) than the homogeneous freezing temperature (-36.5 °C). Meanwhile, most of the IN active atmospheric particles formed ice below -28 °C, i.e., at lower temperatures than the standard mineral dust samples of pure components. The most abundant IN active particles above -30 °C were predominantly irregular solid particles that showed clay mineral characteristics (or mixtures of several mineral components). Other than clay, Ca-rich particles internally mixed with other components, such as sulfate, were also regarded as IN active particle types. Moreover, sea salt particles were predominantly found in the non-active fraction, and internal mixing with sea salt clearly acted as a significant inhibiting agent for the ice nucleation activity of mineral dust particles. 
Also, relatively pure or fresh calcite, Ca(NO3)2, and (NH4)2SO4 particles were more often found in the non-active fraction. In this study, we demonstrated the capability of the combined single droplet freezing method and thorough individual particle analysis to characterize the ice nucleation activity of atmospheric aerosols. We also found that dramatic changes in the particle mixing states during long-range transport had a complex effect on the ice nucleation activity of the host aerosol particles. A case study in the Asian dust outflow region highlighted the need to consider particle mixing states, which can dramatically influence ice nucleation activity.
A Method to Estimate Fabric Particle Penetration Performance
2014-09-08
...may be needed to improve the correlation between wind tunnel component sleeve tests and bench-top swatch tests. The ability to predict multi-layered...within the fabric/component gap...impermeable garment. Heat stress becomes a major problem with this approach, however, as normal physiological heat loss mechanisms (especially sweat
A Case Study of Using Zero-Valent Iron Nanoparticles for Groundwater Remediation
NASA Astrophysics Data System (ADS)
Xiong, Z.; Kaback, D.; Bennett, P. J.
2011-12-01
Zero-valent iron nanoparticles (nZVI) are a promising technology for rapid in situ remediation of numerous contaminants, including chlorinated solvents, in groundwater and soil. Because of the high specific surface area of nZVI particles, this technology achieves treatment rates that are significantly faster than micron-scale and granular ZVI. However, a key technical challenge facing this technology is the agglomeration of nZVI particles. To improve nZVI mobility/deliverability and reactivity, an innovative method was recently developed using a low-cost, biodegradable organic polymer as a stabilizer. This nZVI stabilization strategy offers unique advantages: (1) the organic polymer is cost-effective and "green" (completely bio-compatible); (2) the organic polymer is highly effective in stabilizing nZVI particles; and (3) the stabilizer is applied during particle preparation, making the nZVI particles more stable. Through funding from the U.S. Air Force Center for Engineering and the Environment (AFCEE), AMEC performed a field study to test the effectiveness of this innovative technology for the degradation of chlorinated solvents in groundwater at a military site. Laboratory treatability tests were conducted using groundwater samples collected from the test site, and results indicated that trichloroethene (the main groundwater contaminant at the site) was completely degraded within four hours by nZVI particles. In March and May 2011, two rounds of nZVI injection were performed at the test site. Approximately 700 gallons of nZVI suspension with palladium as a catalyst were successfully prepared in the field and injected into the subsurface. Before injection, membrane filters with a pore size of 450 nm were used to check the nZVI particle size; >85% of the nZVI particles passed through the filter based on total iron measurements, indicating a particle size of <450 nm.
During field injections, nZVI particles were observed in a monitoring well located 5 feet downgradient from the injection well. Chlorinated solvent degradation products, e.g. ethane and ethene, increased significantly in monitoring wells following nZVI injections. Groundwater monitoring will be continued for approximately eight months following the last sampling event in July 2011 to demonstrate the performance of nZVI particles.
Exactly energy conserving semi-implicit particle in cell formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be
We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it retains the explicit computational cycle while conserving energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly, to round-off, for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are assessed. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM), called the Energy Conserving Semi-Implicit Method (ECSIM). • The novelty of the new method is that, unlike any of its predecessors, it retains the explicit computational cycle while conserving energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length.
• These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520
2010-03-15
The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal, with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
A particle finite element method for machining simulations
NASA Astrophysics Data System (ADS)
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile load is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.
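The α-shape criterion used to track the topology can be sketched as a circumradius filter over candidate triangles, assuming the common 2D formulation in which a triangle survives only if its circumradius is below a chosen α; this is an illustration of that test, not the paper's implementation.

```python
import math

def circumradius(p1, p2, p3):
    """Circumradius R = abc / (4K) of a 2D triangle with side lengths
    a, b, c and area K (computed from the cross product)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    return a * b * c / (4.0 * area)

def alpha_filter(triangles, alpha):
    """Keep only triangles whose circumradius is below alpha; large, sliver
    triangles spanning 'empty' regions are discarded, which is how the
    method detects boundaries and material separation."""
    return [t for t in triangles if circumradius(*t) <= alpha]
```

A compact equilateral triangle passes for moderate α, while a long sliver triangle (huge circumradius despite small area) is removed, mimicking how separating material opens gaps in the mesh.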
A New Method for Tracking Individual Particles During Bed Load Transport in a Gravel-Bed River
NASA Astrophysics Data System (ADS)
Tremblay, M.; Marquis, G. A.; Roy, A. G.; Chaire de Recherche Du Canada En Dynamique Fluviale
2010-12-01
Many particle tracers (passive or active) have been developed to study gravel movement in rivers. It remains difficult, however, to document resting and moving periods and to know how particles travel from one deposition site to another. Our new tracking method uses the Hobo Pendant G acceleration Data Logger to quantitatively describe the motion of individual particles from the initiation of movement, through displacement, to rest in a natural gravel river. The Hobo measures acceleration in three dimensions at a chosen temporal frequency. The Hobo was inserted into 11 artificial rocks, which were seeded in Ruisseau Béard, a small gravel-bed river in the Yamaska drainage basin (Québec) where the hydraulics, particle sizes, and bed characteristics are well known. The signals recorded during eight floods (Summer and Fall 2008-2009) allowed us to develop an algorithm that classifies periods of rest and motion. We can differentiate two types of motion: sliding and rolling. The particles can also vibrate while remaining in the same position. Examination of the movement and vibration periods with respect to the hydraulic conditions (discharge, shear stress, stream power) showed that vibration occurred mostly before the rising limb of the hydrograph and allowed us to establish movement thresholds and response times. In all cases, particle movements occurred during floods but not always in direct response to increased bed shear stress and stream power. This method offers great potential for tracking individual particles, establishing the spatiotemporal sequence of intermittent particle transport during a flood, and testing theories concerning the resting periods of particles on a gravel bed.
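A rest/motion classification of the kind described can be sketched as a moving-variance threshold on the three-axis acceleration magnitude; the window length and threshold below are hypothetical illustrative values, not the authors' calibrated algorithm (which also separates sliding, rolling, and vibration).

```python
import math
from statistics import pvariance

def classify_states(samples, window=5, threshold=0.05):
    """Label each (ax, ay, az) sample 'rest' or 'motion' from the local
    variance of the acceleration magnitude: a resting particle shows a
    nearly constant gravity vector, a moving one a fluctuating signal."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    states = []
    for i in range(len(mags)):
        lo, hi = max(0, i - window), min(len(mags), i + window + 1)
        states.append('motion' if pvariance(mags[lo:hi]) > threshold else 'rest')
    return states
```

On a synthetic record of 20 constant-gravity samples followed by 20 fluctuating samples, the classifier labels the two segments rest and motion, respectively.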
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method to find the perimeter of any enclosed object. The centroid is calculated as the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
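The intensity-weighted center of mass underlying the centroid-finding step can be sketched directly; the row-major image layout and the pixel list (assumed to come from something like the perimeter search) are illustrative assumptions, not the NASA implementation.

```python
def intensity_centroid(image, pixels):
    """Intensity-weighted center of mass over the pixels of one object.

    image  : 2D list of grayscale intensities, indexed image[row][col]
    pixels : list of (row, col) coordinates belonging to the object
    """
    total = sum(image[r][c] for r, c in pixels)
    row = sum(r * image[r][c] for r, c in pixels) / total
    col = sum(c * image[r][c] for r, c in pixels) / total
    return row, col
```

For example, an object covering pixels (1, 1) and (1, 2) with intensities 4 and 1 has its centroid pulled toward the brighter pixel, at column 1.2 rather than the geometric midpoint 1.5.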
Göhler, Daniel; Stintz, Michael; Hillemann, Lars; Vorbau, Manuel
2010-01-01
Nanoparticles are used in industrial and domestic applications to control customized product properties. However, there are several uncertainties concerning possible hazards to health, safety, and the environment. Hence, it is necessary to search for methods to analyze the particle release from typical application processes. Based on a survey of commercial sanding machines, the relevant sanding process parameters were employed for the design of a miniature sanding test setup in a particle-free environment for the quantification of nanoparticle release into air from surface coatings. The released particles were moved by a defined airflow to a fast mobility particle sizer and other aerosol measurement equipment to enable the determination of released particle numbers in addition to the particle size distribution. First results revealed a strong impact of the coating material on the swarf mass and the number of released particles. PMID:20696941
NASA Technical Reports Server (NTRS)
Cornell, Stephen R.; Leser, William P.; Hochhalter, Jacob D.; Newman, John A.; Hartl, Darren J.
2014-01-01
A method for detecting fatigue cracks has been explored at NASA Langley Research Center. Microscopic NiTi shape memory alloy (sensory) particles were embedded in a 7050 aluminum alloy matrix to detect the presence of fatigue cracks. Cracks exhibit an elevated stress field near their tip inducing a martensitic phase transformation in nearby sensory particles. Detectable levels of acoustic energy are emitted upon particle phase transformation such that the existence and location of fatigue cracks can be detected. To test this concept, a fatigue crack was grown in a mode-I single-edge notch fatigue crack growth specimen containing sensory particles. As the crack approached the sensory particles, measurements of particle strain, matrix-particle debonding, and phase transformation behavior of the sensory particles were performed. Full-field deformation measurements were performed using a novel multi-scale optical 3D digital image correlation (DIC) system. This information will be used in a finite element-based study to determine optimal sensory material behavior and density.
Pyrogenic effect of respirable road dust particles
NASA Astrophysics Data System (ADS)
Jayawardena, Umesh; Tollemark, Linda; Tagesson, Christer; Leanderson, Per
2009-02-01
Because pyrogenic (fever-inducing) compounds on ambient particles may play an important role in particle toxicity, simple methods to measure pyrogens on particles are needed. Here we have used a modified in vitro pyrogen test (IPT) to study the release of interleukin 1β (IL-1β) in whole human blood exposed to respirable road-dust particles (RRDP). Road dusts were collected from the roadside at six different streets in three Swedish cities, and particles with a diameter less than 10 μm (RRDP) were prepared by a water sedimentation procedure followed by lyophilisation. RRDP (200 μl of 1-10(6) ng/ml) were mixed with 50 μl whole blood and incubated at 37 °C overnight before IL-1β was analysed with chemiluminescence ELISA in 384-well plates. Endotoxin (lipopolysaccharide from Salmonella minnesota), zymosan B and curdlan (β-1,3-glucan) were used as positive controls. All RRDP samples had a pyrogenic effect, and the most active sample produced 1.6 times more IL-1β than the least active. This formation was of the same magnitude as in samples with 10 ng LPS/ml and was larger than that evoked by zymosan B and curdlan (on a mass basis). The method was sensitive enough to determine formation of IL-1β in mixtures with 10 ng RRDP/ml or 0.01 ng LPS/ml. The endotoxin inhibitor polymyxin B (10 μg/ml) strongly reduced the RRDP-induced formation of IL-1β at 1 μg RRDP/ml (around 80% inhibition), but had only marginal or no effects at higher RRDP concentrations (10 and 100 μg/ml). In summary, all RRDP tested had a clear pyrogenic effect in this in vitro model. Endotoxin on the particles, but also other factors, contributed to the pyrogenic effect. As opposed to the limulus amebocyte lysate (LAL) assay (which measures endotoxin alone), IPT measures a broad range of pyrogens that may be present on particulate matter. The IPT method thus affords a simple, sensitive and quantitative determination of the total pyrogenic potential of ambient particles.
Field Assessment of Enclosed Cab Filtration System Performance Using Particle Counting Measurements
Organiscak, John A.; Cecala, Andrew B.; Noll, James D.
2015-01-01
Enclosed cab filtration systems are typically used on mobile mining equipment to reduce miners’ exposure to airborne dust generated during mining operations. The National Institute for Occupational Safety and Health (NIOSH) Office of Mine Safety and Health Research (OMSHR) has recently worked with a mining equipment manufacturer to examine a new cab filtration system design for underground industrial minerals equipment. This cab filtration system uses a combination of three particulate filters to reduce equipment operators’ exposure to dust and diesel particulates present in underground industrial mineral mines. NIOSH initially examined this cab filtration system using a two-instrument particle counting method at the equipment company’s manufacturing shop facility to assess several alternative filters. This cab filtration system design was further studied on several pieces of equipment during a two- to seven-month period at two underground limestone mines. The two-instrument particle counting method was used outside the underground mine at the end of the production shifts to regularly test the cabs’ long-term protection factor performance with particulates present in the ambient air. This particle counting method showed that three of the four cabs achieved protection factors greater than 1,000 during the field studies. The fourth cab did not perform at this level because it had a damaged filter in the system. The particle counting measurements of submicron particles present in the ambient air were shown to be a timely and useful quantification method in assessing cab performance during these field studies. PMID:23915268
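The protection factor reported in this study is, by convention, the ratio of the particle concentration outside the cab to that inside it. The sketch below reproduces that arithmetic with made-up readings; the function name, the time-matching assumption, and the sample counts are illustrative, not the NIOSH OMSHR procedure itself.

```python
# Hypothetical sketch: cab protection factor from paired particle counts.
# PF = C_outside / C_inside, using time-matched submicron particle counts
# from two instruments. All numbers below are illustrative.

def protection_factor(outside_counts, inside_counts):
    """Average protection factor from paired concentration readings (#/cm^3)."""
    if len(outside_counts) != len(inside_counts):
        raise ValueError("readings must be time-matched pairs")
    ratios = [o / i for o, i in zip(outside_counts, inside_counts) if i > 0]
    return sum(ratios) / len(ratios)

outside = [52000.0, 48000.0, 50500.0]   # ambient submicron counts
inside = [41.0, 39.0, 44.0]             # in-cab counts
pf = protection_factor(outside, inside)
print(f"PF = {pf:.0f}, exceeds 1000 target: {pf > 1000}")
```

A damaged filter, as on the fourth cab, shows up immediately as elevated inside counts and hence a collapsed PF.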
Testosterone sorption and desorption: effects of soil particle size.
Qi, Yong; Zhang, Tian C; Ren, Yongzheng
2014-08-30
Soils contain a wide range of particles of different diameters with different mobility during rainfall events. The effects of soil particle size on the sorption and desorption behaviors of steroid hormones have not been investigated. In this study, wet sieve washing and repeated sedimentation methods were used to fractionate the soils into five size ranges. The sorption and desorption properties and related mechanisms of testosterone in batch reactors filled with fractionated soil particles were evaluated. Results of sorption and desorption kinetics indicate that small soil particles have higher sorption and lower desorption rates than large ones. Thermodynamic results show the sorption processes are spontaneous and exothermic. The sorption capacity ranks as clay>silt>sand, depending mainly on specific surface area and surface functional groups. The urea control test shows that hydrogen bonding contributes to testosterone sorption onto clay and silt but not onto sand. Desorption tests indicate sorption is 36-65% irreversible, from clay to sand. Clay has the highest desorption hysteresis among the five soil fractions, indicating that small particles such as clays have less potential for desorption. The results provide indirect evidence for colloid (clay)-facilitated transport of hormones (micro-pollutants) in soil environments. Copyright © 2014 Elsevier B.V. All rights reserved.
A combined Eulerian-Lagrangian two-phase analysis of the SSME HPOTP nozzle plug trajectories
NASA Technical Reports Server (NTRS)
Garcia, Robert; Mcconnaughey, P. K.; Dejong, F. J.; Sabnis, J. S.; Pribik, D.
1989-01-01
As a result of high cycle fatigue, hydrogen embrittlement, and extended engine use, it was observed in testing that the trailing edge on the first-stage nozzle plug in the High Pressure Oxygen Turbopump (HPOTP) could detach. The objective was to predict the trajectories followed by particles exiting the turbine. Experiments had shown that the heat exchanger coils, which lie downstream of the turbine, would be ruptured by particles traveling on the order of 360 ft/sec. An axisymmetric solution of the flow was obtained from the work of Lin et al., who used INS3D to obtain the solution. The particle trajectories were obtained using the method of de Jong et al., which employs Lagrangian tracking of the particle through the Eulerian flow field. The collision parameters were obtained from experiments conducted by Rocketdyne using problem-specific alloys, speeds, and projectile geometries. A complete 3-D analysis using the most likely collision parameters shows maximum particle velocities of 200 ft/sec in the heat exchanger region. Subsequent to this analysis, an engine-level test was conducted in which seven particles passed through the turbine, but no damage was observed on the heat exchanger coils.
NASA Astrophysics Data System (ADS)
Wilkins, C.; Bingley, L.; Angelopoulos, V.; Caron, R.; Cruce, P. R.; Chung, M.; Rowe, K.; Runov, A.; Liu, J.; Tsai, E.
2017-12-01
UCLA's Electron Losses and Fields Investigation (ELFIN) is a 3U+ CubeSat mission designed to study relativistic particle precipitation in Earth's polar regions from Low Earth Orbit. Upon its 2018 launch, ELFIN will aim to address an important open question in Space Physics: Are Electromagnetic Ion-Cyclotron (EMIC) waves the dominant source of pitch-angle scattering of high-energy radiation belt charged particles into Earth's atmosphere during storms and substorms? Previous studies have indicated these scattering events occur frequently during storms and substorms, and ELFIN will be the first mission to study this process in situ. Paramount to ELFIN's success is its instrument suite, consisting of an Energetic Particle Detector (EPD) and a Fluxgate Magnetometer (FGM). The EPD consists of two collimated solid-state detector stacks that will measure the incident flux of energetic electrons from 50 keV to 4 MeV and ions from 50 keV to 300 keV. The FGM is a 3-axis magnetic field sensor that will capture the local magnetic field and its variations at frequencies up to 5 Hz. The ELFIN spacecraft spins perpendicular to the geomagnetic field to provide 16 pitch-angle particle data sectors per revolution. Together these factors provide the capability to address the nature of radiation belt particle precipitation by pitch-angle scattering during storms and substorms. ELFIN's instrument development has progressed into the late Engineering Model (EM) phase and will soon enter Flight Model (FM) development. The instrument suite is currently being tested and calibrated at UCLA using a variety of methods, including the use of radioactive sources and applied magnetic fields to simulate orbit conditions during spin sectoring. We present the methods and test results from instrument calibration and performance validation.
Influence of nano alumina coating on the flexural bond strength between zirconia and resin cement
Mumcu, Emre; Şen, Murat
2018-01-01
PURPOSE The purpose of this in vitro study is to examine the effects of a nano-structured alumina coating on the adhesion between resin cements and zirconia ceramics using a four-point bending test. MATERIALS AND METHODS 100 pairs of zirconium bar specimens were prepared with dimensions of 25 mm × 2 mm × 5 mm and cementation surfaces of 5 mm × 2 mm. The samples were divided into 5 groups of 20 pairs each. The groups are as follows: Group I (C) – Control with no surface modification, Group II (APA) – airborne-particle-abrasion with 110 µm high-purity aluminum oxide (Al2O3) particles, Group III (ROC) – airborne-particle-abrasion with 110 µm silica modified aluminum oxide (Al2O3 + SiO2) particles, Group IV (TCS) – tribochemical silica coated with Al2O3 particles, and Group V (AlC) – nano alumina coating. The surface modifications were assessed on two samples selected from each group by atomic force microscopy and scanning electron microscopy. The samples were cemented with two different self-adhesive resin cements. The bending bond strength was evaluated by mechanical testing. RESULTS According to the ANOVA results, surface treatments, different cement types, and their interactions were statistically significant (P<.05). The highest flexural bond strengths were obtained in nanostructured alumina coated zirconia surfaces (50.4 MPa) and the lowest values were obtained in the control group (12.00 MPa), both of which were cemented using a self-adhesive resin cement. CONCLUSION The surface modifications tested in the current study affected the surface roughness and flexural bond strength of zirconia. The nano alumina coating method significantly increased the flexural bond strength of zirconia ceramics. PMID:29503713
Particle characterization of poorly water-soluble drugs using a spray freeze drying technique.
Kondo, Masahiro; Niwa, Toshiyuki; Okamoto, Hirokazu; Danjo, Kazumi
2009-07-01
A spray freeze drying (SFD) method was developed to prepare composite particles of a poorly water-soluble drug. An aqueous solution of the drug and a functional polymer was sprayed directly into liquid nitrogen, and the frozen droplets were then lyophilized in a freeze-dryer to obtain solid particles. Tolbutamide (TBM) and hydroxypropylmethylcellulose (HPMC) were used as the model drug and water-soluble polymeric carrier, respectively. Morphological observation revealed that spherical particles with a porous structure could be obtained by optimizing the loading of drug and polymer in the spray solution. In particular, the SFD method yielded particles with a significantly larger specific surface area than those prepared by the standard spray drying technique. The physicochemical properties of the resultant particles depended on the concentration of the spray solution. When a solution with a high content of drug and polymer was used, the particle size of the resulting composite particles increased and the particles became spherical. The specific surface area of the particles also increased with higher solution concentration. Evaluation of the spray solution indicated that these results depended on its viscosity. In addition, when composite particles of TBM were prepared by the SFD method with HPMC as a carrier, the crystallinity of TBM decreased as the proportion of HPMC increased; at a TBM:HPMC ratio of 1:5, the crystallinity of the particles disappeared completely. Dissolution tests showed that the release profiles of poorly water-soluble TBM from SFD composite particles were drastically improved compared with bulk TBM. The 70% release time T(70) of composite particles prepared by the SFD method was much shorter than that of bulk TBM in a solution of pH 1.2, and slightly shorter in a solution of pH 6.8. The release rates were also faster than those of standard spray-dried (SD) composite particles in solutions of pH 1.2 and 6.8. When composite particles were prepared from mixtures with various composition ratios, T(70) decreased as the proportion of HPMC increased, and the release rate was faster than that of bulk TBM in solutions of both pH 6.8 and pH 1.2.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
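The numerical stabilization the abstract refers to is conventionally done by never forming the long product of single-particle propagator matrices directly, but carrying it as a QR-style decomposition so that widely separated scales live in a diagonal factor. The sketch below is an illustrative instance of that standard technique, not the authors' improved method; matrix sizes and values are arbitrary.

```python
import numpy as np

# Illustrative sketch: stabilized evaluation of a long matrix product
# B_N ... B_1 via repeated QR decomposition, keeping the scales in a
# diagonal factor d so small and large singular values never mix.

def stabilized_product(matrices):
    """Return (Q, d, T) with B_N ... B_1 = Q @ diag(d) @ T, computed stably."""
    n = matrices[0].shape[0]
    Q, d, T = np.eye(n), np.ones(n), np.eye(n)
    for B in matrices:
        M = (B @ Q) * d                   # (B Q) diag(d): scale columns by d
        Q, R = np.linalg.qr(M)
        d_new = np.abs(np.diag(R))
        T = (R / d_new[:, None]) @ T      # diag(d_new)^-1 R, folded into T
        d = d_new
    return Q, d, T

rng = np.random.default_rng(0)
Bs = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(50)]
Q, d, T = stabilized_product(Bs)
direct = np.linalg.multi_dot(Bs[::-1])    # naive product for comparison
print(np.allclose(Q @ np.diag(d) @ T, direct))
```

For a well-conditioned toy product the naive and stabilized results agree; the decomposition pays off at low temperature, where the naive product overflows or loses the small scales entirely.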
Incompressible SPH method for simulating Newtonian and non-Newtonian flows with a free surface
NASA Astrophysics Data System (ADS)
Shao, Songdong; Lo, Edmond Y. M.
An incompressible smoothed particle hydrodynamics (SPH) method is presented to simulate Newtonian and non-Newtonian flows with free surfaces. The basic equations solved are the incompressible mass conservation and Navier-Stokes equations. The method uses prediction-correction fractional steps with the temporal velocity field integrated forward in time without enforcing incompressibility in the prediction step. The resulting deviation of particle density is then implicitly projected onto a divergence-free space to satisfy incompressibility through a pressure Poisson equation derived from an approximate pressure projection. Various SPH formulations are employed in the discretization of the relevant gradient, divergence and Laplacian terms. Free surfaces are identified by the particles whose density is below a set point. Wall boundaries are represented by particles whose positions are fixed. The SPH formulation is also extended to non-Newtonian flows and demonstrated using the Cross rheological model. The incompressible SPH method is tested by typical 2-D dam-break problems in which both water and fluid mud are considered. The computations are in good agreement with available experimental data. The different flow features between Newtonian and non-Newtonian flows after the dam-break are discussed.
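The prediction-correction fractional-step scheme can be shown compactly. The sketch below is not SPH: it applies the same projection idea on a 1D periodic grid with a spectral Poisson solve (all values illustrative), demonstrating that subtracting dt·grad(p) removes the divergence left by the prediction step.

```python
import numpy as np

# Projection-method sketch on a 1D periodic grid (not SPH particles):
# predict velocity without enforcing incompressibility, then project via
# a pressure Poisson solve. Grid size, dt, and viscosity are illustrative.

n, Lbox, dt, nu = 64, 2 * np.pi, 0.01, 0.1
dx = Lbox / n
x = np.arange(n) * dx
k = np.fft.fftfreq(n, d=dx) * 2 * np.pi

def ddx(f):
    """Spectral first derivative on the periodic grid."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

u = np.sin(x) + 0.3      # initial velocity; not divergence-free in 1D

# Prediction step: advance with viscosity only, no pressure.
u_star = u + dt * nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

# Correction step: solve  lap(p) = div(u*) / dt,  then project
# u_new = u* - dt * grad(p), which annihilates the divergence.
rhs_hat = np.fft.fft(ddx(u_star) / dt)
p_hat = np.where(k != 0, -rhs_hat / np.where(k != 0, k**2, 1.0), 0.0)
p = np.real(np.fft.ifft(p_hat))
u_new = u_star - dt * ddx(p)

print(np.max(np.abs(ddx(u_new))))   # residual divergence after projection
```

In the paper's SPH setting the same two steps act on particle velocities, with the Poisson equation assembled from SPH gradient and Laplacian operators instead of Fourier modes.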
A two step method to treat variable winds in fallout smearing codes. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, A.T.
1982-03-01
A method was developed to treat non-constant winds in fallout smearing codes. The method consists of two steps: (1) location of the curved hotline and (2) determination of the off-hotline activity. To locate the curved hotline, the method begins with an initial cloud of 20 discretely sized pancake clouds, located at altitudes determined by weapon yield. Next, the particles are tracked through a 300-layer atmosphere, translating with different winds in each layer. The connection of the 20 particles' impact points is the fallout hotline. The hotline location was found to be independent of the assumed particle size distribution in the stabilized cloud. The off-hotline activity distribution is represented as a two-dimensional Gaussian function, centered on the curved hotline. Hotline locator model results were compared to numerical calculations of a hypothetical 100 kt burst and to the actual hotline produced by the Castle Bravo 15 Mt nuclear test.
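The layer-by-layer tracking step can be sketched in a few lines: each particle drifts with a layer's wind for the time it spends falling through that layer, and connecting the impact points of successively smaller (slower-falling) particles traces the curved hotline. The layer winds, altitudes, and fall speeds below are invented for illustration, not Hopkins' data.

```python
# Illustrative sketch of the hotline-locator idea: track particles of
# different fall speeds through stacked wind layers and connect their
# ground impact points. All layer data and fall speeds are made up.

def impact_point(start_altitude_m, fall_speed_ms, layers):
    """Horizontal displacement (x, y) at ground for one particle.

    layers: list of (top_m, bottom_m, wind_x_ms, wind_y_ms), top to bottom.
    """
    x = y = 0.0
    for top, bottom, wx, wy in layers:
        if bottom >= start_altitude_m:
            continue                       # layer entirely above release
        thickness = min(top, start_altitude_m) - bottom
        t = thickness / fall_speed_ms      # residence time in this layer
        x += wx * t
        y += wy * t
    return x, y

layers = [(12000, 8000, 30.0, 0.0),        # upper-level westerly
          (8000, 4000, 15.0, 5.0),
          (4000, 0, 5.0, 10.0)]
# Faster-falling (larger) particles land closer to ground zero; the
# sequence of impact points sketches the curved hotline.
hotline = [impact_point(10000, v, layers) for v in (20.0, 10.0, 5.0, 2.0)]
for pt in hotline:
    print(pt)
```

Because each particle samples the layers in proportion to its residence time, the hotline curves whenever the wind veers with altitude, exactly the effect a single-wind smearing code cannot capture.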
Li, Juan; Kang, Ji; Wang, Li; Li, Zhen; Wang, Ren; Chen, Zheng Xing; Hou, Gary G
2012-07-04
A new method, a magnetic resonance imaging (MRI) technique characterized by T(2) relaxation time, was developed to study the water migration mechanism between arabinoxylan (AX) gels and gluten matrix in a whole wheat dough (WWD) system prepared from whole wheat flour (WWF) of different particle sizes. The water sequestration of AX gels in wheat bran was verified by the bran fortification test. The evaluations of baking quality of whole wheat bread (WWB) made from WWF with different particle sizes were performed by using SEM, FT-IR, and RP-HPLC techniques. Results showed that the WWB made from WWF of average particle size of 96.99 μm had better baking quality than those of the breads made from WWF of two other particle sizes, 50.21 and 235.40 μm. T(2) relaxation time testing indicated that the decreased particle size of WWF increased the water absorption of AX gels, which led to water migration from the gluten network to the AX gels and resulted in inferior baking quality of WWB.
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free-surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
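The square-pattern splitting can be sketched directly. This is a hedged toy version of the idea the abstract describes: four daughters on a square centered on the parent, mass shared equally, smoothing length reduced. The separation factor `eps` and smoothing-length factor `alpha` are free parameters whose optimal values the paper derives; the values here are placeholders only.

```python
# Toy sketch of square-pattern SPH particle refinement (2D). The factors
# eps and alpha are illustrative placeholders, not the paper's optima.

def refine(particle, eps=0.5, alpha=0.7):
    """particle: dict with x, y, mass, h. Returns four daughter particles."""
    x, y, m, h = particle["x"], particle["y"], particle["mass"], particle["h"]
    d = eps * h                            # half-side of the square pattern
    offsets = [(-d, -d), (-d, d), (d, -d), (d, d)]
    return [{"x": x + dx, "y": y + dy, "mass": m / 4, "h": alpha * h}
            for dx, dy in offsets]

parent = {"x": 0.0, "y": 0.0, "mass": 1.0, "h": 0.1}
daughters = refine(parent)
total_mass = sum(p["mass"] for p in daughters)
print(total_mass)                          # mass is conserved by construction
```

Mass and center of mass are conserved exactly; the paper's contribution is choosing `eps` and `alpha` so the kernel-gradient error introduced by the replacement stays small.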
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Schimmel, Morry L.
1989-01-01
To overcome serious weaknesses in determining the performance of initiating devices, a novel 'ignitability test method', representing actual design interfaces and ignition materials, has been developed. Ignition device output consists of heat, light, gas, and burning particles. Past research methods have evaluated these parameters individually. This paper describes the development and demonstration of an ignitability test method combining all these parameters, and the quantitative assessment of the ignition performance of two widely used percussion primers, the M42C1-PA101 and the M42C2-793. The ignition materials used for this evaluation were several powder, granule and pellet sizes of black powder and boron-potassium nitrate. This test method should be useful for performance evaluation of all initiator types, quality assurance, evaluation of ignition interfaces, and service-life studies of initiators and ignition materials.
Test Plan - Solids Accumulation Scouting Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, M. R.; Steeper, T. J.; Steimke, J. L.
This plan documents the highlights of the Solids Accumulation Scouting Studies test, a project from Washington River Protection Solutions (WRPS) that began on February 1, 2012. During the last 12 weeks, considerable progress has been made to design and plan methods that will be used to estimate the concentration and distribution of heavy fissile solids in accumulated solids in the Hanford double-shell tank (DST) 241-AW-105 (AW-105), which is the primary goal of this task. This DST will be one of several waste feed delivery staging tanks designated to feed the Pretreatment Facility (PTF) of the Waste Treatment and Immobilization Plant (WTP). Note that over the length of the waste feed delivery mission, AW-105 is currently identified as having the most fill/empty cycles of any DST feed tank, which is the reason for modeling this particular tank. At SRNL, an existing test facility, the Mixing Demonstration Tank, which will be modified for the present work, will use stainless steel particles in a simulant that represents Hanford waste to perform mock staging tank transfers that will allow solids to accumulate in the tank heel. The concentration and location of the mock fissile particles will be measured in these scoping studies to produce information that will be used to better plan larger-scale tests. Included in these studies is a secondary goal of developing measurement methods to accomplish the primary goal. These methods will be evaluated for use in the larger-scale experiments. Included in this plan are the several pretest activities that will validate the measurement techniques that are currently in various phases of construction. Aspects of each technique, e.g., particle separations, volume determinations, topographical mapping, and core sampling, have been tested in bench-top trials, as discussed herein, but the actual equipment to be employed during the full test will need evaluation after fabrication and integration into the test facility.
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory at time step sizes much larger than allowed by typical CFL restrictions.
Acoustic agglomeration of fine particles based on a high intensity acoustical resonator
NASA Astrophysics Data System (ADS)
Zhao, Yun; Zeng, Xinwu; Tian, Zhangfu
2015-10-01
Acoustic agglomeration (AA) is considered a promising method for reducing the air pollution caused by fine aerosol particles. Removal efficiency and energy consumption are the primary parameters and generally conflict with each other in industrial applications. It has been shown that removal efficiency increases with sound intensity and that an optimal frequency exists for a given polydisperse aerosol. Accordingly, a high-efficiency, low-energy-cost removal system was constructed using acoustical resonance. A high-intensity standing wave is generated by a tube system with an abrupt section change, driven by four loudspeakers. A numerical model of the tube system was built based on the finite element method, and the resonance condition and SPL increase were confirmed. Extensive tests were carried out to investigate the acoustic field in the agglomeration chamber. Removal efficiency of fine particles was tested by comparing filter paper mass and particle size distribution at different operating conditions, including sound pressure level (SPL) and frequency. The experimental study demonstrated that agglomeration increases with sound pressure level. Sound pressure level in the agglomeration chamber is between 145 dB and 165 dB from 500 Hz to 2 kHz. The resonance frequency can be predicted with the quarter-wave tube theory. A sound pressure level gain of more than 10 dB is achieved at the resonance frequency. With the help of high-intensity sound waves, fine particles are greatly reduced, and the AA effect is enhanced at high SPL. The optimal frequency is 1.1 kHz for aerosol generated from coal ash. In the resonance tube, the higher resonance frequencies are not integer multiples of the first one. As a result, strong nonlinearity is avoided by this dissonant characteristic, and no shock wave is found in the test results. The mechanism and testing system can be applied effectively in industrial processes in the future.
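The quarter-wave estimate invoked above is easy to reproduce. The sketch below uses the textbook closed-open tube relation f_n = (2n - 1)·c / (4L); the tube length is chosen here only to land the fundamental near the reported 1.1 kHz and is not the authors' rig dimension.

```python
# Quarter-wave resonance estimate for a closed-open tube:
# f_n = (2n - 1) * c / (4 * L). Tube length below is illustrative.

def quarter_wave_frequencies(length_m, n_modes=3, c=343.0):
    """First n_modes resonance frequencies (Hz) of a closed-open tube."""
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_modes + 1)]

L_tube = 0.078        # length giving a fundamental near 1.1 kHz in air
freqs = quarter_wave_frequencies(L_tube)
print([round(f) for f in freqs])
```

Note the simple formula gives odd-harmonic overtones (3x, 5x the fundamental); the non-integer mode spacing the abstract reports comes from the abrupt section change, which this uniform-tube relation does not capture.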
Mayhew, Terry M; Lucocq, John M
2011-03-01
Various methods for quantifying cellular immunogold labelling on transmission electron microscope thin sections are currently available. All rely on sound random sampling principles and are applicable to single immunolabelling across compartments within a given cell type or between different experimental groups of cells. Although methods are also available to test for colocalization in double/triple immunogold labelling studies, so far, these have relied on making multiple measurements of gold particle densities in defined areas or of inter-particle nearest neighbour distances. Here, we present alternative two-step approaches to codistribution and colocalization assessment that merely require raw counts of gold particles in distinct cellular compartments. For assessing codistribution over aggregate compartments, initial statistical evaluation involves combining contingency table and chi-squared analyses to provide predicted gold particle distributions. The observed and predicted distributions allow testing of the appropriate null hypothesis, namely, that there is no difference in the distribution patterns of proteins labelled by different sizes of gold particle. In short, the null hypothesis is that of colocalization. The approach for assessing colabelling recognises that, on thin sections, a compartment is made up of a set of sectional images (profiles) of cognate structures. The approach involves identifying two groups of compartmental profiles that are unlabelled and labelled for one gold marker size. The proportions in each group that are also labelled for the second gold marker size are then compared. Statistical analysis now uses a 2 × 2 contingency table combined with the Fisher exact probability test. Having identified double labelling, the profiles can be analysed further in order to identify characteristic features that might account for the double labelling. 
In each case, the approach is illustrated using synthetic and/or experimental datasets and can be refined to correct observed labelling patterns to specific labelling patterns. These simple and efficient approaches should be of more immediate utility to those interested in codistribution and colocalization in multiple immunogold labelling investigations.
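The two statistical steps described above can be sketched from raw counts alone. The sketch below is a generic illustration with invented gold-particle counts, not the authors' datasets: step 1 forms the Pearson chi-squared statistic for an r x c table of counts per compartment, and step 2 runs a two-sided Fisher exact test on a 2 x 2 table of profile counts, both implemented from first principles.

```python
from math import comb

# Step 1: Pearson chi-squared for an r x c contingency table of counts.
def chi2_statistic(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum((table[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(len(row)) for j in range(len(col)))

# Step 2: two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]],
# summing hypergeometric probabilities no larger than the observed one.
def fisher_exact_2x2(a, b, c, d):
    n = a + b + c + d
    def p(x):
        return comb(a + b, x) * comb(c + d, (a + c) - x) / comb(n, a + c)
    p_obs = p(a)
    lo, hi = max(0, (a + c) - (c + d)), min(a + b, a + c)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# Invented counts: rows are two gold marker sizes, columns are compartments.
counts = [[40, 25, 10], [38, 27, 12]]
print(round(chi2_statistic(counts), 3))   # small value: similar distributions

# Invented profile counts: of profiles labelled by marker A, how many are
# also labelled by marker B, versus profiles unlabelled by A.
print(fisher_exact_2x2(20, 5, 8, 17))
```

A small chi-squared statistic is consistent with the colocalization null hypothesis in step 1, while a small Fisher p-value in step 2 flags colabelling of individual compartment profiles.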
Filter Media Tests Under Simulated Martian Atmospheric Conditions
NASA Technical Reports Server (NTRS)
Agui, Juan H.
2016-01-01
Human exploration of Mars will require the optimal utilization of planetary resources. One of its abundant resources is the Martian atmosphere that can be harvested through filtration and chemical processes that purify and separate it into its gaseous and elemental constituents. Effective filtration needs to be part of the suite of resource utilization technologies. A unique testing platform is being used which provides the relevant operational and instrumental capabilities to test articles under the proper simulated Martian conditions. A series of tests were conducted to assess the performance of filter media. Light sheet imaging of the particle flow provided a means of detecting and quantifying particle concentrations to determine capturing efficiencies. The media's efficiency was also evaluated by gravimetric means through a by-layer filter media configuration. These tests will help to establish techniques and methods for measuring capturing efficiency and arrestance of conventional fibrous filter media. This paper will describe initial test results on different filter media.
Donovan, Ariel R; Adams, Craig D; Ma, Yinfa; Stephan, Chady; Eichholz, Todd; Shi, Honglan
2016-02-01
One of the most direct means for human exposure to nanoparticles (NPs) released into the environment is drinking water. Therefore, it is critical to understand the occurrence and fate of NPs in drinking water systems. The objectives of this study were to develop rapid and reliable analytical methods and apply them to investigate the fate and transport of NPs during drinking water treatment. Rapid single particle ICP-MS (SP-ICP-MS) methods were developed to characterize and quantify titanium-containing, titanium dioxide, silver, and gold NP concentration, size, size distribution, and dissolved metal element concentration in surface water and treated drinking water. The effectiveness of conventional drinking water treatments (including lime softening, alum coagulation, filtration, and disinfection) in removing NPs from surface water was evaluated using six-gang stirrer jar test simulations. The selected NPs were nearly completely (97 ± 3%) removed after lime softening and alum coagulation/activated carbon adsorption treatments. Additionally, source and drinking waters from three large drinking water treatment facilities utilizing treatments similar to the simulation test were collected and analyzed by the SP-ICP-MS methods. Ti-containing particles and dissolved Ti were present in the river water samples, but Ag and Au were not. Treatments used at each drinking water treatment facility effectively removed over 93% of the Ti-containing particles and dissolved Ti from the source water. Copyright © 2015 Elsevier Ltd. All rights reserved.
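The size determination underlying SP-ICP-MS follows a generic textbook relation: each detector pulse is converted to a particle mass via the instrument sensitivity, then to an equivalent spherical diameter d = (6m / (πρ))^(1/3). The sketch below applies that relation with an invented sensitivity; it is not the authors' calibration, which also involves transport efficiency and ionic standards.

```python
import math

# Generic SP-ICP-MS sizing relation (illustrative calibration values):
# pulse counts -> analyte mass -> equivalent spherical diameter.

def pulse_to_diameter_nm(pulse_counts, counts_per_fg, density_g_cm3):
    """Equivalent spherical diameter (nm) from one detector pulse."""
    mass_fg = pulse_counts / counts_per_fg          # femtograms of analyte
    mass_g = mass_fg * 1e-15
    volume_cm3 = mass_g / density_g_cm3
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1e7                               # cm -> nm

# Illustrative: a gold NP pulse of 500 counts with a made-up sensitivity
# of 50 counts per femtogram; gold density 19.3 g/cm^3.
dia = pulse_to_diameter_nm(500, counts_per_fg=50.0, density_g_cm3=19.3)
print(round(dia, 1))
```

Because diameter scales as the cube root of mass, a modest uncertainty in the sensitivity calibration translates into a much smaller relative uncertainty in the reported size.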
Lin, Tzu-Hsien; Chen, Chih-Chieh; Kuo, Chung-Wen
2017-01-01
This study investigates the effects of five decontamination methods on the filter quality (qf) of three commercially available electret masks—N95, Gauze and Spunlace nonwoven masks. Newly developed evaluation methods, the overall filter quality (qf,o) and the qf ratio were applied to evaluate the effectiveness of decontamination methods for respirators. A scanning mobility particle sizer is utilized to measure the concentration of polydispersed particles with diameter 14.6–594 nm. The penetration of particles and pressure drop (Δp) through the mask are used to determine qf and qf,o. Experimental results reveal that the most penetrating particle size (MPS) for the pre-decontaminated N95, Gauze and Spunlace masks were 118 nm, 461 nm and 279 nm, respectively, and the respective penetration rates were 2.6%, 23.2% and 70.0%. The Δp through the pretreated N95 masks was 9.2 mm H2O at the breathing flow rate of heavy-duty workers, exceeding the Δp values obtained through Gauze and Spunlace masks. Decontamination increased the sizes of the most penetrating particles, changing the qf values of all of the masks: qf fell as particle size increased because the penetration increased. Bleach increased the Δp of N95, but destroyed the Gauze mask. However, the use of an autoclave reduces the Δp values of both the N95 and the Gauze mask. Neither the rice cooker nor ethanol altered the Δp of the Gauze mask. Chemical decontamination methods reduced the qf,o values for the three electret masks. The value of qf,o for PM0.1 exceeded that for PM0.1–0.6, because particles smaller than 100 nm had lower penetration, resulting in a better qf for a given pressure drop. The values of qf,o, particularly for PM0.1, reveal that for the tested treatments and masks, physical decontamination methods are less destructive to the filter than chemical methods. Nevertheless, when purchasing new or reusing FFRs, penetration should be regarded as the priority. PMID:29023492
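The filter quality factor used in this abstract is conventionally defined as q_f = -ln(P) / Δp, where P is the fractional penetration and Δp the pressure drop; higher q_f means better filtration per unit breathing resistance. The sketch below applies that textbook formula to the N95 numbers reported above; treating the abstract's qf as exactly this formula is our assumption.

```python
import math

# Conventional filter quality factor: q_f = -ln(penetration) / pressure drop.

def filter_quality(penetration_fraction, pressure_drop_mmH2O):
    """Quality factor in 1/(mm H2O); higher is better."""
    return -math.log(penetration_fraction) / pressure_drop_mmH2O

# Reported for the pretreated N95: 2.6% MPS penetration at 9.2 mm H2O.
qf_n95 = filter_quality(0.026, 9.2)
print(round(qf_n95, 3))
```

Comparing masks or decontamination treatments this way requires each condition's measured Δp as well as its penetration, which is why the study reports both: a treatment can lower Δp yet still reduce q_f if penetration rises faster.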
NASA Astrophysics Data System (ADS)
Hu, Qi; Duan, Jin; Wang, LiNing; Zhai, Di
2016-09-01
High-fidelity simulation of military weapon tests matters greatly given the high cost and demanding operational requirements of live-fire trials. Among methods for simulating explosive smoke, the particle-system approach has attracted particular attention. To improve on traditional simulations of infrared decoy motion over the real combustion cycle, this paper adopts a virtual simulation platform built on OpenGL and Vega Prime. Based on the radiation characteristics and the aerodynamic characteristics of the infrared decoy, the dynamic fuzzy characteristics of the decoy during the real combustion cycle are simulated with a particle system based on the double depth peeling algorithm, and key implementation issues are resolved, including interfacing, coordinate conversion, and the saving and restoring of Vega Prime's state. The simulation experiments largely achieve the intended improvement, effectively increasing simulation fidelity and providing theoretical support for improving the performance of infrared decoys.
An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.
Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun
2017-09-01
The selection of swarm leaders (i.e., the personal best and global best), is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
The Synthesis of Photocatalyst Material ZnO using the Simple Sonication Method
NASA Astrophysics Data System (ADS)
Faradis, R.; Azizah, E. N.; Marella, S. D.; Aini, N.; Prasetyo, A.
2018-03-01
ZnO is well known as a photocatalyst material and is therefore potentially applicable for many purposes. The particle size of a photocatalyst material influences its catalytic activity. In this research, ZnO was synthesized using a simple sonication method to obtain smaller particles, with sonication times of 30, 60, 160, and 360 minutes. X-ray diffraction data showed that the synthesized materials have the wurtzite structure with space group P63mc. The ZnO synthesized with a 30-minute sonication time had the smallest particle size and the lowest band gap energy (2.79 eV). Photocatalytic tests on methylene blue likewise showed that the optimum activity was obtained for the ZnO synthesized with 30 minutes of sonication (methylene blue degradation percentage of 77.93%).
Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.
Kappel, S; Boulyga, S F; Dorta, L; Günther, D; Hattendorf, B; Koffler, D; Laaha, G; Leisch, F; Prohaska, T
2013-03-01
Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e. 'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated. The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U isotope ratios of particles deposited on the NUSIMEP-7 test samples.
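The three evaluation strategies the abstract compares can be illustrated on a synthetic transient; the decay shape, noise levels, and the "true" ratio of 0.01 below are illustrative assumptions, not the paper's data or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transient ablation signal for one particle (arbitrary units);
# the decay shape, noise levels, and true 235U/238U ratio are assumptions.
true_ratio = 0.01
i238 = 1e5 * np.exp(-np.linspace(0.0, 3.0, 200)) + rng.normal(0.0, 50.0, 200)
i235 = true_ratio * i238 + rng.normal(0.0, 5.0, 200)

# 'point-by-point': mean of the per-sweep intensity ratios
r_pbp = np.mean(i235 / i238)
# 'integration': ratio of the integrated (summed) intensities
r_int = i235.sum() / i238.sum()
# 'linear regression slope': slope of i235 vs. i238, which implicitly
# weights high-intensity sweeps more strongly than the point-by-point mean
r_slope = np.polyfit(i238, i235, 1)[0]
```

All three estimators recover the assumed ratio on this clean example; they differ in how they weight the low-signal portions of the transient, which is the trade-off the study examines.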
High Pressure Quick Disconnect Particle Impact Tests
NASA Technical Reports Server (NTRS)
Rosales, Keisa R.; Stoltzfus, Joel M.
2009-01-01
NASA Johnson Space Center White Sands Test Facility (WSTF) performed particle impact testing to determine whether there is a particle impact ignition hazard in the quick disconnects (QDs) in the Environmental Control and Life Support System (ECLSS) on the International Space Station (ISS). Testing included standard supersonic and subsonic particle impact tests on 15-5 PH stainless steel, as well as tests performed on a QD simulator. This paper summarizes the particle impact tests completed at WSTF. Although there was an ignition in Test Series 4, it was determined the ignition was caused by the presence of a machining imperfection. The sum of all the test results indicates that there is no particle impact ignition hazard in the ISS ECLSS QDs. KEYWORDS: quick disconnect, high pressure, particle impact testing, stainless steel
NASA Astrophysics Data System (ADS)
Qin, Pin-pin; Chen, Chui-ce; Pei, Shi-kang; Li, Xin
2017-06-01
The stopping distance of a runaway vehicle is determined by its entry speed, the design of the aggregate-filled arrester bed, and the longitudinal grade of the escape ramp. Although numerous previous studies have examined the influence of speed and grade on stopping distance, the aggregate properties have rarely been taken into account. This paper first analyzes the interactions between the tire and the aggregate, abstracting the tire as a single large particle unit and the aggregate as a combination unit consisting of many particles, and proposes the assumption that this interaction is a kind of particle flow. Particle properties are then used to describe the tire and aggregate units, and several simplified modeling steps are put forward using the two-dimensional particle flow code (PFC2D), yielding a PFC2D micro-simulation model of the tire-aggregate interaction. The particle-property parameters are calibrated with three groups of numerical tests, and the calibrated model is verified against data from eight full-scale arrester bed tests to demonstrate its feasibility and accuracy. The model offers escape ramp designers a feasible simulation method that not only predicts the stopping distance but also accounts for the aggregate properties.
New algorithm and system for measuring size distribution of blood cells
NASA Astrophysics Data System (ADS)
Yao, Cuiping; Li, Zheng; Zhang, Zhenxi
2004-06-01
In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near-forward scattering; this approach has been adopted for the measurement of blood cells. This paper presents a new method for counting and classifying blood cells based on laser light scattering from stationary suspensions. A genetic algorithm combined with a non-negative least squares algorithm is employed to invert the size distribution of the blood cells. Numerical tests show that these techniques can be successfully applied to measuring blood cell size distributions with high stability.
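The non-negative least squares step of such an inversion can be sketched as follows; the forward kernel, size bins, and the projected-gradient solver are illustrative stand-ins (the paper couples NNLS with a genetic algorithm and a physical scattering kernel).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: rows = scattering angles, columns = size bins;
# K[i, j] is the response of size bin j at angle i (illustrative values only).
n_angles, n_bins = 30, 8
K = np.abs(rng.normal(1.0, 0.5, (n_angles, n_bins)))

# Assumed "true" cell-size distribution and a noisy angular measurement.
x_true = np.array([0.0, 1.0, 3.0, 0.5, 0.0, 2.0, 0.5, 0.0])
b = K @ x_true + rng.normal(0.0, 0.01, n_angles)

# Non-negative least squares via projected gradient descent:
# minimize ||K x - b||^2 subject to x >= 0.
x = np.zeros(n_bins)
step = 1.0 / np.linalg.norm(K.T @ K, 2)  # 1 / largest eigenvalue: stable step
for _ in range(5000):
    x = np.maximum(0.0, x - step * (K.T @ (K @ x - b)))
```

The non-negativity projection is what keeps the recovered distribution physical; an unconstrained least-squares solve can return negative bin counts when the kernel is ill-conditioned.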
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the flocking behavior of birds and fish. In this paper we introduce and apply this method to a gravity inverse problem: determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems, and the technique worked efficiently when tested on a number of models.
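A minimal global-best PSO can be sketched as below, here applied to a toy quadratic objective standing in for the gravity-anomaly misfit; the inertia weight and acceleration coefficients are typical textbook values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(x):
    """Toy objective (sphere function) standing in for the misfit between
    observed and modeled gravity anomalies; x has shape (n_particles, n_dims)."""
    return np.sum(x**2, axis=1)

n_particles, n_dims, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients

pos = rng.uniform(-5.0, 5.0, (n_particles, n_dims))
vel = np.zeros((n_particles, n_dims))
pbest, pbest_val = pos.copy(), misfit(pos)   # personal bests
gbest = pbest[np.argmin(pbest_val)].copy()   # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = misfit(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

For the fault problem, `misfit` would instead compute the difference between the observed gravity anomaly and the anomaly forward-modeled from a candidate fault geometry.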
Code C# for chaos analysis of relativistic many-body systems with reactions
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.
2012-04-01
In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object-oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), can be supplied as a parameter using a specific XML input file. Inspired by the Poincaré section, we also propose the “Clusterization Map” as a new intuitive analysis method for many-body systems. For exemplification, we implemented a numerical toy model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions.
Catalogue identifier: AEGH_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 184 628
No. of bytes in distributed program, including test data, etc.: 7 905 425
Distribution format: tar.gz
Programming language: Visual C#.NET 2005
Computer: PC
Operating system: .Net Framework 2.0 running on MS Windows
Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread, with one processor used per many-body system.
RAM: 128 Megabytes
Classification: 6.2, 6.5
Catalogue identifier of previous version: AEGH_v1_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464
External routines: .Net Framework 2.0 Library
Does the new version supersede the previous version?: Yes
Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions.
Solution method: Second-order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions. Object-oriented solution, easy to reuse, extend, and customize in any development environment that accepts .Net assemblies or COM components. Treatment of two-particle reactions and decays. For each particle, calculation of the time measured in the particle's reference frame, according to the instantaneous velocity. Possibility to dynamically add particle properties (spin, isospin, etc.) and reactions/decays using a specific XML input file. Basic support for Monte Carlo simulations. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, “clusterization map”, and an energy conservation precision test. As an example of use, we implemented a toy model for nuclear relativistic collisions at 4.5 A GeV/c.
Reasons for new version: Following our goal of applying chaos theory to nuclear relativistic collisions at 4.5 A GeV/c, we developed a reaction module integrated with the Chaos Many-Body Engine. In the previous version, inheriting the Particle class was the only way to implement additional particle properties (spin, isospin, and so on); in the new version, particle properties can be added dynamically using a dictionary object. The application was improved to calculate the time measured in each particle's own reference frame. The module treats two-particle reactions (a+b→c+d), decays (a→c+d), stimulated decays, and more complicated schemas implemented as various combinations of these reactions. Following our goal of creating a flexible application, the reaction list, including the corresponding properties (cross sections, particle lifetimes, etc.), can be supplied as a parameter using a specific XML configuration file. The simulation output files were modified for systems with reactions while preserving backward compatibility. We propose the “Clusterization Map” as a new investigation method for many-body systems. The multi-dimensional Lyapunov exponent was adapted for use with systems of variable structure. Basic support for Monte Carlo simulations was also added.
Additional comments: Windows Forms application for testing the engine. Easy copy/paste-based deployment method.
Running time: Quadratic complexity.
Performing particle image velocimetry using artificial neural networks: a proof-of-concept
NASA Astrophysics Data System (ADS)
Rabault, Jean; Kolaas, Jostein; Jensen, Atle
2017-12-01
Traditional programs based on feature engineering are underperforming on a steadily increasing number of tasks compared with artificial neural networks (ANNs), in particular for image analysis. Image analysis is widely used in fluid mechanics when performing particle image velocimetry (PIV) and particle tracking velocimetry (PTV), and therefore it is natural to test the ability of ANNs to perform such tasks. We report for the first time the use of convolutional neural networks (CNNs) and fully connected neural networks (FCNNs) for performing end-to-end PIV. Realistic synthetic images are used for training the networks, and several synthetic test cases are used to assess the quality of each network's predictions and compare them with state-of-the-art PIV software. In addition, we present tests on real-world data that prove ANNs can be used not only with synthetic images but also with more noisy, imperfect images obtained in a real experimental setup. While the ANNs we present have a slightly higher root mean square error than state-of-the-art cross-correlation methods, they perform better near edges and allow for higher spatial resolution than such methods. In addition, it is likely that further work could produce ANNs that perform better than the proof-of-concept we offer.
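The cross-correlation baseline that such ANNs are compared against can be sketched as an integer-shift recovery on a synthetic interrogation window; real PIV software adds windowing, sub-pixel peak fitting, and multi-pass refinement, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic interrogation-window pair: frame b is frame a displaced by a
# known (dy, dx); the periodic shift keeps this toy example exact.
a = rng.random((64, 64))
dy, dx = 3, -2
b = np.roll(a, (dy, dx), axis=(0, 1))

# FFT-based cross-correlation; the peak location gives the displacement.
corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# Unwrap the wrapped peak indices into signed shifts in [-N/2, N/2).
shift = [(p + n // 2) % n - n // 2 for p, n in zip(peak, corr.shape)]
```

The FFT route computes the full correlation plane in O(N² log N), which is why it remains the standard against which learned PIV estimators are benchmarked.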
Absorption property of C@CIPs composites by the mechanical milling process
NASA Astrophysics Data System (ADS)
Liu, Ting; Zhou, Li; Zheng, Dianliang; Xu, Yonggang
2017-09-01
The C@CIPs absorbents were fabricated by the mechanical milling method. The particle morphology and crystal grain structure were characterized by scanning electron microscopy and X-ray diffraction, respectively. The complex permittivity and permeability of absorbing composites loaded with the hybrid particles were measured over 2-18 GHz, and the reflection loss (RL) and shielding effectiveness were calculated from the measured parameters. It was found that the MWCNTs were bonded to the CIP surfaces. The permittivity and permeability of the C@CIPs increased as the MWCNTs coated the CIPs, which is attributed to the dielectric properties of the MWCNTs, the particle shape, and the interactions of the two particle types, according to the Debye equation and the Maxwell-Garnett mixing rule. The C@CIPs composites showed good absorbing performance, with RL < -4 dB over 4.6-17 GHz at a thickness of 0.6 mm, as well as shielding performance (maximum 12.7 dB) over 2-18 GHz, indicating that C@CIPs may be an effective absorbing/shielding absorbent.
NASA Astrophysics Data System (ADS)
Razzaqi, A.; Liaghat, Gh.; Razmkhah, O.
2017-10-01
In this paper, the mechanical properties of aluminum (Al) matrix nano-composites fabricated by the powder metallurgy (PM) method are investigated. Alumina (Al2O3) nanoparticles were added in amounts of 0, 2.5, 5, 7.5, and 10 weight percent (wt%). For this purpose, Al powder (particle size: 20 µm) and nano-Al2O3 (particle size: 20 nm) in the various weight percentages were mixed and milled in a blade mixer for 15 minutes at 1500 rpm. The obtained mixtures were then compacted into the samples required for the different tests by means of a two-piece die, using uniaxial cold pressing at about 600 MPa and cold isostatic pressing (CIP). The samples were then sintered at 600°C for 90 minutes. Compression and three-point bending tests were performed on the samples, and the results allowed us to determine the optimal particle content for achieving the best mechanical properties.
Sharif, Elham; Kiely, Janice; Wraith, Patrick; Luxton, Richard
2013-05-01
A novel, integrated lysis and immunoassay methodology and system for intracellular protein measurement are described. The method uses paramagnetic particles both as a lysis agent and assay label resulting in a rapid test requiring minimal operator intervention, the test being homogeneous and completed in less than 10 min. A design study highlights the critical features of the magnetic detection system used to quantify the paramagnetic particles and a novel frequency-locked loop-based magnetometer is presented. A study of paramagnetic particle enhanced lysis demonstrates that the technique is more than twice as efficient at releasing intracellular protein as ultrasonic lysis alone. Results are presented for measurements of intracellular prostate specific antigen in an LNCAP cell line. This model was selected to demonstrate the rapidity and efficiency of intracellular protein quantification. It was shown that, on average, LNCAP cells contained 0.43 fg of prostate specific antigen. This system promises an attractive solution for applications that require a rapid determination of intracellular proteins.
Observation and Control of Hamiltonian Chaos in Wave-particle Interaction
NASA Astrophysics Data System (ADS)
Doveil, F.; Elskens, Y.; Ruzzon, A.
2010-11-01
Wave-particle interactions are central in plasma physics. The paradigm beam-plasma system can be advantageously replaced by a traveling wave tube (TWT), allowing its study in a much less noisy environment. This has led to detailed analysis of the self-consistent interaction between unstable waves and an either cold or warm electron beam. More recently, a cold test beam has been used to observe its interaction with externally excited waves. This allowed observation of the main features of Hamiltonian chaos and testing of a new method to efficiently channel chaotic transport in phase space. To simulate the particle dynamics in the TWT and other 1D particle-wave systems accurately and efficiently, a new symplectic, symmetric, second-order numerical algorithm was developed, using particle position as the independent variable with a fixed spatial step. This contribution reviews: the TWT and its connection to plasma physics, the resonant interaction of a charged particle with electrostatic waves, the observation of particle trapping and the transition to chaos, tests of chaos control, and a description of the simulation algorithm. The velocity distribution function of the electron beam is recorded with a trochoidal energy analyzer at the output of the TWT. An arbitrary waveform generator is used to launch a prescribed spectrum of waves along the 4 m long helix of the TWT. The nonlinear synchronization of particles by a single wave, responsible for Landau damping, is observed. We explore the resonant velocity domain associated with a single wave as well as the transition to large-scale chaos when the resonant domains of two waves and their secondary resonances overlap. This transition exhibits a devil's staircase behavior as the excitation level is increased, in agreement with numerical simulation. A new strategy for the control of chaos, building barriers to transport in phase space, is successfully tested, along with its robustness.
The underlying concepts extend far beyond the field of electron devices and plasma physics.
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
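The particle-filtering backbone of such a prognostic scheme can be sketched with a minimal bootstrap (sampling-importance-resampling) filter; the linear degradation model and noise levels below are illustrative assumptions, not the paper's ANFIS-based fault-growth model or its high-order filter.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy fault indicator: grows linearly, observed with Gaussian noise.
T, n_p = 50, 500
true_state = 0.1 * np.arange(1, T + 1)
obs = true_state + rng.normal(0.0, 0.2, T)

particles = rng.normal(0.0, 0.5, n_p)  # initial state hypotheses
est = np.empty(T)
for t in range(T):
    # Propagate each particle through the (assumed) fault-growth model
    # plus process noise.
    particles = particles + 0.1 + rng.normal(0.0, 0.05, n_p)
    # Weight by the Gaussian measurement likelihood, then normalize.
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.2) ** 2)
    w /= w.sum()
    est[t] = np.sum(w * particles)  # posterior-mean state estimate
    # Multinomial resampling to avoid weight degeneracy.
    particles = particles[rng.choice(n_p, n_p, p=w)]
```

For prognosis, the same particle set is propagated p steps ahead without measurement updates, and the times at which particles cross the failure threshold form the RUL probability density.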
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Qin, Hong; Liu, Jian
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.
Submicrometer Particle Sizing by Multiangle Light Scattering following Fractionation
Wyatt
1998-01-01
The acid test for any particle sizing technique is its ability to determine the differential number fraction size distribution of a simple, well-defined sample. The very best characterized polystyrene latex sphere standards have been measured extensively using transmission electron microscope (TEM) images of a large subpopulation of such samples or by means of the electrostatic classification method as refined at the National Institute of Standards and Technology. The great success, in the past decade, of on-line multiangle light scattering (MALS) detection combined with size exclusion chromatography for the measurement of polymer mass and size distributions suggested, in the early 1990s, that a similar attack for particle characterization might prove useful as well. At that time, fractionation of particles was achievable by capillary hydrodynamic chromatography (CHDF) and field flow fractionation (FFF) methods. The latter has proven most useful when combined with MALS to provide accurate differential number fraction size distributions for a broad range of particle classes. The MALS/FFF combination provides unique advantages and precision relative to FFF, photon correlation spectroscopy, and CHDF techniques used alone. For many classes of particles, resolution of the MALS/FFF combination far exceeds that of TEM measurements. Copyright 1998 Academic Press.
NASA Astrophysics Data System (ADS)
Doisneau, François; Arienti, Marco; Oefelein, Joseph C.
2017-01-01
For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier-Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium, where particle-particle coupling barely influences the transport, i.e., when particle pressure is negligible; the particle behavior is then close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. As a deterministic resolution method, it requires no effort on statistical convergence, noise control, or post-processing. All couplings are done among data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly coupled liquid jet with fine spatial resolution and apply it to high-fidelity large eddy simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
A direct force model for Galilean invariant lattice Boltzmann simulation of fluid-particle flows
NASA Astrophysics Data System (ADS)
Tao, Shi; He, Qing; Chen, Baiman; Yang, Xiaoping; Huang, Simin
The lattice Boltzmann method (LBM) has been widely used in the simulation of particulate flows involving complex moving boundaries. Due to the kinetic background of LBM, the bounce-back (BB) rule and the momentum exchange (ME) method can be easily applied to the solid boundary treatment and the evaluation of the fluid-solid interaction force, respectively. However, it has recently been found that both the BB and ME schemes may violate the principle of Galilean invariance (GI). Modified BB and ME methods have been proposed to reduce the GI error, but these remedies have subsequently been recognized to be inconsistent with Newton's third law. Therefore, contrary to those corrections based on the BB and ME methods, a unified iterative approach is adopted to handle the solid boundary in the present study. Furthermore, a direct force (DF) scheme is proposed to evaluate the fluid-particle interaction force. The methods preserve the efficiency of the BB and ME schemes, and their accuracy and Galilean invariance are verified and validated in test cases of particulate flows with freely moving particles.
Corvari, Vincent; Narhi, Linda O; Spitznagel, Thomas M; Afonina, Nataliya; Cao, Shawn; Cash, Patricia; Cecchini, Irene; DeFelippis, Michael R; Garidel, Patrick; Herre, Andrea; Koulov, Atanas V; Lubiniecki, Tony; Mahler, Hanns-Christian; Mangiagalli, Paolo; Nesta, Douglas; Perez-Ramirez, Bernardo; Polozova, Alla; Rossi, Mara; Schmidt, Roland; Simler, Robert; Singh, Satish; Weiskopf, Andrew; Wuchner, Klaus
2015-11-01
Measurement and characterization of subvisible particles (including proteinaceous and non-proteinaceous particulate matter) is an important aspect of the pharmaceutical development process for biotherapeutics. Health authorities have increased expectations for subvisible particle data beyond criteria specified in the pharmacopeia and covering a wider size range. In addition, subvisible particle data is being requested for samples exposed to various stress conditions and to support process/product changes. Consequently, subvisible particle analysis has expanded beyond routine testing of finished dosage forms using traditional compendial methods. Over the past decade, advances have been made in the detection and understanding of subvisible particle formation. This article presents industry case studies to illustrate the implementation of strategies for subvisible particle analysis as a characterization tool to assess the nature of the particulate matter and applications in drug product development, stability studies and post-marketing changes. Copyright © 2015 The International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
Tougas, Terrence P; Goodey, Adrian P; Hardwell, Gareth; Mitchell, Jolyon; Lyapustina, Svetlana
2017-02-01
The performance of two quality control (QC) tests for aerodynamic particle size distributions (APSD) of orally inhaled drug products (OIPs) is compared. One of the tests is based on the fine particle dose (FPD) metric currently expected by the European regulators. The other test, called efficient data analysis (EDA), uses the ratio of large particle mass to small particle mass (LPM/SPM), along with impactor sized mass (ISM), to detect changes in APSD for QC purposes. The comparison is based on analysis of APSD data from four products (two different pressurized metered dose inhalers (MDIs) and two dry powder inhalers (DPIs)). It is demonstrated that in each case, EDA is able to detect shifts and abnormalities that FPD misses. The lack of sensitivity on the part of FPD is due to its "aggregate" nature, since FPD is a univariate measure of all particles less than about 5 μm aerodynamic diameter, and shifts or changes within the range encompassed by this metric may go undetected. EDA is thus shown to be superior to FPD for routine control of OIP quality. This finding augments previously reported superiority of EDA compared with impactor stage groupings (favored by US regulators) for incorrect rejections (type I errors) when incorrect acceptances (type II errors) were adjusted to the same probability for both approaches. EDA is therefore proposed as a method of choice for routine quality control of OIPs in both European and US regulatory environments.
Traffic emission factors of ultrafine particles: effects from ambient air.
Janhäll, Sara; Molnar, Peter; Hallquist, Mattias
2012-09-01
Ultrafine particles have a significant detrimental effect on both human health and climate. In order to abate this problem, it is necessary to identify the sources of ultrafine particles. A parameterisation method is presented for estimating the levels of traffic-emitted ultrafine particles in terms of variables describing the ambient conditions. The method is versatile and could easily be applied to similar datasets in other environments. The data used were collected during a four-week period in February 2005, in Gothenburg, as part of the Göte-2005 campaign. The specific variables tested were temperature (T), relative humidity (RH), carbon monoxide concentration (CO), and the concentration of particles up to 10 μm diameter (PM10); all indicators are of importance for aerosol processes such as coagulation and gas-particle partitioning. These variables were selected because of their direct effect on aerosol processes (T and RH) or as proxies for aerosol surface area (CO and PM10), and because of their availability in local monitoring programmes, which increases the usability of the parameterisation. Emission factors are presented for 10-100 nm particles (ultrafine particles; EF_ufp), for 10-40 nm particles (EF_10-40), and for 40-100 nm particles (EF_40-100). For EF_40-100 no effect of ambient conditions was found. The emission factor equations are calculated based on an NOx emission factor of 1 g km^-1; thus the particle emission factors are easily expressed in units of particles per gram of NOx emitted. For 10-100 nm particles the emission factor is EF_ufp = 1.8 × 10^15 × (1 − 0.095 × CO − 3.2 × 10^-3 × T) particles km^-1. Alternative equations for the EFs in terms of T and PM10 concentration are also presented.
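Encoded directly, the reported ultrafine parameterisation reads as below; the abstract does not state the units of CO and T, so the arguments are left in whatever units the study uses, and the example inputs are arbitrary.

```python
def ef_ufp(co, temp):
    """Emission factor for 10-100 nm particles, in particles per km per
    gram of NOx emitted, per the reported parameterisation. `co` is the
    ambient CO concentration and `temp` the temperature, both in the
    study's (unspecified here) units."""
    return 1.8e15 * (1.0 - 0.095 * co - 3.2e-3 * temp)

# The formula's structure implies cooler, cleaner ambient air yields a
# larger ultrafine emission factor, e.g. ef_ufp(0.2, 0.0) > ef_ufp(0.2, 15.0).
```

At CO = 0 and T = 0 the expression reduces to the baseline 1.8 × 10^15 particles km^-1 per gram of NOx, with the CO and T terms acting as corrections for aerosol surface area and temperature-driven processes.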
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Liu, Jian; He, Yang
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms on a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms makes them especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: nonlinear Landau damping and the electron Bernstein wave.
Fouad, Anthony; Pfefer, T. Joshua; Chen, Chao-Wei; Gong, Wei; Agrawal, Anant; Tomlins, Peter H.; Woolliams, Peter D.; Drezek, Rebekah A.; Chen, Yu
2014-01-01
Point spread function (PSF) phantoms based on unstructured distributions of sub-resolution particles in a transparent matrix have been demonstrated as a useful tool for evaluating resolution and its spatial variation across image volumes in optical coherence tomography (OCT) systems. Measurements based on PSF phantoms have the potential to become a standard test method for consistent, objective and quantitative inter-comparison of OCT system performance. Towards this end, we have evaluated three PSF phantoms and investigated their ability to compare the performance of four OCT systems. The phantoms are based on 260-nm-diameter gold nanoshells, 400-nm-diameter iron oxide particles and 1.5-micron-diameter silica particles. The OCT systems included spectral-domain and swept source systems in free-beam geometries as well as a time-domain system in both free-beam and fiberoptic probe geometries. Results indicated that iron oxide particles and gold nanoshells were most effective for measuring spatial variations in the magnitude and shape of PSFs across the image volume. The intensity of individual particles was also used to evaluate spatial variations in signal intensity uniformity. Significant system-to-system differences in resolution and signal intensity and their spatial variation were readily quantified. The phantoms proved useful for identification and characterization of irregularities such as astigmatism. Our multi-system results provide evidence of the practical utility of PSF-phantom-based test methods for quantitative inter-comparison of OCT system resolution and signal uniformity. PMID:25071949
Electron microscopic investigation and elemental analysis of titanium dioxide in sun lotion.
Sysoltseva, M; Winterhalter, R; Wochnik, A S; Scheu, C; Fromme, H
2017-06-01
The objective of this research was to determine the size, shape and aggregation of titanium dioxide (TiO2) particles which are used in sun lotion as a UV-blocker. Overall, six sunscreens from various suppliers and two reference substances were analysed by electron microscopy (EM) techniques in combination with energy dispersive X-ray spectroscopy (EDS). Because of the high fat content of sun lotion, it was impossible to visualize the TiO2 particles without prior EM sample preparation. Different defatting methods for TiO2 from sunscreens were tested. A novel sample preparation method was developed which allowed the characterization of TiO2 particles with the help of EM and EDS. Aggregates of titanium dioxide with primary particle sizes varying between 15 and 40 nm were observed in only five products. In the sun lotion with the highest SPF, only a few small aggregates were found. In the sunscreen with the lowest SPF, the largest aggregates of TiO2 particles were detected, with sizes up to 1.6 μm. In one of the sun lotions, neither TiO2 nor ZnO was found in spite of the labelling. Instead, diamond-shaped particles approximately 500 nm in size were observed. These particles are composed of an organic material, as only carbon was detected by EDS. A novel defatting method for sample preparation of titanium dioxide nanoparticles used in sun cosmetics was developed. This method was applied to six different sun lotions with SPF between 30 and 50+. TiO2 particles were found in only five sunscreens. The sizes of the primary particles were below 100 nm and, according to the EU Cosmetic Regulation, have to be listed on the package with the term 'nano'. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit
2009-01-01
Microdosimetric quantities such as lineal energy, y, are better indices than LET for expressing the RBE of HZE particles. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty of calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.
NASA Astrophysics Data System (ADS)
Pál, Edit; Hornok, Viktória; Kun, Robert; Chernyshev, Vladimir; Seemann, Torben; Dékány, Imre; Busse, Matthias
2012-08-01
Zinc oxide particles with different morphologies were prepared by a hydrothermal method at 60-90 °C. The structure formation was controlled by the addition rate and temperature of the hydrolyzing agent, while the particle size (10 nm-2.5 μm) was influenced by the preparation (hydrothermal) temperature. Scanning electron microscopy studies showed that raspberry-, prism- and flower-like ZnO particles were prepared, whose average size decreased with increasing reaction temperature. X-ray diffraction investigations confirmed that ZnO particles with a hexagonal crystal structure formed in all syntheses. The raspberry-, prism- and flower-like ZnO particles showed weak UV emission in the range of 390-395 nm and strong visible emission with maxima at 586, 593 and 598 nm, respectively. The effect of morphology on the electrical and water vapour sensing properties of the ZnO samples was investigated by impedance spectroscopy and quartz crystal microbalance, respectively. The absolute impedance of the raspberry-, prism- and flower-like ZnO particles was found to be strongly dependent on morphology. A space-charge-limited conductivity transport mechanism was evidenced by the oscillatory behaviour of the impedance. Humidity sensor tests also revealed that sensitivity and water vapour adsorption depend on morphology and specific surface area.
Design and evaluation of an inlet conditioner to dry particles for real-time particle sizers.
Peters, Thomas M; Riss, Adam L; Holm, Ricky L; Singh, Manisha; Vanderpool, Robert W
2008-04-01
Real-time particle sizers provide rapid information about atmospheric particles, particularly peak exposures, which may be important in the development of adverse health outcomes. However, these instruments are subject to erroneous readings in high-humidity environments when compared with measurements from filter-based, federal reference method (FRM) samplers. Laboratory tests were conducted to evaluate the ability of three inlet conditioners to dry aerosol prior to entering a real-time particle sizer for measuring coarse aerosols (Model 3321 Aerodynamic Particle Sizer, APS) under simulated highly humid conditions. Two 30-day field studies in Birmingham, AL, USA were conducted to compare the response of two APSs operated with and without an inlet conditioner to that measured with FRM samplers. In field studies, the correlation of PM(10-2.5) derived from the APS and that measured with the FRM was substantially stronger with an inlet conditioner applied (r2 ranged from 0.91 to 0.99) than with no conditioner (r2 = 0.61). Laboratory experiments confirmed the ability of the heater and desiccant conditioner to remove particle-borne moisture. In field tests, water was found associated with particles across the sizing range of the APS (0.5 μm to 20 μm) when relative humidity was high in Birmingham. Certain types of inlet conditioners may substantially improve the correlation between particulate mass concentration derived from real-time particle sizers and filter-based samplers in humid conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Maoyuan; Besford, Quinn Alexander; Mulvaney, Thomas
The entropy of hydrophobic solvation has been explained as the result of ordered solvation structures, of hydrogen bonds, of the small size of the water molecule, of dispersion forces, and of solvent density fluctuations. We report a new approach to the calculation of the entropy of hydrophobic solvation, along with tests of and comparisons to several other methods. The methods are assessed in the light of the available thermodynamic and spectroscopic information on the effects of temperature on hydrophobic solvation. Five model hydrophobes in SPC/E water give benchmark solvation entropies via Widom's test-particle insertion method, and other methods and models are tested against these particle-insertion results. Entropies associated with distributions of tetrahedral order, of electric field, and of solvent dipole orientations are examined. We find these contributions are small compared to the benchmark particle-insertion entropy. Competitive with or better than other theories in accuracy, but with no free parameters, is the new estimate of the entropy contributed by correlations between dipole moments. Dipole correlations account for most of the hydrophobic solvation entropy for all models studied and capture the distinctive temperature dependence seen in thermodynamic and spectroscopic experiments. Entropies based on pair and many-body correlations in number density approach the correct magnitudes but fail to describe temperature and size dependences, respectively. Hydrogen-bond definitions and free energies that best reproduce entropies from simulations are reported, but it is difficult to choose one hydrogen bond model that fits a variety of experiments. The use of information theory, scaled-particle theory, and related methods is discussed briefly.
Our results provide a test of the Frank-Evans hypothesis that the negative solvation entropy is due to structured water near the solute, complement the spectroscopic detection of that solvation structure by identifying the structural feature responsible for the entropy change, and point to a possible explanation for the observed dependence on length scale. Our key results are that the hydrophobic effect, i.e. the signature, temperature-dependent, solvation entropy of nonpolar molecules in water, is largely due to a dispersion force arising from correlations between rotating permanent dipole moments, that the strength of this force depends on the Kirkwood g-factor, and that the strength of this force may be obtained exactly without simulation.
Lineage mapper: A versatile cell and particle tracker
NASA Astrophysics Data System (ADS)
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Halter, Michael; Bhadriraju, Kiran; Brady, Mary
2016-11-01
The ability to accurately track cells and particles from images is critical to many biomedical problems. To address this, we developed Lineage Mapper, an open-source tracker for time-lapse images of biological cells, colonies, and particles. Lineage Mapper tracks objects independently of the segmentation method, detects mitosis in confluence, separates cell clumps mistakenly segmented as a single cell, provides accuracy and scalability even on terabyte-sized datasets, and creates division and/or fusion lineages. Lineage Mapper has been tested and validated on multiple biological and simulated problems. The software is available in ImageJ and Matlab at isg.nist.gov.
Extracting joint weak values with local, single-particle measurements.
Resch, K J; Steinberg, A M
2004-04-02
Weak measurement is a new technique that allows one to describe the evolution of postselected quantum systems. It appears to be useful for resolving a variety of thorny quantum paradoxes, particularly when used to study properties of pairs of particles. Unfortunately, such nonlocal or joint observables often prove difficult to measure directly in practice (for instance, in optics, a common testing ground for this technique, strong photon-photon interactions would be needed to implement an appropriate von Neumann interaction). Here we derive a general, experimentally feasible method for extracting these joint weak values from correlations between single-particle observables.
Improved silicon nitride for advanced heat engines
NASA Technical Reports Server (NTRS)
Yeh, H. C.; Wimmer, J. M.; Huang, H. H.; Rorabaugh, M. E.; Schienle, J.; Styhr, K. H.
1985-01-01
The AiResearch Casting Company baseline silicon nitride (92 percent GTE SN-502 Si3N4 plus 6 percent Y2O3 plus 2 percent Al2O3) was characterized with methods that included chemical analysis, oxygen content determination, electrophoresis, particle size distribution analysis, surface area determination, and analysis of the degree of agglomeration and maximum particle size of elutriated powder. Test bars were injection molded and processed through sintering at 0.68 MPa (100 psi) of nitrogen. The as-sintered test bars were evaluated by X-ray phase analysis, room- and elevated-temperature modulus of rupture strength, Weibull modulus, stress rupture, strength after oxidation, fracture origins, microstructure, and density, using quantities of samples sufficiently large to generate statistically valid results. A series of small test matrices was conducted to study the effects and interactions of processing parameters, which included raw materials, binder systems, binder removal cycles, injection molding temperatures, particle size distribution, sintering additives, and sintering cycle parameters.
This study was a side-by-side comparison of two settling evaluation methods: one traditional and one new. The project investigated whether these column tests were capable of capturing or representing the rapidly settling particles present in wet-weather flows (WWF). The report r...
Two-way coupling of magnetohydrodynamic simulations with embedded particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Makwana, K. D.; Keppens, R.; Lapenta, G.
2017-12-01
We describe a method for coupling an embedded domain in a magnetohydrodynamic (MHD) simulation with a particle-in-cell (PIC) method. In this two-way coupling we follow the work of Daldorff et al. (2014) [19] in which the PIC domain receives its initial and boundary conditions from MHD variables (MHD to PIC coupling) while the MHD simulation is updated based on the PIC variables (PIC to MHD coupling). This method can be useful for simulating large plasma systems, where kinetic effects captured by particle-in-cell simulations are localized but affect global dynamics. We describe the numerical implementation of this coupling, its time-stepping algorithm, and its parallelization strategy, emphasizing the novel aspects of it. We test the stability and energy/momentum conservation of this method by simulating a steady-state plasma. We test the dynamics of this coupling by propagating plasma waves through the embedded PIC domain. Coupling with MHD shows satisfactory results for the fast magnetosonic wave, but significant distortion for the circularly polarized Alfvén wave. Coupling with Hall-MHD shows excellent coupling for the whistler wave. We also apply this methodology to simulate a Geospace Environmental Modeling (GEM) challenge type of reconnection with the diffusion region simulated by PIC coupled to larger scales with MHD and Hall-MHD. In both these cases we see the expected signatures of kinetic reconnection in the PIC domain, implying that this method can be used for reconnection studies.
Zölls, Sarah; Gregoritza, Manuel; Tantipolphan, Ruedeeporn; Wiggenhorn, Michael; Winter, Gerhard; Friess, Wolfgang; Hawe, Andrea
2013-05-01
The aim of the present study was to quantitatively assess the relevance of transparency and refractive index (RI) on protein particle analysis by the light-based techniques light obscuration (LO) and Micro-Flow Imaging (MFI). A novel method for determining the RI of protein particles was developed and provided an RI of 1.41 for protein particles from two different proteins. An increased RI of the formulation by high protein concentration and/or sugars at pharmaceutically relevant levels was shown to lead to a significant underestimation of the subvisible particle concentration determined by LO and MFI. An RI match even caused particles to become "invisible" for the system, that is, not detectable anymore by LO and MFI. To determine the influence of formulation RI on particle measurements, we suggest the use of polytetrafluoroethylene (PTFE) particles to test a specific formulation for RI effects. In case of RI influences, we recommend also using a light-independent technique such as resonant mass measurement (RMM) (Archimedes) for subvisible particle analysis in protein formulations. Copyright © 2013 Wiley Periodicals, Inc.
SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure
Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.
2017-01-01
A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341
Smoothed particle hydrodynamics method for evaporating multiphase flows.
Yang, Xiufeng; Kong, Song-Charng
2017-09-01
The smoothed particle hydrodynamics (SPH) method has been increasingly used for simulating fluid flows; however, its ability to simulate evaporating flow requires significant improvements. This paper proposes an SPH method for evaporating multiphase flows. The present SPH method can simulate the heat and mass transfers across the liquid-gas interfaces. The conservation equations of mass, momentum, and energy were reformulated based on SPH, then were used to govern the fluid flow and heat transfer in both the liquid and gas phases. The continuity equation of the vapor species was employed to simulate the vapor mass fraction in the gas phase. The vapor mass fraction at the interface was predicted by the Clausius-Clapeyron correlation. An evaporation rate was derived to predict the mass transfer from the liquid phase to the gas phase at the interface. Because of the mass transfer across the liquid-gas interface, the mass of an SPH particle was allowed to change. Alternative particle splitting and merging techniques were developed to avoid large mass difference between SPH particles of the same phase. The proposed method was tested by simulating three problems, including the Stefan problem, evaporation of a static drop, and evaporation of a drop impacting a hot surface. For the Stefan problem, the SPH results of the evaporation rate at the interface agreed well with the analytical solution. For drop evaporation, the SPH result was compared with the result predicted by a level-set method from the literature. In the case of drop impact on a hot surface, the evolution of the shape of the drop, temperature, and vapor mass fraction were predicted.
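The interface condition described above can be sketched as follows: the Clausius-Clapeyron relation gives the saturation vapor pressure at the interface temperature, from which a vapor mass fraction on the gas side follows from the partial pressures and molecular weights. The property values (water vapor in air) and the constant-latent-heat form are illustrative assumptions; the paper's exact formulation may differ:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def p_sat_clausius_clapeyron(T, T_ref=373.15, p_ref=101325.0,
                             latent_heat=2.26e6, M_vapor=0.018):
    """Saturation vapor pressure (Pa) at temperature T (K), assuming a
    constant latent heat (J/kg) between T_ref and T."""
    return p_ref * math.exp(-latent_heat * M_vapor / R * (1.0 / T - 1.0 / T_ref))

def interface_vapor_mass_fraction(T, p_total=101325.0,
                                  M_vapor=0.018, M_gas=0.029):
    """Vapor mass fraction at the liquid-gas interface: convert the vapor
    partial pressure (= saturation pressure) to a mass fraction."""
    p_v = p_sat_clausius_clapeyron(T)
    return p_v * M_vapor / (p_v * M_vapor + (p_total - p_v) * M_gas)

# Illustrative: water surface at 350 K under 1 atm of air
print(round(interface_vapor_mass_fraction(350.0), 3))
```

In the SPH scheme summarized above, this mass fraction would supply the boundary value for the vapor-species continuity equation, and its gradient at the interface drives the evaporation rate.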
NASA Technical Reports Server (NTRS)
Kendall, B. R. F.
1985-01-01
Charged-particle fluxes from breakdown events were studied. Methods to measure the mass spectra and total emitted flux of neutral particles were developed. The design and construction of the specialized mass spectrometer was completed. Electrical breakdowns were initiated by a movable blunt contact touching the insulating surface. The contact discharge apparatus was used for final development of two different high-speed recording systems and for measurements of the composition of the materials given off by the discharge. It was shown that intense instantaneous fluxes of neutral particles were released from the sites of electrical breakdown events. A laser micropulse mass analyzer showed that visible discoloration at breakdown sites was correlated with the presence of iron on the polymer side of the film, presumably caused by punch-through to the Inconel backing. Kapton samples irradiated by an oxygen ion beam were tested. The irradiated samples were free of surface hydrocarbon contamination but otherwise behaved in the same way as the Kapton samples tested earlier. Only the two samples exposed to oxygen ion bombardment were relatively clean. This indicates an additional variable that should be considered when testing spacecraft materials in the laboratory.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zhou, Chen; Wang, Zhili; Zhao, Shuyun; Li, Jiangnan
2015-08-01
Three different internal mixing methods (Core-Shell, Maxwell-Garnett, and Bruggeman) and one external mixing method are used to study the impact of the method of mixing black carbon (BC) with sulfate aerosol on their optical properties, radiative flux, and heating rate. The optical properties of a mixture of BC and sulfate aerosol particles are considered for three typical bands. The results show that the mixing method, the volume ratio of BC to sulfate, and relative humidity have a strong influence on the optical properties of mixed aerosols. Compared to internal mixing, external mixing underestimates the particle mass absorption coefficient by 20-70% and the particle mass scattering coefficient by up to 50%, whereas it overestimates the particle single scattering albedo by 20-50% in most cases. The asymmetry parameter, however, is strongly sensitive to the equivalent particle radius but only weakly sensitive to the mixing method. Of the internal methods, there is less than a 2% difference in all optical properties between the Maxwell-Garnett and Bruggeman methods in all bands; however, the differences between the Core-Shell and Maxwell-Garnett/Bruggeman methods are usually larger than 15% in the ultraviolet and visible bands. A sensitivity test is conducted with the Beijing Climate Center radiation transfer model (BCC-RAD) using a simulated BC concentration typical of east-central China and a sulfate volume ratio of 75%. The results show that the internal mixing methods reduce the radiative flux more effectively because they produce stronger absorption. The annual mean instantaneous radiative forcing due to BC-sulfate aerosol is about -3.18 W/m2 for the external method and -6.91 W/m2 for the internal methods at the surface, and -3.03/-1.56/-1.85 W/m2 for the external/Core-Shell/(Maxwell-Garnett/Bruggeman) methods, respectively, at the tropopause.
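Of the internal mixing rules named above, Maxwell-Garnett is the simplest to sketch: it gives an effective permittivity for small inclusions (here BC) embedded in a host medium (here sulfate). The complex refractive indices and volume fraction below are illustrative placeholders, not the values used in the study:

```python
# Maxwell-Garnett effective-medium rule for inclusions of permittivity
# eps_incl occupying volume fraction f of a host with permittivity eps_host.

def maxwell_garnett(eps_incl, eps_host, f):
    """Effective complex permittivity, valid for 0 <= f <= 1 (dilute
    spherical inclusions in the host)."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# permittivity = (complex refractive index)^2 for non-magnetic media;
# index values are assumptions, loosely typical of BC and sulfate near 550 nm
m_bc, m_sulfate = 1.95 + 0.79j, 1.53 + 1e-7j
eps_eff = maxwell_garnett(m_bc**2, m_sulfate**2, f=0.25)
m_eff = eps_eff**0.5  # effective complex refractive index of the mixture
print(m_eff)
```

Because BC is the absorbing component, the effective index acquires a substantial imaginary part even at modest volume fractions, which is why the internal mixing treatments absorb more strongly than the external mixture of the same components.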
Development of a Radial Deconsolidation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmreich, Grant W.; Montgomery, Fred C.; Hunn, John D.
2015-12-01
A series of experiments has been initiated to determine the retention or mobility of fission products in AGR fuel compacts [Petti, et al. 2010]. This information is needed to refine fission product transport models. The AGR-3/4 irradiation test involved half-inch-long compacts that each contained twenty designed-to-fail (DTF) particles, with 20-μm-thick carbon-coated kernels whose coatings were deliberately fabricated such that they would crack under irradiation, providing a known source of post-irradiation isotopes. The DTF particles in these compacts were axially distributed along the compact centerline so that the diffusion of fission products released from the DTF kernels would be radially symmetric [Hunn, et al. 2012; Hunn et al. 2011; Kercher, et al. 2011; Hunn, et al. 2007]. Compacts containing DTF particles were irradiated at Idaho National Laboratory (INL) in the Advanced Test Reactor (ATR) [Collin, 2015]. Analysis of the diffusion of these various post-irradiation isotopes through the compact requires a method to radially deconsolidate the compacts so that nested annular volumes may be analyzed for post-irradiation isotope inventory in the compact matrix, TRISO outer pyrolytic carbon (OPyC), and DTF kernels. An effective radial deconsolidation method and apparatus appropriate to this application has been developed and parametrically characterized.
NASA Astrophysics Data System (ADS)
Appel, J. K.; Köehler, J.; Guo, J.; Ehresmann, B.; Zeitlin, C.; Matthiä, D.; Lohf, H.; Wimmer-Schweingruber, R. F.; Hassler, D.; Brinza, D. E.; Böhm, E.; Böttcher, S.; Martin, C.; Burmeister, S.; Reitz, G.; Rafkin, S.; Posner, A.; Peterson, J.; Weigle, G.
2018-01-01
The Mars Science Laboratory rover Curiosity, operating on the surface of Mars, is exposed to radiation fluxes from above and below. Galactic Cosmic Rays travel through the Martian atmosphere, producing a modified spectrum consisting of both primary and secondary particles at ground level. These particles produce an upward directed secondary particle spectrum as they interact with the Martian soil. Here we develop a method to distinguish the upward and downward directed particle fluxes in the Radiation Assessment Detector (RAD) instrument, verify it using data taken during the cruise to Mars, and apply it to data taken on the Martian surface. We use a combination of Geant4 and Planetocosmics modeling to find discrimination criteria for the flux directions. After developing models of the cruise phase and surface shielding conditions, we compare model-predicted values for the ratio of upward to downward flux with those found in RAD observation data. Given the quality of available information on Mars Science Laboratory spacecraft and rover composition, we find generally reasonable agreement between our models and RAD observation data. This demonstrates the feasibility of the method developed and tested here. We additionally note that the method can also be used to extend the measurement range and capabilities of the RAD instrument to higher energies.
Summary of nondestructive testing theory and practice
NASA Technical Reports Server (NTRS)
Meister, R. P.; Randall, M. D.; Mitchell, D. K.; Williams, L. P.; Pattee, H. E.
1972-01-01
The ability to fabricate design-critical and man-rated aerospace structures using materials near the limits of their capabilities requires a comprehensive and dependable quality assurance program. The quality assurance program must rely heavily on nondestructive testing methods for thorough inspection to assess the properties and quality of hardware items. A survey of nondestructive testing methods is presented to provide space program managers, supervisors, and engineers who are unfamiliar with this technical area with appropriate insight into the commonly accepted nondestructive testing methods available, their interrelationships, uses, advantages, and limitations. Primary emphasis is placed on the most common methods: liquid penetrant, magnetic particle, radiography, ultrasonics, and eddy current. A number of the newer test techniques, including thermal, acoustic emission, holography, microwaves, eddy-sonic, and exo-electron emission, which are beginning to be used in applications of interest to NASA, are also discussed briefly.
SS-HORSE method for studying resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blokhintsev, L. D.; Mazur, A. I.; Mazur, I. A., E-mail: 008043@pnu.edu.ru
A new method for analyzing resonance states based on the Harmonic-Oscillator Representation of Scattering Equations (HORSE) formalism and analytic properties of partial-wave scattering amplitudes is proposed. The method is tested by applying it to the model problem of neutral-particle scattering and can be used to study resonance states on the basis of microscopic calculations performed within various versions of the shell model.
2013-06-20
Subject terms: automatic particle counter, cleanliness, free water, diesel. ...Governmental transfer receipts and 1.0 mg/L on issue to aircraft, or up to 10 mg/L for product used as a diesel product for ground use (1). ... The International Organization for Standardization (ISO) has published several methods and test procedures for the calibration and use of
Sol-gel methods for synthesis of aluminosilicates for dental applications.
Cestari, Alexandre
2016-12-01
Amorphous aluminosilicate glasses containing fluorine, phosphorus, and calcium are used as a component of glass ionomer dental cement. This cement is used as a restorative, base, or filling material, but presents lower mechanical resistance than resin-modified materials. The Sol-Gel method is a possible route for preparing such glasses with lower temperature and energy consumption, higher homogeneity, and uniform nanometric particles compared with industrial methods. Glass ionomer cements with uniform, homogeneous, nanometric particles can present higher mechanical resistance than commercial ionomers. The aim of this work was to adapt Sol-Gel methods to produce new aluminosilicate glass particles by non-hydrolytic, hydrolytic acid, and hydrolytic basic routes, in order to improve the characteristics of glass ionomer cements. Three materials were synthesized with the same composition to evaluate the properties of the glasses produced by the different methods, because multicomponent oxides are difficult to prepare homogeneously. The objective was to develop a new route to produce glass particles for ionomer cements with potentially higher resistance. The particles were characterized by thermal analysis (TG, DTA, DSC), transmission electron microscopy (TEM), X-ray diffraction (XRD), infrared spectroscopy (FTIR), and scanning electron microscopy coupled with energy-dispersive spectroscopy (SEM-EDS). The glasses were tested with polyacrylic acid to form the glass ionomer cement by the setting reaction. It was possible to produce distinct materials for dental applications, and one sample presented characteristics superior to commercial glasses for ionomer cements (homogeneity, nanometric particles, and homogeneous elemental distribution). The new production route can possibly improve the mechanical resistance of the ionomer cements. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zorn, Julia; Ritter, Bärbel; Miller, Manuel; Kraus, Monika; Northrup, Emily; Brielmeier, Markus
2017-06-01
One limitation of housing rodents in individually ventilated cages (IVCs) is the ineffectiveness of traditional health monitoring programs that test soiled bedding sentinels every quarter. Airborne transmission does not occur with this method. Moreover, the transmission of numerous pathogens in bedding is uncertain, and sentinel susceptibility to various pathogens varies. In this study, a novel method based on collecting particles from samples of exhaust air was developed and systematically compared with routine health monitoring using soiled bedding sentinels. We used our method to screen these samples for the presence of murine norovirus (MNV), a mouse pathogen highly prevalent in laboratory animal facilities. Exhaust air particles from prefilters of IVC racks with known MNV prevalence were tested by quantitative reverse transcription polymerase chain reaction (RT-qPCR). MNV was detected in exhaust air as early as one week with one MNV-positive cage per rack, while sentinels discharged MNV RNA without seroconverting. MNV was reliably and repeatedly detected in particles collected from samples of exhaust air in all seven of the three-month sampling rounds, with increasing MNV prevalence, while sentinels seroconverted in only one round. Under field conditions, routine soiled bedding sentinel health monitoring in our animal facility failed to identify 67% (n = 85) of samples positive by RT-qPCR of exhaust air particles. Thus, this method proved to be highly sensitive and superior to soiled bedding sentinels in the reliable detection of MNV. These results represent a major breakthrough in hygiene monitoring of rodent IVC systems and contribute to the 3R principles by reducing the number of animals used and by improving experimental conditions.
Multistrategy Self-Organizing Map Learning for Classification Problems
Hasan, S.; Shamsuddin, S. M.
2011-01-01
Multistrategy learning of Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain due to its capabilities in handling complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and a tendency to become trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons against existing SOM networks and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and quantisation errors than the other methods, as well as convincing significance tests. PMID:21876686
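The PSO weight-optimisation step described above relies on the standard velocity/position update. A minimal sketch of that update rule follows (this is not the authors' ESOMPSO implementation; the function name, bounds, and hyperparameters are illustrative):

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal best positions
    pval = np.apply_along_axis(f, 1, x)              # personal best values
    gbest = pbest[pval.argmin()].copy()              # global best position
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # canonical velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()
```

In ESOMPSO the objective would be a mapping-quality measure of the SOM weights rather than the toy function used here.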
Multiscale modeling of particle in suspension with smoothed dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Bian, Xin; Litvinov, Sergey; Qian, Rui; Ellero, Marco; Adams, Nikolaus A.
2012-01-01
We apply smoothed dissipative particle dynamics (SDPD) [Español and Revenga, Phys. Rev. E 67, 026705 (2003)] to model solid particles in suspension. SDPD is a thermodynamically consistent version of smoothed particle hydrodynamics (SPH) and can be interpreted as a multiscale particle framework linking the macroscopic SPH to the mesoscopic dissipative particle dynamics (DPD) method. Rigid structures of arbitrary shape embedded in the fluid are modeled by frozen particles on which artificial velocities are assigned in order to satisfy exactly the no-slip boundary condition on the solid-liquid interface. The dynamics of the rigid structures is decoupled from the solvent by solving extra equations for the rigid body translational/angular velocities derived from the total drag/torque exerted by the surrounding liquid. The correct scaling of the SDPD thermal fluctuations with the fluid-particle size allows us to describe the behavior of the particle suspension on spatial scales ranging continuously from the diffusion-dominated regime typical of sub-micron-sized objects towards the non-Brownian regime characterizing macro-continuum flow conditions. Extensive tests of the method are performed for two- and three-dimensional bulk particle systems in both Brownian and non-Brownian environments, showing numerical convergence and excellent agreement with analytical theories. Finally, to illustrate the ability of the model to couple with external boundary geometries, the effect of confinement on the diffusional properties of a single sphere within a micro-channel is considered, and the dependence of the diffusion coefficient on the wall-separation distance is evaluated and compared with available analytical results.
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes.
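Training an LS-SVM regressor reduces to solving a single linear system. A generic sketch with an RBF kernel follows (hyperparameters and data are illustrative; this is not the authors' ultrasonic feature pipeline):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # regularised kernel block
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]              # bias b, support values alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Unlike a standard SVM, every training sample receives a (dense) support value, which is what makes the training a linear-algebra problem rather than a quadratic program.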
Consistent kinetic simulation of plasma and sputtering in low temperature plasmas
NASA Astrophysics Data System (ADS)
Schmidt, Frederik; Trieschmann, Jan; Mussenbrock, Thomas
2016-09-01
Plasmas are commonly used in sputtering applications for the deposition of thin films. Although magnetron sources are a prominent choice, capacitively coupled plasmas have certain advantages (e.g., sputtering of non-conducting and/or ferromagnetic materials, along with excellent control of the ion energy distribution). In order to understand the collective plasma and sputtering dynamics, a kinetic simulation model is helpful. Particle-in-Cell has been proven successful in simulating the plasma dynamics, while the Test-Multi-Particle-Method can be used to describe the sputtered neutral species. In this talk, a consistent combination of these methods is presented, coupling the simulated ion flux as input to a neutral particle transport model. The combined model is used to simulate and discuss the spatially dependent densities, fluxes and velocity distributions of all particles. This work is supported by the German Research Foundation (DFG) in the frame of Transregional Collaborative Research Center (SFB) TR-87.
Stoney, David A; Bowen, Andrew M; Stoney, Paul L
2016-12-01
Loosely, moderately and strongly held particle fractions were separated from the contact surfaces of footwear and analyzed in an effort to detect distinct particle signals. Three environmental exposure sites were chosen to have different, characteristic particle types (soil minerals). Shoes of two types (work boots and tennis shoes) were tested, accumulating particles by walking 250m in each environment. Some shoes were exposed to only one environment; others were exposed to all three, in one of six different sequences. Sampling methods were developed to separate particles from the contact surface of the shoe based on how tightly they were held to the sole. Loosely held particles were removed by walking on paper, moderately held particles were removed by electrostatic lifting, and the most tightly held particles were removed by moist swabbing. The resulting numbers and types of particles were determined using forensic microscopy. Particle profiles from the different fractions were compared to test the ability to objectively distinguish the order of exposure to the three environments. Without exception, the samples resulting from differential sampling are dominated by the third site in the sequential footwear exposures. No noticeable differences are seen among the differential samplings of the loosely, moderately and strongly held particles: the same overwhelming presence of the third site is seen. It is clear from these results (1) that the third (final) exposure results in the nearly complete removal of any particles from prior exposures, and (2) that under the experimental conditions loosely, moderately and strongly held particles are affected similarly, without any detectable enrichment of the earlier exposures among the more tightly held particles. These findings have significant implications for casework, demonstrating that particles on the contact surfaces of footwear are rapidly lost and replaced. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Skevington, Jennifer L.
2010-01-01
Charged particle sources are integral devices used by Marshall Space Flight Center's Environmental Effects Branch (EM50) in order to simulate space environments for accurate testing of materials and systems. By using these sources inside custom vacuum systems, materials can be tested to determine charging and discharging properties as well as resistance to sputter damage. This knowledge enables scientists and engineers to choose proper materials that will not fail in harsh space environments. This paper describes the steps taken to build a low energy electron gun (the "Skevington 3000") as well as the methods used to characterize the output of both the Skevington 3000 and a manufactured xenon ion source. Such characterizations include beam flux, beam uniformity, and beam energy. Both sources were deemed suitable for simulating environments in future testing.
Autothermal reforming of propane over Mg-Al hydrotalcite-like catalysts.
Lim, You-Soon; Park, Nam-Cook; Shin, Jae-Soon; Kim, Jong-Ho; Moon, Dong-Ju; Kim, Young-Chul
2008-10-01
The performance of hydrotalcite-like catalysts in propane autothermal reforming for hydrogen production was studied in a fixed-bed flow reactor. Hydrotalcite-like catalysts were synthesized by co-precipitation and by a co-precipitation method modified by impregnation, and were promoted by the addition of noble metals. Reaction tests indicated that catalysts prepared by the modified method showed higher H2 yield than those prepared by co-precipitation, because surface Ni particles were more abundant on catalysts from the modified method. When noble metals were added, the activity was enhanced because the size of the nickel particles decreased and their degree of dispersion increased; carbon deposition after the reaction was also low. When the solvent of the solution was changed, activity increased, again because the degree of dispersion increased.
Lee, Mong-Chuan; Lin, Yen-Hui; Yu, Huang-Wei
2014-11-01
A mathematical model system was derived to describe the kinetics of ammonium nitrification in a fixed biofilm reactor using dewatered sludge-fly ash composite ceramic particles as a supporting medium. The model incorporates diffusive mass transport and Monod kinetics. The model was solved using a combination of the orthogonal collocation method and Gear's method. A batch test was conducted to observe the nitrification of ammonium-nitrogen (NH4(+)-N) and the growth of nitrifying biomass. The composition of the nitrifying bacterial community in the batch kinetic test was analyzed using the PCR-DGGE method. The experimental results show that the greatest staining-intensity abundance of bands occurred on day 2.75, with the highest biomass concentration of 46.5 mg/L. Chemostat kinetic tests were performed independently to evaluate the biokinetic parameters used in the model prediction. In the column test, the removal efficiency of NH4(+)-N was approximately 96% while the concentration of suspended nitrifying biomass was approximately 16 mg VSS/L, and the model-predicted biofilm thickness reached 0.21 cm at steady state. The profiles of denaturing gradient gel electrophoresis (DGGE) of different microbial communities demonstrated that indigenous nitrifying bacteria (Nitrospira and Nitrobacter) existed and were the dominant species in the fixed biofilm process.
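The Monod kinetics at the core of such models can be illustrated with a minimal batch simulation (explicit Euler; all parameter values are illustrative, not the paper's fitted biokinetic constants):

```python
def simulate_monod(S0=50.0, X0=5.0, mu_max=0.5, Ks=10.0, Y=0.4,
                   dt=0.01, t_end=20.0):
    """Batch Monod growth, explicit Euler integration.

    dX/dt =  mu_max * S/(Ks + S) * X      (biomass growth)
    dS/dt = -(1/Y) * mu_max * S/(Ks + S) * X   (substrate uptake)
    """
    S, X = S0, X0
    steps = int(t_end / dt)
    for _ in range(steps):
        mu = mu_max * S / (Ks + S)      # specific growth rate
        dX = mu * X * dt
        S = max(S - dX / Y, 0.0)        # substrate cannot go negative
        X += dX
    return S, X
```

Note the invariant X + Y*S, which is conserved by the yield relation and provides a quick sanity check on the integration.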
NASA Astrophysics Data System (ADS)
Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.
2007-01-01
Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability and comparisons between results from the tests. One test system which gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter Tester (updated and commercially available now as the TSI 3160), manufactured by TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown with a discussion of test variables and the need for proper SMPS volume purging and filter penetration correction procedures. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in this size range.
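Per size bin, the SMPS-based approach reduces to a downstream/upstream concentration ratio, from which the most penetrating particle size (MPPS) follows directly. A minimal sketch (function names and bin values are illustrative):

```python
import numpy as np

def penetration_percent(c_up, c_down):
    """Percent penetration per particle-size bin: P = 100 * C_down / C_up."""
    c_up = np.asarray(c_up, dtype=float)
    c_down = np.asarray(c_down, dtype=float)
    P = np.full(c_up.shape, np.nan)     # undefined where upstream count is zero
    ok = c_up > 0
    P[ok] = 100.0 * c_down[ok] / c_up[ok]
    return P

def mpps(sizes_nm, P):
    """Most penetrating particle size = bin with maximum penetration."""
    return sizes_nm[np.nanargmax(P)]
```

In practice the upstream and downstream scans must first be corrected for SMPS purge volume and counting efficiency, as the abstract notes.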
Bocalon, Anne C E; Mita, Daniela; Narumyia, Isabela; Shouha, Paul; Xavier, Tathy A; Braga, Roberto Ruggiero
2016-09-01
To test the null hypothesis that the replacement of a small fraction of glass particles with random short glass fibers does not affect degree of conversion (DC), flexural strength (FS), fracture toughness (FT) and post-gel polymerization shrinkage (PS) of experimental composites. Four experimental photocurable composites containing 1 BisGMA:1 TEGDMA (by weight) and 60vol% of fillers were prepared. The reinforcing phase was constituted by barium glass particles (2μm) and 0%, 2.5%, 5.0% or 7.5% of silanated glass fibers (1.4mm in length, 7-13μm in diameter). DC (n=4) was obtained using near-FTIR. FS (n=10) was calculated via biaxial flexural test and FT (n=10) used the "single edge notched beam" method. PS at 5min (n=8) was determined using the strain gage method. Data were analyzed by ANOVA/Tukey test (DC, FS, PS) or Kruskal-Wallis/Dunn's test (FT, alpha: 5% for both tests). DC was similar among groups (p>0.05). Only the composite containing 5.0% of fibers presented lower FS than the control (p<0.001). FT increased significantly between the control (1.3±0.17MPam(0.5)) and the composites containing either 5.0% (2.7±0.6MPam(0.5)) or 7.5% of fibers (2.8±0.6MPam(0.5), p<0.001). PS in relation to control was significantly reduced at 2.5% fibers (from 0.81±0.13% to 0.57±0.13%) and further reduced between 5.0% and 7.5% (from 0.42±0.12% to 0.23±0.07%, p<0.001). The replacement of a small fraction of filler particles with glass fibers significantly increased fracture toughness and reduced post-gel shrinkage of experimental composites. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Feenstra, B. J.; Polidori, A.; Tisopulos, L.; Papapostolou, V.; Zhang, H.; Pathmanabhan, J.
2016-12-01
In recent years, great progress has been made in the development of low-cost miniature air quality sensing technologies. Such low-cost sensors offer the prospect of providing real-time, spatially dense information on pollutants; however, the quality of the data produced by these sensors is largely untested. In an effort to inform the general public about the actual performance of commercially available low-cost air quality sensors, in June 2014 the South Coast Air Quality Management District (SCAQMD) established the Air Quality Sensor Performance Evaluation Center (AQ-SPEC). This program performs a thorough characterization of low-cost sensors under ambient (in the field) and controlled (in the laboratory) conditions. During field testing, air quality sensors are operated side-by-side with Federal Reference Method and Federal Equivalent Method (FRM and FEM, respectively) instruments, which are routinely used to measure ambient concentrations of gaseous and particle pollutants for regulatory purposes. Field testing is conducted at two of SCAQMD's existing air monitoring stations, one in Rubidoux and one near the I-710 freeway. Sensors that demonstrate acceptable performance in the field are brought back to the laboratory, where a "characterization chamber" is used to challenge these devices with known concentrations of different particle and gaseous pollutants at different temperature and relative humidity levels. Testing results for each sensor are then summarized in a technical report and, along with other relevant information, posted on a dedicated website (www.aqmd.gov/aq-spec) to educate the public about the capabilities of commercially available sensors and their potential applications. In this presentation, the results from two years of field and laboratory testing will be presented, and the major strengths and weaknesses of some of the most commonly available particle and gaseous sensors will be discussed.
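Side-by-side field evaluation against a collocated FRM/FEM reference typically reduces to a few comparison statistics. A generic sketch (the metric choices are illustrative, not the AQ-SPEC protocol itself):

```python
import numpy as np

def compare_to_reference(sensor, reference):
    """Side-by-side metrics for a low-cost sensor vs. a collocated reference."""
    s = np.asarray(sensor, dtype=float)
    r = np.asarray(reference, dtype=float)
    resid = s - r
    return {
        "bias": resid.mean(),                 # mean offset from reference
        "mae": np.abs(resid).mean(),          # mean absolute error
        "r2": np.corrcoef(s, r)[0, 1] ** 2,   # squared Pearson correlation
    }
```

A sensor can score a high r2 while still carrying a large bias (e.g., a constant multiplicative error), which is why both linearity and offset metrics are reported.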
Particle-based solid for nonsmooth multidomain dynamics
NASA Astrophysics Data System (ADS)
Nordberg, John; Servin, Martin
2018-04-01
A method for the simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. Each particle's strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits a unified framework for nonsmooth multidomain dynamics simulations, including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative, perfectly plastic, modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on deformable terrain is demonstrated.
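The moving least squares step used to compute strain from scattered pseudo-particles can be illustrated in 1D with a linear basis and a Gaussian weight (a toy sketch of the MLS derivative estimate, not the paper's tensor formulation):

```python
import numpy as np

def mls_gradient(x_nodes, f_nodes, x_eval, h):
    """Approximate df/dx at x_eval via moving least squares (linear basis)."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    f_nodes = np.asarray(f_nodes, dtype=float)
    # Gaussian weight centred on the evaluation point, support scale h
    w = np.exp(-((x_nodes - x_eval) / h) ** 2)
    # linear basis p(x) = [1, x - x_eval]; weighted least squares fit
    P = np.stack([np.ones_like(x_nodes), x_nodes - x_eval], axis=1)
    A = P.T @ (w[:, None] * P)          # moment matrix
    b = P.T @ (w * f_nodes)
    coeff = np.linalg.solve(A, b)
    return coeff[1]                     # slope of the local fit = gradient
```

Because the basis is linear, the estimate reproduces linear fields exactly, which is the minimal consistency requirement for a strain operator.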
Smoothed particle hydrodynamics method from a large eddy simulation perspective
NASA Astrophysics Data System (ADS)
Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.
2017-03-01
The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
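The SPH smoothing that the paper reinterprets as Lagrangian LES filtering is, at its core, a kernel summation. A 1D density-summation sketch with the standard cubic-spline kernel (an illustrative fragment, not the authors' LES-SPH model):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 1D cubic-spline SPH kernel, support radius 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)             # 1D normalisation constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """Density summation rho_i = sum_j m_j W(x_i - x_j, h)."""
    r = x[:, None] - x[None, :]
    return (m[None, :] * cubic_spline_w(r, h)).sum(axis=1)
```

The same summation, read as a convolution with the kernel, is the "filter that moves with the fluid particles" in the LES reinterpretation.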
An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas
NASA Astrophysics Data System (ADS)
Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio
2008-07-01
Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.
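Backward particle tracking of the kind postprocessed by the APA algorithm can be sketched for an analytic flow field (one extraction well in a uniform ambient flow; the explicit-Euler integrator and all parameter values are illustrative, not the paper's flow model):

```python
import numpy as np

def seepage_velocity(p, U=1e-5, Q=1e-3, b=10.0, ne=0.3):
    """2D seepage velocity: uniform flow (+x) plus an extraction well at the origin.

    U: ambient Darcy flux [m/s], Q: pumping rate [m^3/s],
    b: aquifer thickness [m], ne: effective porosity [-].
    """
    x, y = p
    r2 = x * x + y * y
    vx = U - Q / (2.0 * np.pi * b) * x / r2
    vy = -Q / (2.0 * np.pi * b) * y / r2
    return np.array([vx, vy]) / ne

def backward_track(p0, dt=3600.0, n_steps=500):
    """Trace a pathline upstream by integrating the reversed velocity field."""
    p = np.array(p0, dtype=float)
    path = [p.copy()]
    for _ in range(n_steps):
        p = p - seepage_velocity(p) * dt   # explicit Euler, reversed time
        path.append(p.copy())
    return np.array(path)
```

The time-related capture zone is then the envelope of such backward pathlines truncated at the protection travel time, which is the perimeter the APA algorithm delineates automatically.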
Dittmer, W U; de Kievit, P; Prins, M W J; Vissers, J L M; Mersch, M E C; Martens, M F W C
2008-09-30
A rapid method for the sensitive detection of proteins using actuated magnetic particle labels, which are measured with a giant magneto-resistive (GMR) biosensor, is described. The technique involves a 1-step sandwich immunoassay with no fluid replacement steps. The various assay binding reactions as well as the bound/free separation are entirely controlled by magnetic forces induced by electromagnets above and below the sensor chip. During the assay, particles conjugated with tracer antibodies are actuated through the sample for target capture, and rapidly brought to the sensor surface where they bind to immobilized capture antibodies. Weakly or unbound labels are removed with a magnetic force oriented away from the GMR sensor surface. For the measurement of parathyroid hormone (PTH), a detection limit in the 10 pM range is obtained with a total assay time of 15 min when 300 nm particles are used. The same sensitivity can be achieved in 5 min when 500 nm particles are used. If 500 nm particles are employed in a 15-minute assay, then 0.8 pM of PTH is detectable. The low sample volume, high analytical performance and high speed of the test coupled with the compact GMR biosensor make the system especially suitable for sensitive testing outside of laboratory environments.
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
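A deterministic nearest-neighbor collision-partner selection within one cell, one of the scheme families discussed above, can be sketched greedily (an illustrative simplification, not the paper's DSMC implementation):

```python
import numpy as np

def nearest_neighbor_pairs(positions, seed=1):
    """Greedily pair each randomly drawn particle with its nearest
    unpaired neighbor in the same cell (illustrative DSMC sketch)."""
    rng = np.random.default_rng(seed)
    unpaired = list(range(len(positions)))
    pairs = []
    while len(unpaired) >= 2:
        # draw a first collision partner at random from the cell
        i = unpaired.pop(rng.integers(len(unpaired)))
        # deterministic second partner: nearest remaining particle
        dists = [np.linalg.norm(positions[i] - positions[j]) for j in unpaired]
        j = unpaired.pop(int(np.argmin(dists)))
        pairs.append((i, j))
    return pairs
```

Production DSMC codes additionally track and exclude repetitive collisions of the same pair, an effect the paper examines via the Fourier heat flux.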
Measuring droplet size distributions from overlapping interferometric particle images.
Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem
2015-02-01
Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-of-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.
NASA Astrophysics Data System (ADS)
Park, Kwangsoo
In this dissertation, a research effort aimed at the development and implementation of a direct field test method to evaluate the linear and nonlinear shear modulus of soil is presented. The field method utilizes a surface footing that is dynamically loaded horizontally. The test procedure involves applying static and dynamic loads to the surface footing and measuring the soil response beneath the loaded area using embedded geophones. A wide range in dynamic loads under a constant static load permits measurements of linear and nonlinear shear wave propagation from which shear moduli and associated shearing strains are evaluated. Shear wave velocities in the linear and nonlinear strain ranges are calculated from time delays in waveforms monitored by geophone pairs. Shear moduli are then obtained using the shear wave velocities and the mass density of the soil. Shear strains are determined using particle displacements calculated from particle velocities measured at the geophones by assuming a linear variation between geophone pairs. The field test method was validated by conducting an initial field experiment at a sandy site in Austin, Texas. Then, field experiments were performed on cemented alluvium, a complex, hard-to-sample material. Three separate locations at Yucca Mountain, Nevada were tested. The tests successfully measured: (1) the effect of confining pressure on shear and compression moduli in the linear strain range and (2) the effect of strain on shear moduli at various states of stress in the field. The field measurements were first compared with empirical relationships for uncemented gravel. This comparison showed that the alluvium was clearly cemented. The field measurements were then compared to other independent measurements including laboratory resonant column tests and field seismic tests using the spectral-analysis-of-surface-waves method.
The results from the field tests were generally in good agreement with the other independent test results, indicating that the proposed method has the ability to directly evaluate complex materials like cemented alluvium in the field.
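The modulus evaluation described above follows directly from the geophone-pair time delay; a minimal sketch with illustrative numbers:

```python
def shear_wave_velocity(spacing_m, delay_s):
    """V_s from the travel-time delay between a geophone pair."""
    return spacing_m / delay_s

def shear_modulus(density_kg_m3, vs_m_s):
    """Small-strain shear modulus G = rho * V_s^2."""
    return density_kg_m3 * vs_m_s ** 2
```

Repeating the calculation at increasing dynamic load levels, with the shear strain estimated from the geophone particle displacements, yields the modulus-versus-strain curves the dissertation reports.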
NASA Astrophysics Data System (ADS)
Frederickson, Lee Thomas
Much of combustion research focuses on reducing soot particulates in emissions. However, current research at the San Diego State University (SDSU) Combustion and Solar Energy Laboratory (CSEL) is underway to develop a high temperature solar receiver which will utilize carbon nanoparticles as a solar absorption medium. To produce carbon nanoparticles for the small particle heat exchange receiver (SPHER), a lab-scale carbon particle generator (CPG) has been built and tested. The CPG is a heated ceramic tube reactor with a set point wall temperature of 1100-1300°C operating at 5-6 bar pressure. Natural gas and nitrogen are fed to the CPG, where the natural gas undergoes pyrolysis, yielding carbon particles. The gas-particle mixture is met downstream with dilution air and sent to the lab-scale solar receiver. To predict soot yield and general trends in CPG performance, a model has been set up in Reaction Design CHEMKIN-PRO software. One of the primary goals of this research is to accurately measure particle properties. Mean particle diameter, size distribution, and index of refraction are calculated using Scanning Electron Microscopy (SEM) and a Diesel Particulate Scatterometer (DPS). Filter samples taken during experimentation are analyzed to obtain a particle size distribution with SEM images processed in ImageJ software. These results are compared with the DPS, which calculates the particle size distribution and the index of refraction from light scattering using Mie theory. For testing with the lab-scale receiver, a particle diameter range of 200-500 nm is desired. Test conditions are varied to understand the effects of operating parameters on particle size and the ability to obtain the desired size range. Analysis of particle loading is the other important metric for this research. Particle loading is measured downstream of the CPG outlet and dilution air mixing point.
The air-particle mixture flows through an extinction tube where the opacity of the mixture is measured with a 532 nm laser and detector. Beer's law is then used to calculate particle loading. The CPG needs to produce a certain particle loading for a corresponding receiver test. By obtaining the particle loading in the system, the reaction conversion to solid carbon in the CPG can be calculated as a measure of CPG efficiency. To predict trends in reaction conversion and particle size from experimentation, the CHEMKIN-PRO computer model for the CPG is run for various flow rates and wall temperature profiles. These predictions were a reason for testing at higher wall set point temperatures. Based on these research goals, it was shown that the CPG consistently produces a mean particle diameter of 200-400 nm at the conditions tested, fitting well within the desired range. This led to successful lab-scale SPHER testing, which produced a 10-point efficiency increase and a 150°C temperature difference with particles present. Also, at a 3 g/s dilution air flow rate, an efficiency of 80% at an outlet temperature above 800°C was obtained. Higher CPG experimental temperatures showed promise for higher reaction conversion, both experimentally and in the model. However, based on wall temperature data taken during experimentation, it is apparent that the CPG needs multiple heating zones with separate temperature controllers in order to have an isothermal zone rather than a parabolic temperature profile. As for the computer model, it predicted much higher reaction conversion at higher temperature. The mass fraction of fuel in the inlet stream was shown not to affect conversion, while increasing residence time led to increasing conversion. The particle size distribution in the model was far from the measured one and showed a bimodal distribution for one of the statistical methods.
Using the results from experimentation and modeling, a preliminary CPG design is presented that will operate in a 5MWth receiver system.
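The extinction-based loading measurement described above can be sketched numerically. Assuming monodisperse spheres and a fixed extinction efficiency Q_ext (a simplification; the actual DPS analysis uses Mie theory and a measured size distribution, and all numbers below are illustrative assumptions, not figures from the study), the Beer-Lambert law gives the mass loading directly:

```python
import math

def particle_mass_loading(transmission, path_length_m, diameter_m,
                          density_kg_m3, q_ext=2.0):
    """Mass loading (kg/m^3) of monodisperse spheres from laser extinction.

    Beer-Lambert: I/I0 = exp(-beta * L), with extinction coefficient
    beta = 3 * C * q_ext / (2 * rho * d) for spheres of diameter d.
    """
    beta = -math.log(transmission) / path_length_m      # extinction coeff, 1/m
    return beta * 2.0 * density_kg_m3 * diameter_m / (3.0 * q_ext)

# 80% transmission over a 0.1 m tube, 300 nm soot-like spheres (~1800 kg/m^3)
C = particle_mass_loading(0.80, 0.1, 300e-9, 1800.0)    # ~4.0e-4 kg/m^3
```

Dividing the inferred loading by the carbon mass fed as natural gas would then give the reaction conversion the abstract mentions.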
Reduced projection angles for binary tomography with particle aggregation.
Al-Rifaie, Mohammad Majid; Blackwell, Tim
This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections and compared to filtered back projection, a random search algorithm and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms on small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data is unavoidable.
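To make the projection-data idea concrete, here is a toy binary reconstruction from two axis-aligned projections (row and column sums) on a tiny grid. This exhaustive search is a hypothetical stand-in for illustration only, not PART itself, which evolves particle aggregates stochastically:

```python
import itertools

def reconstruct_from_sums(row_sums, col_sums):
    """Exhaustive binary reconstruction from axis-aligned projections.

    Returns the first binary image whose row and column sums match the
    given projections, or None. Feasible only for very small grids."""
    n, m = len(row_sums), len(col_sums)
    for bits in itertools.product((0, 1), repeat=n * m):
        img = [list(bits[i * m:(i + 1) * m]) for i in range(n)]
        rows = [sum(r) for r in img]
        cols = [sum(c) for c in zip(*img)]
        if rows == list(row_sums) and cols == list(col_sums):
            return img
    return None

phantom = [[1, 0], [1, 1]]
rec = reconstruct_from_sums([1, 2], [2, 1])   # projections of the phantom
```

With only two projections the solution is generally non-unique, which is exactly why priors such as PART's aggregation assumption matter at low projection counts.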
NASA Astrophysics Data System (ADS)
Guerrero, Massimo; Di Federico, Vittorio
2018-03-01
The use of acoustic techniques has become common for estimating suspended sediment in water environments. An emitted beam propagates into water, producing backscatter and attenuation that depend on the concentration and size distribution of the scattering particles. Unfortunately, the actual particle size distribution (PSD) may largely affect the accuracy of concentration quantification through the unknown coefficients of backscattering strength, ks2, and normalized attenuation, ζs. This issue was partially solved by applying the multi-frequency approach. Despite this possibility, a relevant scientific and practical question remains regarding the possibility of using acoustic methods to investigate poorly sorted sediment in the spectrum ranging from clay to fine sand. The aim of this study is to investigate the possibility of combining the measurement of sound attenuation and backscatter to determine ζs for the suspended particles and the corresponding concentration. The proposed method is only moderately dependent on the actual PSD, thus relaxing the need for frequent calibrations to account for changes in the ks2 and ζs coefficients. Laboratory tests were conducted under controlled conditions to validate this measurement technique. With respect to existing approaches, the developed method more accurately estimates the concentration of suspended particles ranging from clay to fine sand and, at the same time, gives an indication of their actual PSD.
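The inversion problem can be illustrated with a single-bin sonar-equation sketch in decibel form. The coefficient names mirror the paper's ks2 and ζs, but the functional form, signs, and units here are simplified assumptions, not the authors' equations. Because the sediment attenuation term depends on the unknown concentration itself, a fixed-point iteration is used:

```python
import math

def concentration_from_backscatter(RL_dB, r, ks2_dB, alpha_w, zeta_s,
                                   n_iter=50):
    """Single-bin inversion of a simplified sonar equation for suspended
    sediment concentration C (kg/m^3):

        10*log10(C) = RL_dB - ks2_dB + 20*log10(r) + 2*r*(alpha_w + zeta_s*C)

    where RL_dB is the received echo level, r the range (m), alpha_w the
    water absorption and zeta_s the sediment attenuation coefficient.
    C appears on both sides, so we iterate to a fixed point."""
    C = 0.0
    for _ in range(n_iter):
        dB = (RL_dB - ks2_dB + 20.0 * math.log10(r)
              + 2.0 * r * (alpha_w + zeta_s * C))
        C = 10.0 ** (dB / 10.0)
    return C

# With no attenuation the inversion is closed-form: C = 10**(-20/10) = 0.01
C0 = concentration_from_backscatter(-60.0, 1.0, -40.0, 0.0, 0.0)
# Sediment attenuation raises the inferred concentration for the same echo
C1 = concentration_from_backscatter(-60.0, 1.0, -40.0, 0.0, 5.0)
```

The dependence of both ks2_dB and zeta_s on the PSD is precisely the calibration burden the paper's combined attenuation-backscatter method is designed to relax.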
FAITH – Fast Assembly Inhibitor Test for HIV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadravová, Romana; Rumlová, Michaela, E-mail: michaela.rumlova@vscht.cz; Department of Biotechnology, University of Chemistry and Technology, Prague, Technická 5, 166 28 Prague
Due to the high number of drug-resistant HIV-1 mutants generated by highly active antiretroviral therapy (HAART), there is continuing demand for new types of inhibitors. The assembly of the Gag polyprotein into both immature and mature HIV-1 particles is an attractive target for blocking the retroviral life cycle. Currently, no therapeutically-used assembly inhibitor is available. One possible explanation is the lack of a reliable and simple assembly inhibitor screening method. To identify compounds potentially inhibiting the formation of both types of HIV-1 particles, we developed a new fluorescent high-throughput screening assay. This assay is based on the quantification of the assembly efficiency in vitro in a 96-well plate format. The key components of the assay are HIV-1 Gag-derived proteins and a dual-labelled oligonucleotide, which emits fluorescence only when the assembly of retroviral particles is inhibited. The method was validated using three (CAI, BM2, PF74) reported assembly inhibitors. - Highlights: • Allows screening of assembly inhibitors of both mature and immature HIV-1 particles. • Based on Gag-derived proteins with CA in mature or immature conformation. • Simple and sensitive method suitable for high-throughput screening of inhibitors. • Unlike other HIV assembly methods, works under physiological conditions. • No washing steps are necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shi'ang
Primary particles formed in as-cast Al-5Mg-0.6Sc alloy and their role in the microstructure and mechanical properties of the alloy were investigated using optical microscopy (OM), scanning electron microscopy (SEM), electron back-scatter diffraction (EBSD) and tensile testing. It was found that primary particles, owing to their close orientation to the matrix, could serve as potent heterogeneous nucleation sites for α-Al during solidification and thus impose a remarkable grain refinement effect. A eutectic structure consisting of alternating layers of ‘Al₃Sc + α-Al + Al₃Sc + ⋯’ and a cellular-dendritic substructure were simultaneously observed inside the particles, indicating that these particles could be identified as eutectics rather than individual Al₃Sc phase. A calculation method, based on EBSD results, was introduced for the spatial distribution of these particles in the matrix. The results showed that these eutectic particles were randomly distributed in the matrix. In addition, the formation of primary eutectic particles significantly improved the strength of the Al-Mg alloy in the as-cast condition, which is ascribed to the structural evolution from coarse dendrites to fine equiaxed grains. On the other hand, these large-sized particles, owing to their tendency to act as microcrack sources, could harm the ductility of the Al-Mg-Sc alloy. - Highlights: •Primary particles exhibit an ‘Al₃Sc + α-Al + Al₃Sc + ⋯’ multilayer feature with a cellular-dendritic mode of growth. •EBSD analyses the mechanism of grain refinement and the distribution of primary particles in the α-Al matrix. •A computational method was presented to calculate the habit planes of primary particles.
NASA Astrophysics Data System (ADS)
Douillet-Grellier, Thomas; Pramanik, Ranjan; Pan, Kai; Albaiz, Abdulaziz; Jones, Bruce D.; Williams, John R.
2017-10-01
This paper develops a method for imposing stress boundary conditions in smoothed particle hydrodynamics (SPH) with and without the need for dummy particles. SPH has been used for simulating phenomena in a number of fields, such as astrophysics and fluid mechanics. More recently, the method has gained traction as a technique for simulation of deformation and fracture in solids, where the meshless property of SPH can be leveraged to represent arbitrary crack paths. Despite this interest, application of boundary conditions within the SPH framework is typically limited to imposed velocity or displacement using fictitious dummy particles to compensate for the lack of particles beyond the boundary interface. While this is enough for a large variety of problems, especially in the case of fluid flow, for problems in solid mechanics there is a clear need to impose stresses upon boundaries. In addition to this, the use of dummy particles to impose a boundary condition is not always suitable or even feasible, especially for those problems which include internal boundaries. In order to overcome these difficulties, this paper first presents an improved method for applying stress boundary conditions in SPH with dummy particles. This is then followed by a proposal of a formulation which does not require dummy particles. These techniques are then validated against analytical solutions to two common problems in rock mechanics, the Brazilian test and the penny-shaped crack problem, both in 2D and 3D. This study highlights the fact that SPH offers a good level of accuracy to solve these problems and that results are reliable. This validation work serves as a foundation for addressing more complex problems involving plasticity and fracture propagation.
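The particle-deficiency problem that dummy particles address is easy to demonstrate in 1D. With a standard cubic spline kernel, a density summation at a boundary particle recovers only part of the reference density unless a layer of dummy particles is mirrored across the interface (the layout and parameters below are illustrative, not taken from the paper):

```python
import math

def cubic_spline_w(r, h):
    """Standard 1D cubic spline SPH kernel, support radius 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)                  # 1D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

h, dx = 0.1, 0.05
m = 1.0 * dx                                 # particle mass for unit density
fluid = [i * dx for i in range(40)]          # fluid occupies x >= 0
dummies = [-(i + 1) * dx for i in range(4)]  # dummy layer beyond x = 0

def density(x, neighbours):
    """SPH density summation rho(x) = sum_j m * W(x - x_j, h)."""
    return sum(m * cubic_spline_w(x - xj, h) for xj in neighbours)

rho_truncated = density(0.0, fluid)          # kernel cut off at the boundary
rho_padded = density(0.0, fluid + dummies)   # deficiency compensated
```

Here the truncated sum recovers only two thirds of the reference density while the padded sum recovers it exactly; imposing a stress rather than a density or velocity at such an interface is the harder question the paper addresses.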
Training manuals for nondestructive testing using magnetic particles
NASA Technical Reports Server (NTRS)
1968-01-01
Training manuals containing the fundamentals of nondestructive testing using magnetic particle as detection media are used by metal parts inspectors and quality assurance specialists. Magnetic particle testing involves magnetization of the test specimen, application of the magnetic particle and interpretation of the patterns formed.
The Use of Tooth Particles as a Biomaterial in Post-Extraction Sockets. Experimental Study in Dogs.
Calvo-Guirado, José Luis; Maté-Sánchez de Val, José Eduardo; Ramos-Oltra, María Luisa; Pérez-Albacete Martínez, Carlos; Ramírez-Fernández, María Piedad; Maiquez-Gosálvez, Manuel; Gehrke, Sergio A; Fernández-Domínguez, Manuel; Romanos, Georgios E; Delgado-Ruiz, Rafael Arcesio
2018-05-06
Objectives : The objective of this study was to evaluate new bone formation derived from freshly crushed extracted teeth, grafted immediately in post-extraction sites in an animal model, compared with sites without graft filling, evaluated at 30 and 90 days. Material and Methods : The bilateral premolars P2, P3, P4 and the first mandibular molar were extracted atraumatically from six Beagle dogs. The clean, dry teeth were ground immediately using the Smart Dentin Grinder. The tooth particles obtained were subsequently sieved through a special sorting filter into two compartments, the upper compartment isolating particles over 1200 μm and the lower compartment isolating particles over 300 μm. The crushed teeth were grafted into the post-extraction sockets at P3, P4 and M1 (test group) (larger and smaller post-extraction alveoli), while P2 sites were left unfilled and acted as a control group. Tissue healing and bone formation were evaluated by histological and histomorphometric analysis after 30 and 90 days. Results : At 30 days, bone formation was greater in the test group than the control group ( p < 0.05); less immature bone was observed in the test group (25.71%) than the control group (55.98%). At 90 days, significant differences in bone formation were found, with more in the test group than the control group. No significant differences were found in new bone formation when comparing the small and large alveoli post-extraction sites. Conclusions : Tooth particles from dogs' teeth, grafted immediately after extraction, can be considered a suitable biomaterial for socket preservation.
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
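The sequential Monte Carlo machinery behind ParticleCall can be sketched on a deliberately simple model: a 1D Gaussian random walk stands in for the hidden state (the actual observation model of the sequencing-by-synthesis signal in the paper is far richer), and a bootstrap filter tracks it:

```python
import math
import random

def particle_filter(obs, n=2000, q=0.1, r=0.5):
    """Bootstrap (SIR) particle filter for a toy 1D model:
        state:       x_k = x_{k-1} + N(0, q^2)
        observation: y_k = x_k + N(0, r^2)
    Returns the posterior-mean estimate of x_k at every step."""
    random.seed(0)                               # reproducible sketch
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in obs:
        # propagate each particle through the state transition
        particles = [x + random.gauss(0.0, q) for x in particles]
        # weight particles by the Gaussian observation likelihood
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * xi for wi, xi in zip(w, particles)))
        # stratified resampling to combat weight degeneracy
        cumsum, c = [], 0.0
        for wi in w:
            c += wi
            cumsum.append(c)
        resampled, j = [], 0
        for i in range(n):
            p = (i + random.random()) / n
            while j < n - 1 and cumsum[j] < p:
                j += 1
            resampled.append(particles[j])
        particles = resampled
    return estimates

est = particle_filter([1.0] * 10)   # constant observations at 1.0
```

In a base caller the discrete nucleotide sequence plays the role of the hidden state and the per-cycle fluorescence intensities play the role of the observations; the propagate-weight-resample loop is the same.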
Effects of SiC on Properties of Cu-SiC Metal Matrix Composites
NASA Astrophysics Data System (ADS)
Efe, G. Celebi; Altinsoy, I.; Ipek, M.; Zeytin, S.; Bindal, C.
2011-12-01
This paper focuses on the effects of particle size and distribution on some properties of SiC particle reinforced Cu composites. Copper powder produced by the cementation method was reinforced with SiC particles having 1 and 30 μm particle size and sintered at 700 °C. SEM studies showed that the SiC particles dispersed homogeneously in the copper matrix. The presence of the Cu and SiC components in the composites was verified by XRD analysis. The relative densities of the Cu-SiC composites determined by Archimedes' principle ranged from 96.2% to 90.9% for SiC with 1 μm particle size and from 97.0% to 95.0% for SiC with 30 μm particle size. The measured hardness of the sintered compacts varied from 130 to 155 HVN for SiC having 1 μm particle size and from 188 to 229 HVN for SiC having 30 μm particle size. The maximum electrical conductivity of the test materials was 80.0% IACS (International Annealed Copper Standard) for SiC with 1 μm particle size and 83.0% IACS for SiC with 30 μm particle size.
NASA Astrophysics Data System (ADS)
Bhiftime, E. I.; Guterres, Natalino F. D. S.; Haryono, M. B.; Sulardjaka, Nugroho, Sri
2017-04-01
SiC particle reinforced metal matrix composites (MMCs) produced by the semi-solid stir casting method are becoming popular in recent applications (automotive, aerospace). Stirring in the semi-solid condition is proven to enhance the bond between matrix and reinforcement. The purpose of this study is to investigate the effect of the SiC wt.% and the addition of borax on the mechanical properties of the composites AlSi-Mg-TiB-SiC and AlSi-Mg-TiB-SiC/Borax. Specimens were tested for density, porosity, tensile and impact properties, microstructure and SEM. AlSi is used as a matrix reinforced by SiC in varying percentages (10, 15, 20 wt.%). Borax was added at a 1:4 ratio to the SiC wt.%. The addition of 1.5% TiB gives grain refinement. The semi-solid stir casting method is able to incorporate the SiC particles evenly into the AlSi matrix. The improved composite presented here can be used as a guideline for making a new composite.
Living microorganisms change the information (Shannon) content of a geophysical system.
Tang, Fiona H M; Maggi, Federico
2017-06-12
The detection of microbial colonization in geophysical systems is becoming of interest in various disciplines of Earth and planetary sciences, including microbial ecology, biogeochemistry, geomicrobiology, and astrobiology. Microorganisms are often observed to colonize mineral surfaces, modify the reactivity of minerals either through the attachment of their own biomass or the glueing of mineral particles with their mucilaginous metabolites, and alter both the physical and chemical components of a geophysical system. Here, we hypothesise that microorganisms engineer their habitat, causing a substantial change to the information content embedded in geophysical measures (e.g., particle size and space-filling capacity). After proving this hypothesis, we introduce and test a systematic method that exploits this change in information content to detect microbial colonization in geophysical systems. Effectiveness and robustness of this method are tested using a mineral sediment suspension as a model geophysical system; tests are carried out against 105 experiments conducted with different suspension types (i.e., pure mineral and microbially-colonized) subject to different abiotic conditions, including various nutrient and mineral concentrations, and different background entropy production rates. Results reveal that this method can systematically detect microbial colonization with less than 10% error in geophysical systems with low-entropy background production rate.
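The information-content measure at the heart of the method can be illustrated with the Shannon entropy of a binned particle size distribution. This is a minimal sketch of the idea only; the paper's geophysical measures also include space-filling capacity, and the bin counts below are invented:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a histogram such as a binned
    particle size distribution (PSD)."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# A well-sorted pure-mineral suspension concentrates mass in one size class;
# microbial aggregation spreads it across classes, changing the information
# content embedded in the PSD.
h_sorted = shannon_entropy([100, 0, 0, 0])    # 0 bits
h_spread = shannon_entropy([25, 25, 25, 25])  # 2 bits
```

Detecting colonization then amounts to testing whether the measured entropy departs from the abiotic baseline by more than the background entropy production explains.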
Optimized Non-Obstructive Particle Damping (NOPD) Treatment for Composite Honeycomb Structures
NASA Technical Reports Server (NTRS)
Panossian, H.
2008-01-01
Non-Obstructive Particle Damping (NOPD) technology is a passive vibration damping approach whereby metallic or non-metallic particles in spherical or irregular shapes, of heavy or light consistency, and even liquid particles are placed inside cavities or attached to structures by an appropriate means at strategic locations, to absorb vibration energy. The objective of the work described herein is the development of a design optimization procedure and discussion of test results for such a NOPD treatment on honeycomb (HC) composite structures, based on finite element modeling (FEM) analyses, optimization and tests. Modeling and predictions were performed and tests were carried out to correlate the test data with the FEM. The optimization procedure consisted of defining a global objective function, using finite difference methods, to determine the optimal values of the design variables through quadratic linear programming. The optimization process was carried out by targeting the highest dynamic displacements of several vibration modes of the structure and finding an optimal treatment configuration that will minimize them. An optimal design was thus derived and laboratory tests were conducted to evaluate its performance under different vibration environments. Three honeycomb composite beams with Nomex core and aluminum face sheets (empty/untreated, uniformly treated with NOPD, and optimally treated with NOPD according to the analytically predicted optimal design configuration) were tested in the laboratory. It is shown that the beam with the optimal treatment has the lowest response amplitude. Results are presented from modal vibration tests and FEM predictions of the modal characteristics of honeycomb beams with no treatment, 50% uniform treatment and the optimal NOPD treatment design configuration, together with verification against test data.
Conservative, special-relativistic smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Rosswog, Stephan
2010-11-01
We present and test a new, special-relativistic formulation of smoothed particle hydrodynamics (SPH). Our approach benefits from several improvements with respect to earlier relativistic SPH formulations. It is self-consistently derived from the Lagrangian of an ideal fluid and accounts for the terms that stem from non-constant smoothing lengths, usually called “grad-h terms”. In our approach, we evolve the canonical momentum and the canonical energy per baryon and thus circumvent some of the problems that have plagued earlier formulations of relativistic SPH. We further use a much improved artificial viscosity prescription which uses the extreme local eigenvalues of the Euler equations and triggers selectively on (a) shocks and (b) velocity noise. The shock trigger accurately monitors the relative density slope and uses it to fine-tune the amount of artificial viscosity that is applied. This procedure substantially sharpens shock fronts while still avoiding post-shock noise. If not triggered, the viscosity parameter of each particle decays to zero. Neither of these viscosity triggers is specific to special relativity; both could also be applied in Newtonian SPH. The performance of the new scheme is explored in a large variety of benchmark tests where it delivers excellent results. Generally, the grad-h terms deliver minor, though worthwhile, improvements. As expected for a Lagrangian method, it performs close to perfect in supersonic advection tests, but also in strong relativistic shocks, usually considered a particular challenge for SPH, the method yields convincing results. For example, due to its perfect conservation properties, it is able to handle Lorentz factors as large as γ = 50,000 in the so-called wall shock test. Moreover, we find convincing results in a rarely shown, but challenging test that involves so-called relativistic simple waves and also in multi-dimensional shock tube tests.
NASA Astrophysics Data System (ADS)
Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc
2011-06-01
This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth cost functions, including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimization approach that performs well on ED problems. It can handle premature convergence by re-initializing the velocity whenever particles stagnate in the search space. TVAC is included to properly control both local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC is effective in finding higher quality solutions for non-smooth ED problems than many other methods.
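A minimal sketch of the two mechanisms named above, velocity re-initialization for stagnated particles and time-varying acceleration coefficients, on a toy one-dimensional objective. The parameter values follow common HPSO-TVAC practice and are assumptions, not the paper's settings, and a real ED objective would be the non-smooth fuel-cost function rather than a quadratic:

```python
import random

def spso_tvac(f, bounds, n=30, iters=200, c_i=2.5, c_f=0.5):
    """Self-organizing hierarchical PSO with TVAC (sketch); minimises f."""
    random.seed(1)
    lo, hi = bounds
    pos = [random.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest, pbest_f = pos[:], [f(x) for x in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g], pbest_f[g]
    vmax = 0.1 * (hi - lo)
    for t in range(iters):
        # cognitive coefficient decreases, social increases over time (TVAC)
        c1 = c_i + (c_f - c_i) * t / iters
        c2 = c_f + (c_i - c_f) * t / iters
        for i in range(n):
            # hierarchical update: no inertia term, attraction only
            vel[i] = (c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            if vel[i] == 0.0:
                # stagnated particle: re-initialise its velocity
                vel[i] = random.uniform(-vmax, vmax)
            vel[i] = max(-vmax, min(vmax, vel[i]))
            pos[i] = max(lo, min(hi, pos[i] + vel[i]))
            fx = f(pos[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = pos[i], fx
    return gbest, gbest_f

x_best, f_best = spso_tvac(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```

The stagnation test fires exactly when a particle sits on both its personal best and the global best, which is where standard PSO loses diversity.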
High Pressure Quick Disconnect Particle Impact Tests
NASA Technical Reports Server (NTRS)
Peralta, Stephen; Rosales, Keisa; Smith, Sarah R.; Stoltzfus, Joel M.
2007-01-01
To determine whether there is a particle impact ignition hazard in the quick disconnects (QDs) in the Environmental Control and Life Support System (ECLSS) on the International Space Station (ISS), NASA Johnson Space Center requested White Sands Test Facility (WSTF) to perform particle impact testing. Testing was performed from November 2006 through May 2007 and included standard supersonic and subsonic particle impact tests on 15-5 PH stainless steel, as well as tests performed on a QD simulator. This report summarizes the particle impact tests completed at WSTF. Although there was an ignition in Test Series 4, it was determined that the ignition was caused by the presence of a machining imperfection. The sum of all the test results indicates that there is no particle impact ignition hazard in the ISS ECLSS QDs.
Kalman and particle filtering methods for full vehicle and tyre identification
NASA Astrophysics Data System (ADS)
Bogdanski, Karol; Best, Matthew C.
2018-05-01
This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.
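The parameter-identification idea, treating an unknown model parameter as a slowly varying state and letting a Kalman filter estimate it from input-output data, reduces in miniature to a scalar filter. This sketch identifies a single gain from synthetic data and illustrates only the approach, not the vehicle and tyre model used in the paper:

```python
def kalman_identify(us, ys, q=1e-6, r=0.1):
    """Scalar Kalman filter identifying a constant gain a in y = a*u + noise.

    The parameter is modelled as a random-walk state with process noise q;
    r is the measurement noise variance."""
    a, P = 0.0, 1.0              # initial estimate and covariance
    for u, y in zip(us, ys):
        P += q                   # time update: parameter random walk
        S = u * P * u + r        # innovation variance
        K = P * u / S            # Kalman gain
        a += K * (y - a * u)     # measurement update
        P *= 1.0 - K * u         # covariance update
    return a

us = [0.5, 1.0, -1.0, 2.0, 1.5, -0.5, 1.0, 2.0]   # excitation inputs
ys = [2.0 * u for u in us]       # noise-free data, true gain a = 2.0
a_hat = kalman_identify(us, ys)
```

For a full vehicle the state vector is augmented with all unknown tyre and chassis parameters; the unscented and particle-filter variants compared in the paper handle the resulting nonlinearity in different ways.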
NASA Astrophysics Data System (ADS)
Yang, X.; Xiao, C.; Chen, Y.; Xu, T.; Yu, Y.; Xu, M.; Wang, L.; Wang, X.; Lin, C.
2018-03-01
Recently, a new diagnostic method, the Laser-driven Ion-beam Trace Probe (LITP), has been proposed to reconstruct 2D profiles of the poloidal magnetic field (Bp) and radial electric field (Er) in tokamak devices. A linear assumption and a test particle model were used in those reconstructions. In some toroidal devices, such as the spherical tokamak and the Reversed Field Pinch (RFP), Bp is not small enough to meet the linear assumption. In those cases, the error of reconstruction increases quickly when Bp is larger than 10% of the toroidal magnetic field (Bt), and the previous test particle model may cause large errors in the tomography process. Here a nonlinear reconstruction method is proposed for those cases. Preliminary numerical results show that LITP could be applied not only in tokamak devices, but also in other toroidal devices, such as the spherical tokamak, the RFP, etc.
Quartz crystal microbalance as a sensing active element for rupture scanning within frequency band.
Dultsev, F N; Kolosovsky, E A
2011-02-14
A new method based on the use of a quartz crystal microbalance (QCM) as an active sensing element is developed, optimized and tested in a model system to measure the rupture force and deduce the size distribution of nanoparticles. As suggested by model predictions, the QCM is shaped as a strip. The ratio of rupture signals at the second and third harmonics versus the geometric position of a body on the QCM surface is investigated theoretically. Recommendations concerning the use of the method for measuring the nanoparticle size distribution are presented. It is shown experimentally for an ensemble of test particles with a characteristic size within 20-30 nm that the proposed method allows one to determine the particle size distribution. On the basis of the position and value of the measured rupture signal, a histogram of particle size distribution and the percentage of each size fraction were determined. The main merits of the bond-rupture method are its rapid response, simplicity and the ability to discriminate between specific and non-specific interactions. The method is highly sensitive with respect to mass (the sensitivity is generally dependent on the chemical nature of receptor and analyte and may reach 8×10⁻¹⁴ g mm⁻²) and applicable to measuring rupture forces either for weak bonds, for example hydrogen bonds, or for strong covalent bonds (10⁻¹¹-10⁻⁹ N). This procedure may become a good alternative to existing methods, such as AFM or optical methods of detecting biological objects, and win a broad range of applications both in laboratory research and in biosensing for various purposes. Possible applications include medicine, diagnostics, and environmental or agricultural monitoring.
2014-06-19
product used as a diesel product for ground use (1). Free water contamination (droplets) may appear as fine droplets or slugs of water in the fuel...methods and test procedures for the calibration and use of automatic particle counters. The transition of this technology to the fuel industry is...
Latex samples for RAMSES electrophoresis experiment on IML 2
NASA Technical Reports Server (NTRS)
Seaman, Geoffrey V. F.; Knox, Robert J.
1994-01-01
The objectives of these reported studies were to provide ground-based support services for the flight experiment team for the RAMSES experiment to be flown aboard IML-2. The specific areas of support included consultation on the performance of particle-based electrophoresis studies, development of methods for the preparation of suitable samples for the flight hardware, the screening of particles to obtain suitable candidates for the flight experiment, and the electrophoretic characterization of sample particle preparations. The first phases of these studies were performed under this contract, while the follow-on work was performed under grant number NAG8 1081, 'Preparation and Characterization of Latex Samples for RAMSES Experiment on IML 2.' During this first phase of the experiment the following benchmarks were achieved: Methods were tested for the concentration and resuspension of latex samples in the greater than 0.4 micron diameter range to provide moderately high solids content samples free of particle aggregation, which interfered with the normal functioning of the RAMSES hardware. Various candidate latex preparations were screened and two candidate types of latex were identified for use in the flight experiments, carboxylate modified latex (CML) and acrylic acid-acrylamide modified latex (AAM). These latexes have relatively hydrophilic surfaces, are not prone to aggregate, and display sufficiently low electrophoretic mobilities in the flight buffer so that they can be used to make mixtures to test the resolving power of the flight hardware.
Image pre-processing method for near-wall PIV measurements over moving curved interfaces
NASA Astrophysics Data System (ADS)
Jia, L. C.; Zhu, Y. D.; Jia, Y. X.; Yuan, H. J.; Lee, C. B.
2017-03-01
PIV measurements near a moving interface are always difficult. This paper presents a PIV image pre-processing method that returns high spatial resolution velocity profiles near the interface. Instead of re-shaping or re-orientating the interrogation windows, interface tracking and an image transformation are used to stretch the particle image strips near a curved interface into rectangles. Then the adaptive structured interrogation windows can be arranged at specified distances from the interface. Synthetic particles are also added into the solid region to minimize interfacial effects and to restrict particles on both sides of the interface. Since a high spatial resolution is only required in high velocity gradient region, adaptive meshing and stretching of the image strips in the normal direction is used to improve the cross-correlation signal-to-noise ratio (SN) by reducing the velocity difference and the particle image distortion within the interrogation window. A two dimensional Gaussian fit is used to compensate for the effects of stretching particle images. The working hypothesis is that fluid motion near the interface is ‘quasi-tangential flow’, which is reasonable in most fluid-structure interaction scenarios. The method was validated against the window deformation iterative multi-grid scheme (WIDIM) using synthetic image pairs with different velocity profiles. The method was tested for boundary layer measurements of a supersonic turbulent boundary layer on a flat plate, near a rotating blade and near a flexible flapping flag. This image pre-processing method provides higher spatial resolution than conventional WIDIM and good robustness for measuring velocity profiles near moving interfaces.
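PIV cross-correlation peaks are conventionally located to sub-pixel accuracy with a three-point Gaussian fit. A 1D version of that standard estimator (related to, but simpler than, the two-dimensional Gaussian fit the authors use to compensate for image stretching) is exact whenever the correlation peak is locally Gaussian:

```python
import math

def gaussian_subpixel_peak(c_m, c_0, c_p):
    """Three-point Gaussian sub-pixel peak fit used in PIV correlation
    analysis: given the correlation samples one pixel left of, at, and one
    pixel right of the integer peak, return the peak offset in pixels."""
    lm, l0, lp = math.log(c_m), math.log(c_0), math.log(c_p)
    return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

# Samples of exp(-(x - 0.3)^2) at x = -1, 0, 1 recover the 0.3 px offset
vals = [math.exp(-(x - 0.3) ** 2) for x in (-1, 0, 1)]
offset = gaussian_subpixel_peak(*vals)
```

Stretching the image strips, as proposed in the paper, keeps the correlation peak narrow and near-Gaussian in the wall-normal direction, which is exactly the regime in which this estimator is accurate.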
PaDe - The particle detection program
NASA Astrophysics Data System (ADS)
Ott, T.; Drolshagen, E.; Koschny, D.; Poppe, B.
2016-01-01
This paper introduces the Particle Detection program PaDe. Its aim is to analyze dust particles in the coma of the Jupiter-family comet 67P/Churyumov-Gerasimenko which were recorded by the two OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras onboard the ESA spacecraft Rosetta, see e.g. Keller et al. (2007). In addition to working with the Rosetta data, the code was modified to work with images of meteors. It was tested with data recorded by the ICCs (Intensified CCD Cameras) of the CILBO system (Canary Island Long-Baseline Observatory) on the Canary Islands; compare Koschny et al. (2013). This paper presents a new method for the position determination of observed meteors. The PaDe program was written in Python 3.4. Its original purpose is to find the trails of dust particles in the OSIRIS images; for that it determines the positions where a trail starts and ends. These were found by fitting the so-called error function (Andrews, 1998) to the two edges of the intensity profiles. The positions where the intensities fall to half maximum were taken as the beginning and end of the particle. In the case of meteors, this method can be applied to find the leading edge of the meteor. The proposed method has the potential to increase the accuracy of the position determination of meteors dramatically. Unlike the standard method of finding the photometric center, our method is not influenced by any trails or wakes behind the meteor. This paper presents first results of this ongoing work.
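The edge-finding step described above can be sketched as a least-squares fit of an error-function profile to a 1D intensity trace; for this model the half-maximum crossing is exactly the fitted edge location. A minimal sketch only (the one-sided edge model and all names are assumptions, not PaDe's actual implementation):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, amp, x0, sigma):
    """An ideal step blurred by a Gaussian PSF: an error-function profile."""
    return 0.5 * amp * (1.0 + erf((x - x0) / (sigma * np.sqrt(2.0))))

def half_max_position(x, intensity):
    """Fit the erf model and return the position where the intensity crosses
    half maximum, which for this model is the fitted x0."""
    # crude initial guesses: full amplitude, first half-max crossing, unit width
    p0 = [intensity.max(),
          x[np.argmax(intensity > 0.5 * intensity.max())],
          1.0]
    popt, _ = curve_fit(edge_model, x, intensity, p0=p0)
    return popt[1]
```

Fitting both edges of a profile this way gives sub-pixel start and end positions for a trail, independent of any wake behind it.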
NASA Astrophysics Data System (ADS)
Widiyandari, Hendri; Ayu Ketut Umiati, Ngurah; Dwi Herdianti, Rizki
2018-05-01
Advanced oxidation processes (AOP) using photocatalysis constitute a promising technology for the treatment of wastewaters containing organic compounds that are not easily removed. Zinc oxide (ZnO) is one of the efficient photocatalyst materials. This research reports the synthesis of ZnO fine particles from zinc nitrate hexahydrate using the Flame Spray Pyrolysis (FSP) method, in which oxygen (O2) gas was used as the oxidizer and LPG (liquefied petroleum gas) as the fuel. The effect of the O2 flow rate during ZnO particle fabrication on the microstructure, optical and photocatalytic properties is systematically discussed. The photocatalytic activity of the ZnO was tested for the degradation of amaranth dye with an initial concentration of 10 ppm under irradiation from a solar simulator. The rate of decrease in amaranth concentration was measured using a UV-Visible spectrophotometer. The ZnO synthesized by FSP has a hexagonal crystalline structure. Scanning electron microscope images showed that the ZnO had a spherical form and was a mixture of solid and hollow particles. The optimum condition for amaranth degradation was shown by the ZnO produced at a flow rate of 1.5 L/min, which was able to degrade the amaranth dye by up to 95.3% after 75 minutes of irradiation.
Homogeneous Biosensing Based on Magnetic Particle Labels
Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo-Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Guenther, Annegret; Tschöpe, Andreas; Schotter, Joerg
2016-01-01
The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease-of-use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise-ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation. PMID:27275824
Lagrangian transported MDF methods for compressible high speed flows
NASA Astrophysics Data System (ADS)
Gerlinger, Peter
2017-06-01
This paper deals with the application of thermochemical Lagrangian MDF (mass density function) methods to compressible sub- and supersonic RANS (Reynolds-Averaged Navier-Stokes) simulations. A new approach to treat molecular transport is presented. This technique on the one hand ensures numerical stability of the particle solver in laminar regions of the flow field (e.g. in the viscous sublayer) and on the other hand takes differential diffusion into account. It is shown in a detailed analysis that the new method correctly predicts first- and second-order moments on the basis of conventional modeling approaches. Moreover, a number of challenges for MDF particle methods in high-speed flows are discussed, e.g. high-cell-aspect-ratio grids close to solid walls, wall heat transfer, shock resolution, and problems from statistical noise which may cause artificial shock systems in supersonic flows. A Mach 2 supersonic mixing channel with multiple shock reflections and a model rocket combustor simulation demonstrate the applicability of this technique to practical problems. Both test cases are simulated successfully for the first time with a hybrid finite-volume (FV)/Lagrangian particle solver (PS).
Preparation and optical properties of iron-modified titanium dioxide obtained by sol-gel method
NASA Astrophysics Data System (ADS)
Hreniak, Agnieszka; Gryzło, Katarzyna; Boharewicz, Bartosz; Sikora, Andrzej; Chmielowiec, Jacek; Iwan, Agnieszka
2015-08-01
In this paper, twelve TiO2:Fe powders prepared by the sol-gel method were analyzed, taking into consideration the kind of iron compound applied. Titanium(IV) isopropoxide (TIPO) was used as the precursor, while Fe(NO3)3 or FeCl3 was tested as the iron source. Fe-doped TiO2 was obtained using two methods of synthesis in which different amounts of iron were added (1, 5 or 10% w/w). The size of the obtained TiO2:Fe particles depends on the iron compound applied and was found to be in the range 80-300 nm, as confirmed by SEM. The TiO2:Fe particles were additionally investigated by the dynamic light scattering (DLS) method, and their UV-vis absorption and zeta potential were analyzed. Selected powders were further investigated by magnetic force microscopy (MFM) and X-ray diffraction techniques. The photocatalytic ability of the Fe-doped TiO2 powders was evaluated by means of a cholesteryl hemisuccinate (CHOL) degradation experiment conducted under 30 min of simulated solar light irradiation.
Laser Synthesis of Supported Catalysts for Carbon Nanotubes
NASA Technical Reports Server (NTRS)
VanderWal, Randall L.; Ticich, Thomas M.; Sherry, Leif J.; Hall, Lee J.; Schubert, Kathy (Technical Monitor)
2003-01-01
Four methods of laser-assisted catalyst generation for carbon nanotube (CNT) synthesis have been tested. These include pulsed laser transfer (PLT), photolytic deposition (PLD), photothermal deposition (PTD) and laser ablation deposition (LABD). Results from each method are compared based on CNT yield, morphology and structure. Under the conditions tested, PLT was the easiest method to implement, required the least time and also yielded the best patterning. The photolytic and photothermal methods required organometallics, extended processing time and partial vacuums; the latter two requirements also held for the ablation deposition approach. In addition to control of the substrate position, controlled deposition duration was necessary to achieve an active catalyst layer. Although all methods were tested on both metal and quartz substrates, only the quartz substrates proved to be inactive towards the deposited catalyst particles.
A Study of the Effects of Relative Humidity on Small Particle Adhesion to Surfaces
NASA Technical Reports Server (NTRS)
Whitfield, W. J.; David, T.
1971-01-01
Ambient dust ranging in size from less than one micron up to 140 microns was used as test particles. Relative humidities of 33% to 100% were used to condition test surfaces after loading with the test particles. A 20 psi nitrogen blowoff was used as the removal mechanism to test for particle adhesion. Particles were counted before and after blowoff to determine retention characteristics. Particle adhesion increased drastically as relative humidity increased above 50%. The greatest adhesion changes occurred within the first hour of conditioning time. Data are presented for total particle adhesion, for particles 10 microns and larger, and 50 microns and larger.
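The retention characterization above reduces to before/after count ratios per size class; a trivial sketch with hypothetical counts and size-class labels (the report tabulates totals and the 10-micron-and-larger and 50-micron-and-larger classes):

```python
def retention(counts_before, counts_after):
    """Fraction of particles retained after the nitrogen blowoff,
    computed per size class from before/after particle counts
    (hypothetical input: {size_class: count} dictionaries)."""
    return {size: counts_after[size] / counts_before[size]
            for size in counts_before}
```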
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Yanlin; Wang, Mi; Yao, Jun
2014-04-11
Electrical impedance tomography (EIT) is one of the process tomography techniques that provide on-line, non-invasive imaging for multiphase flow measurement. With EIT measurements, images of the impedance real part, impedance imaginary part, phase angle, and magnitude can be obtained. However, most applications of EIT in the process industries rely on the conductivity difference between the two phases in a fluid to obtain concentration profiles; it is not common to use the imaginary part or phase angle, due to the dominant change in conductivity or the complication of using other impedance information. In a solid-liquid two-phase system involving nano- or submicro-particles, characterisation of the particles (e.g. particle size and concentration) has to rely on the measurement of the impedance phase angle or imaginary part. Particles in a solution usually have an electrical double layer associated with their surfaces and can form an induced electrical dipole moment due to the polarization of the electrical double layer under the influence of an alternating electric field. Similar to EIT, electrical impedance spectroscopy (EIS) measurement can record the electrical impedance data, including the impedance real part, imaginary part and phase angle (θ), which are caused by the polarization of the electrical double layer. These impedance data are related to particle characteristics, e.g. particle size and particle and ionic concentrations in the aqueous medium; the EIS method therefore provides a capability for characterising particles in suspensions. Electrical impedance tomography based on EIS measurement, namely electrical impedance tomography spectroscopy (EITS), could image the spatial distribution of particle characteristics. In this paper, a new method for the characterisation of particles in suspensions, including the test set-up and data analysis, is developed through an experimental approach. 
The experimental results on tomographic imaging of colloidal particles based on EIS measurement using an 8-electrode sensor are reported. The results demonstrate the potential of, as well as the challenges in, the use of EIS and EITS for the characterisation of particles in suspension.
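The four quantities listed above (real part, imaginary part, magnitude, phase angle) all derive from the same complex impedance measurement; a minimal sketch of that decomposition (function and key names are assumptions):

```python
import numpy as np

def impedance_components(z):
    """Split complex impedance measurements into the four quantities an
    EIT/EIS system can image: real part, imaginary part, magnitude,
    and phase angle theta."""
    z = np.asarray(z, dtype=complex)
    return {
        're': z.real,
        'im': z.imag,
        'mag': np.abs(z),
        'theta': np.angle(z),   # radians; reflects double-layer polarization
    }
```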
Effect of Particle Size Distribution on Wall Heat Flux in Pulverized-Coal Furnaces and Boilers
NASA Astrophysics Data System (ADS)
Lu, Jun
A mathematical model of combustion and heat transfer within a cylindrical enclosure firing pulverized coal has been developed and tested against two sets of measured data (one from the 1993 WSU/DECO pilot test, the other from the International Flame Research Foundation 1964 test (Beer, 1964)) and one independent code, FURN3D, from the Argonne National Laboratory (Ahluwalia and Im, 1992). The model, called PILC, assumes that the system is a sequence of many well-stirred reactors. A char burnout model combining diffusion to the particle surface, pore diffusion, and surface reaction is employed for predicting the char reaction, heat release, and evolution of char. The ash formation model relates the ash particle size distribution to the particle size distribution of the pulverized coal. The optical constants of char and ash particles are calculated from dispersion relations derived from reflectivity, transmissivity and extinction measurements. Mie theory is applied to determine the extinction and scattering coefficients. The radiation heat transfer is modeled using the virtual zone method, which leads to a set of simultaneous nonlinear algebraic equations for the temperature field within the furnace and on its walls; this enables the heat fluxes to be evaluated. In comparisons with the experimental data and the independent code, the model is successful in predicting gas temperature, wall temperature, and wall radiative flux. When coal of greater fineness is burnt, the particle size of the pulverized coal has a consistent influence on combustion performance: the temperature peak is higher and nearer to the burner, the radiation flux to the combustor wall increases, and the absorption and scattering coefficients of the combustion products also increase. The effect of coal particle size distribution on the absorption and scattering coefficients and the wall heat flux is significant. 
But there is only a small effect on gas temperature and fuel fraction burned; it is speculated that this may be a characteristic special to the test combustor used.
Materials Combustion Testing and Combustion Product Sensor Evaluations in FY12
NASA Technical Reports Server (NTRS)
Meyer, Marit Elisabeth; Mudgett, Paul D.; Hornung, Steven D.; McClure, Mark B.; Pilgrim, Jeffrey S.; Bryg, Victoria; Makel, Darby; Ruff, Gary A.; Hunter, Gary
2013-01-01
NASA Centers continue to collaborate to characterize the chemical species and smoke particles generated by the combustion of current space-rated non-metallic materials including fluoropolymers. This paper describes the results of tests conducted February through September 2012 to identify optimal chemical markers both for augmenting particle-based fire detection methods and for monitoring the post-fire cleanup phase in human spacecraft. These studies follow up on testing conducted in August 2010 and reported at ICES 2011. The tests were conducted at the NASA White Sands Test Facility in a custom glove box designed for burning fractional gram quantities of materials under varying heating profiles. The 623 L chamber was heavily instrumented to quantify organics (gas chromatography/mass spectrometry), inorganics by water extraction followed by ion chromatography, and select species by various individual commercially-available sensors. Evaluating new technologies for measuring carbon monoxide, hydrogen cyanide, hydrogen fluoride, hydrogen chloride and other species of interest was a key objective of the test. Some of these sensors were located inside the glovebox near the fire source to avoid losses through the sampling lines; the rest were located just outside the glovebox. Instruments for smoke particle characterization included a Tapered Element Oscillating Microbalance Personal Dust Monitor (TEOM PDM) and a TSI Dust Trak DRX to measure particle mass concentration, a TSI PTrak for number concentration and a thermal precipitator for collection of particles for microscopic analysis. Materials studied included Nomex®, M22759 wire insulation, granulated circuit board, polyvinyl chloride (PVC), Polytetrafluoroethylene (PTFE), Kapton®, and mixtures of PTFE and Kapton®. Furnace temperatures ranged from 340 to 640 °C, focusing on the smoldering regime. 
Of particular interest in these tests was confirming burn repeatability and production of acid gases with different fuel mixture compositions, as well as the dependence of aerosol concentrations on temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.
Safety tests were conducted on fuel compacts from AGR-1, the first irradiation experiment of the Advanced Gas Reactor (AGR) Fuel Development and Qualification program, at temperatures ranging from 1600 to 1800 °C to determine fission product release at temperatures that bound reactor accident conditions. The PARFUME (PARticle FUel ModEl) code was used to predict the release of the fission products silver, cesium, strontium, and krypton from fuel compacts containing tristructural isotropic (TRISO) coated particles during 15 of these safety tests. Comparisons between PARFUME predictions and post-irradiation examination results of the safety tests were conducted on two types of AGR-1 compacts: compacts containing only intact particles and compacts containing one or more particles whose SiC layers failed during safety testing. In both cases, PARFUME globally over-predicted the experimental release fractions by several orders of magnitude: more than three (intact) and two (failed SiC) orders of magnitude for silver, more than three and up to two orders of magnitude for strontium, and up to two and more than one order of magnitude for krypton. The release of cesium from intact particles was also largely over-predicted (by up to five orders of magnitude), but its release from particles with failed SiC was only over-predicted by a factor of about 3. These over-predictions can be largely attributed to an over-estimation of the diffusivities used in the modeling of fission product transport in TRISO-coated particles. The integral-release nature of the data makes it difficult to estimate the individual over-estimations in the kernel or each coating layer. Nevertheless, a tentative assessment of correction factors to these diffusivities was performed to enable a better match between the modeling predictions and the safety testing results. The method could only be successfully applied to silver and cesium. 
In the case of strontium, correction factors could not be assessed because potential release during the safety tests could not be distinguished from matrix content released during irradiation. Furthermore, in the case of krypton, all the coating layers are partly retentive, and the available data did not allow the level of retention in individual layers to be determined, preventing derivation of any correction factors.
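As a rough illustration of how a diffusivity correction factor might be backed out of an over-prediction: for early-time Fickian release from a single medium, fractional release scales as the square root of D·t, so matching a measured to a predicted release fraction suggests scaling D by the squared ratio. This is only a first-order sketch under that stated assumption; PARFUME's multi-layer kernel-plus-coatings transport is more involved and is not captured here:

```python
def diffusivity_correction(f_measured, f_predicted):
    """First-order correction factor for an effective diffusivity, using the
    early-time Fickian result that fractional release scales as sqrt(D * t),
    so f_meas / f_pred = sqrt(D_corr / D_model).  A single-medium sketch
    only; it ignores transport through the individual TRISO coating layers."""
    return (f_measured / f_predicted) ** 2
```

For example, a release fraction over-predicted by a factor of 10 would, under this scaling, imply an effective diffusivity about 100 times smaller.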
Deering, Cassandra E; Tadjiki, Soheyl; Assemi, Shoeleh; Miller, Jan D; Yost, Garold S; Veranth, John M
2008-01-01
A novel methodology to detect unlabeled inorganic nanoparticles was experimentally demonstrated using a mixture of nano-sized (70 nm) and submicron (250 nm) silicon dioxide particles added to mammalian tissue. The size and concentration of environmentally relevant inorganic particles in a tissue sample can be determined by a procedure consisting of matrix digestion, particle recovery by centrifugation, size separation by sedimentation field-flow fractionation (SdFFF), and detection by light scattering. Background: Laboratory nanoparticles that have been labeled by fluorescence, radioactivity, or rare elements have provided important information regarding nanoparticle uptake and translocation, but most nanomaterials that are commercially produced for industrial and consumer applications do not contain a specific label. Methods: Both nitric acid digestion and enzyme digestion were tested with liver and lung tissue as well as with cultured cells. Tissue processing with a mixture of protease enzymes is preferred because it is applicable to a wide range of particle compositions. Samples were visualized via fluorescence microscopy and transmission electron microscopy to validate the SdFFF results. We describe in detail the tissue preparation procedures and discuss method sensitivity compared to reported levels of nanoparticles in vivo. Conclusion: Tissue digestion and SdFFF complement existing techniques by precisely identifying unlabeled metal oxide nanoparticles and unambiguously distinguishing nanoparticles (diameter<100 nm) from both soluble compounds and from larger particles of the same nominal elemental composition. This is an exciting capability that can facilitate epidemiological and toxicological research on natural and manufactured nanomaterials. PMID:19055780
A Meshless Method for Magnetohydrodynamics and Applications to Protoplanetary Disks
NASA Astrophysics Data System (ADS)
McNally, Colin P.
2012-08-01
This thesis presents an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. The code has been parallelized by adapting the framework provided by Gadget-2. A set of standard test problems, including linear MHD waves with amplitudes of one part in a million, magnetized shock tubes, and Kelvin-Helmholtz instabilities, is presented. We also demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. We provide a rigorous methodology for verifying a numerical method on two-dimensional Kelvin-Helmholtz instability. The test problem was run in the Pencil Code, Athena, Enzo, NDSPHMHD, and Phurbas. A strict comparison, judgment, or ranking between codes is beyond the scope of this work, although this work provides the mathematical framework needed for such a study. Nonetheless, the way the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity, yet it still shows the poor performance of Smoothed Particle Hydrodynamics. We then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in Smoothed Particle Hydrodynamics interpolation. In astrophysical magnetohydrodynamics (MHD) and electrodynamics simulations, numerically enforcing the divergence-free constraint on the magnetic field has been difficult. We observe that for point-based discretizations, as used in finite-difference-type and pseudo-spectral methods, the divergence-free constraint can be satisfied entirely by the choice of interpolation used to define the derivatives of the magnetic field. As an example we demonstrate a new class of finite-difference-type derivative operators on a regular grid which has the divergence-free property. This principle clarifies the nature of magnetic monopole errors. The principles and techniques demonstrated in this chapter are particularly useful for the magnetic field, but can be applied to any vector field. Finally, we examine global zoom-in simulations of turbulent magnetorotationally unstable flow. We extract and analyze the high-current regions produced in the turbulent flow. Basic parameters of these regions are abstracted, and we build one-dimensional models including non-ideal MHD and radiative transfer. For sufficiently high temperatures, an instability resulting from the temperature dependence of the Ohmic resistivity is found. This instability concentrates current sheets, resulting in the possibility of rapid heating from temperatures on the order of 600 Kelvin to 2000 Kelvin in magnetorotationally turbulent regions of protoplanetary disks. This is a possible local mechanism for the melting of chondrules and the formation of other high-temperature materials in protoplanetary disks.
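The Moving Least Squares interpolation underlying the method can be sketched in 1D: fit a local polynomial by Gaussian-weighted least squares around the evaluation point, then read off the value and derivative from the coefficients. A sketch only (the thesis uses third-order polynomials on scattered 3D points; the function names and weight scale here are assumptions):

```python
import numpy as np

def mls_eval(x_eval, x_p, f_p, h=1.0, degree=2):
    """Moving Least Squares estimate of f and f' at x_eval from scattered
    particle samples (x_p, f_p): fit a local polynomial by weighted least
    squares with a Gaussian weight of scale h, centred on x_eval."""
    w = np.exp(-((x_p - x_eval) / h) ** 2)
    # Vandermonde basis centred on the evaluation point, so coef[0] is the
    # value and coef[1] the first derivative there
    A = np.vander(x_p - x_eval, degree + 1, increasing=True)
    W = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * W[:, None], f_p * W, rcond=None)
    return coef[0], coef[1]
```

Because the basis is centred on the evaluation point, polynomials up to the chosen degree are reproduced exactly, which is the zeroth- and higher-order consistency that standard SPH interpolation lacks.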
Adequacy of laser diffraction for soil particle size analysis
Fisher, Peter; Aumann, Colin; Chia, Kohleth; O'Halloran, Nick; Chandra, Subhash
2017-01-01
Sedimentation has been a standard methodology for particle size analysis since the early 1900s. In recent years laser diffraction has begun to replace sedimentation as the preferred technique in some industries, such as marine sediment analysis. However, for the particle size analysis of soils, which have a diverse range of both particle size and shape, laser diffraction still requires evaluation of its reliability. In this study, the sedimentation-based sieve plummet balance method and the laser diffraction method were used to measure the particle size distribution of 22 soil samples representing four contrasting Australian Soil Orders. Initially, a precise wet riffling methodology was developed, capable of obtaining representative samples within the recommended obscuration range for laser diffraction. It was found that repeatable results were obtained even if measurements were made at the extreme ends of the manufacturer’s recommended obscuration range. Results from statistical analysis suggested that the use of sample pretreatment to remove soil organic carbon (and possible traces of calcium-carbonate content) made minor differences to the laser diffraction particle size distributions compared to no pretreatment. These differences were found to be marginally statistically significant in the Podosol topsoil and Vertosol subsoil. There are well known reasons why sedimentation methods may be considered to ‘overestimate’ plate-like clay particles, while laser diffraction will ‘underestimate’ the proportion of clay particles. In this study we used Lin’s concordance correlation coefficient to determine the equivalence of laser diffraction and sieve plummet balance results. The results suggested that the laser diffraction equivalent thresholds corresponding to the sieve plummet balance cumulative particle sizes of < 2 μm, < 20 μm, and < 200 μm, were < 9 μm, < 26 μm, and < 275 μm respectively. 
The many advantages of laser diffraction for soil particle size analysis, and the empirical results of this study, suggest that deployment of laser diffraction as a standard test procedure can provide reliable results, provided consistent sample preparation is used. PMID:28472043
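Lin's concordance correlation coefficient used in this comparison has a simple closed form; a minimal sketch (using population variances, as in Lin's original definition):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between two
    measurement methods, penalising deviation from the 45-degree line
    (here it would compare laser diffraction against the plummet balance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # population (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()  # population covariance
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, the coefficient drops below 1 for any constant offset or scale difference between methods, which is why it suits method-equivalence studies.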
Application of real-time radiation dosimetry using a new silicon LET sensor
NASA Technical Reports Server (NTRS)
Doke, T.; Hayashi, T.; Kikuchi, J.; Nagaoka, S.; Nakano, T.; Sakaguchi, T.; Terasawa, K.; Badhwar, G. D.
1999-01-01
A new type of real-time radiation monitoring device, RRMD-III, consisting of three double-sided silicon strip detectors (DSSDs), has been developed and tested on board the Space Shuttle mission STS-84. The test succeeded in measuring the linear energy transfer (LET) distribution over the range of 0.2 keV/micrometer to 600 keV/micrometer for 178 h. The Shuttle cruised at an altitude of 300 to 400 km and an inclination angle of 51.6 degrees for 221.3 h, which is equivalent to the International Space Station orbit. The measured LET distribution was analyzed by separating it into contributions from galactic cosmic ray (GCR) particles and particles trapped in the South Atlantic Anomaly (SAA) region. The result shows that the contribution to dose equivalent from GCR particles is almost equal to that from trapped particles. The total absorbed dose rate during the mission was 0.611 mGy/day; the effective quality factor, 1.64; and the dose equivalent rate, 0.998 mSv/day. The average absorbed dose rates are 0.158 mGy/min for GCR particles and 3.67 mGy/min for trapped particles. The effective quality factors are 2.48 for GCR particles and 1.19 for trapped particles. The absorbed doses obtained by the RRMD-III were compared with those from a conventional method using TLDs (Mg2SiO4) placed around the RRMD-III; the TLDs showed a lower efficiency, registering just 58% of the absorbed dose measured by the RRMD-III.
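The quoted figures follow the standard dosimetry relation H = D × Q (dose equivalent rate equals absorbed dose rate times effective quality factor), which can be checked directly against the numbers reported in this abstract; a minimal sketch:

```python
# Dose equivalent H = D * Q: absorbed dose rate times effective quality
# factor. The input values are those reported for STS-84 in the abstract.
D_total = 0.611            # total absorbed dose rate, mGy/day
Q_eff = 1.64               # effective quality factor (dimensionless)
H_total = D_total * Q_eff  # dose equivalent rate, mSv/day

print(round(H_total, 2))   # ~1.00, consistent with the reported 0.998 mSv/day
```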
NASA Astrophysics Data System (ADS)
Esmaeilzare, Amir; Rezaei, Seyed Mehdi; Ramezanzadeh, Bahram
2018-04-01
Magnetorheological fluids used for polishing optical substrates are composed of micron-size carbonyl iron (CI) particles. In this paper, the corrosion resistance of CI particles modified with three inorganic thin films based on rare earth elements, namely cerium oxide (CeO2), lanthanum oxide (La2O3), and praseodymium oxide (Pr2O3), was investigated. The morphology and chemistry of the CI-Ce, CI-Pr and CI-La particles were examined by high-resolution field emission scanning electron microscopy (FE-SEM), X-ray energy dispersive spectroscopy (EDS) and X-ray photoelectron spectroscopy (XPS). Electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization tests were carried out to investigate the corrosion behavior of the CI particles in an aqueous environment. In addition, the vibrating sample magnetometer (VSM) technique was utilized to determine the magnetic saturation properties of the coated particles. Gas pycnometry and contact angle measurements were then used to evaluate the density and hydrophilicity of these particles. The results showed that deposition of all three thin films increased the hydrophilic nature of the particles, and that the attenuation of magnetic saturation was greater for the Pr2O3 and La2O3 films than for the CeO2 film. The EIS and polarization test results confirmed that CI-Ce had the highest corrosion resistance among the samples. In addition, thermogravimetric analysis (TGA) showed that the ceria coating provided the particles with enhanced surface oxidation resistance.
Nonlinear data assimilation using synchronization in a particle filter
NASA Astrophysics Data System (ADS)
Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan
2017-04-01
Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is the particle filter, which represents the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. Its performance can be improved by exploiting the freedom in the choice of proposal density. A potential choice of proposal density comes from synchronization theory, in which one tries to synchronize the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps the growth of instabilities on the synchronization manifold. When only part of the system is observed, synchronization can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar, is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed using the Lorenz96 model for 20-, 100- and 1000-dimensional systems. Results show state-averaged synchronization errors smaller than observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states to the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles, avoiding filter degeneracy by construction. Promising results will be shown on low- and high-dimensional Lorenz96 models, and the pros and cons of these new ideas will be discussed.
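The one-way coupling ("nudging") term described above can be illustrated on the Lorenz96 model in the fully observed case; the coupling gain K, step size, and run length below are illustrative assumptions, not the settings of this work:

```python
import numpy as np

def lorenz96(x, F=8.0):
    # Lorenz96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def nudged_step(x, y_obs, K, dt):
    # One Euler step of the one-way coupled model: the extra term K*(y - x)
    # damps the growth of instabilities on the synchronization manifold.
    return x + dt * (lorenz96(x) + K * (y_obs - x))

rng = np.random.default_rng(0)
n, dt = 20, 0.005
truth = 8.0 + rng.standard_normal(n)   # reference ("true") trajectory
model = 8.0 + rng.standard_normal(n)   # model started from a different state

for _ in range(4000):
    truth = truth + dt * lorenz96(truth)              # free-running truth
    model = nudged_step(model, truth, K=40.0, dt=dt)  # coupled model

# With sufficiently strong coupling the model synchronizes with the truth.
print(np.abs(model - truth).mean())
```

In a particle-filter setting each particle would be nudged toward the observations in this way, giving a proposal density centered near the truth.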
NASA Technical Reports Server (NTRS)
Palaszewski, Bryan
2005-01-01
This report presents particle formation observations and detailed analyses of the images from experiments on the formation of solid hydrogen particles in liquid helium. Hydrogen was frozen into particles in liquid helium and observed with a video camera. The solid hydrogen particle sizes and the total mass of hydrogen particles were estimated. These newly analyzed data are from the test series held on February 28, 2001. Particle sizes from previous testing in 1999 and the testing in 2001 were similar. Although the 2001 testing created similar particle sizes, new particle formation phenomena were observed: microparticles and delayed particle formation. These image analyses are among the first steps toward visually characterizing these particles, and they allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.
Characterization of magnetic colloids by means of magnetooptics.
Baraban, L; Erbe, A; Leiderer, P
2007-05-01
A new, efficient method for the characterization of magnetic colloids based on the Faraday effect is proposed. With this technique it is possible to detect the stray magnetic field that the colloidal particles induce inside a magnetooptical layer. The magnetic properties of individual particles can be determined, provided measurements are performed over a wide range of magnetic fields. The magnetization curves of capped colloids and paramagnetic colloids were measured by means of the proposed approach. Because the magnetooptical signal from each colloidal particle in an ensemble is registered individually, the technique can be used for testing the magnetic monodispersity of colloidal suspensions.
Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.
Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y
1999-04-20
A stochastic inverse technique based on a genetic algorithm (GA) is developed to invert particle-size distributions from angular light-scattering data. This inverse technique requires no a priori information about the particle-size distribution. Numerical tests show that the technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of the distribution. The GA-based inverse technique is also shown to use computing time more efficiently than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
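A minimal real-coded GA of the kind described can be sketched as follows. The linear forward model standing in for the angular-scattering (Mie) kernels is a placeholder assumption, as are all population, selection, and mutation settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder forward model: scattered intensity at each angle as a linear
# mixture of per-size-bin kernels (a stand-in for the Mie kernels an
# actual inversion would compute).
n_angles, n_bins = 30, 8
K = rng.random((n_angles, n_bins))
f_true = rng.random(n_bins)
f_true /= f_true.sum()                       # true size distribution
signal = K @ f_true                          # synthetic noise-free data

def fitness(f):
    f = np.abs(f) / np.abs(f).sum()          # enforce a valid distribution
    return -np.sum((K @ f - signal) ** 2)    # negative sum-squared residual

# Real-coded GA: elitist truncation selection, blend crossover, mutation.
pop = rng.random((60, n_bins))
for _ in range(300):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    parents = pop[order[:20]]                # best 20 survive unchanged
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        w = rng.random(n_bins)
        child = w * a + (1 - w) * b          # blend crossover
        child += 0.02 * rng.standard_normal(n_bins)  # Gaussian mutation
        children.append(np.abs(child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(-fitness(best))  # final sum-squared residual of the inverted distribution
```

No gradient or a priori distribution shape is used, which is the appeal of the GA approach for this inverse problem.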
Fernández-Cidón, Bárbara; Padró-Miquel, Ariadna; Alía-Ramos, Pedro; Castro-Castro, María José; Fanlo-Maresma, Marta; Dot-Bach, Dolors; Valero-Politi, José; Pintó-Sala, Xavier; Candás-Estébanez, Beatriz
2017-01-01
High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD), but their clinical application has been hindered by the laborious current method used for their quantification. The aims were to optimize a simple and fast precipitation method to isolate sd-LDL particles and to establish a reference interval in a Mediterranean population. Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg2+ precipitation method. The sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation). Reference values were estimated according to the Clinical and Laboratory Standards Institute and International Federation of Clinical Chemistry and Laboratory Medicine recommendations, with sd-LDL-c concentration measured in serum from 79 subjects with no lipid metabolism abnormalities. The Passing-Bablok regression equation is y = 1.52 (0.72 to 1.73) + 0.07 x (-0.1 to 0.13), demonstrating no statistically significant differences between the modified precipitation method and the ultracentrifugation reference method. Similarly, no differences were detected when considering only sd-LDL-c from dyslipidemic patients, since the modifications added to the precipitation method facilitated the proper sedimentation of triglycerides and other lipoproteins. The reference interval for sd-LDL-c concentration estimated in a Mediterranean population was 0.04-0.47 mmol/L. In summary, an optimized heparin-Mg2+ precipitation method for sd-LDL particle isolation was developed, and reference intervals were established in a Spanish Mediterranean population.
Measured values were equivalent to those obtained with the reference method, assuring its clinical application when tested in both normolipidemic and dyslipidemic subjects.
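The quantification step described above reduces to a simple subtraction; a sketch with invented example values (only the 0.04-0.47 mmol/L reference interval comes from the study):

```python
# Sketch of the quantification described above: after heparin-Mg2+
# precipitation of the larger lipoproteins, sd-LDL-c is the total
# cholesterol measured in the supernatant minus the HDL-c measured there.
# The patient measurements below are invented for illustration.
REF_LOW, REF_HIGH = 0.04, 0.47          # mmol/L, interval from this study

def sd_ldl_c(supernatant_total_chol: float, hdl_c: float) -> float:
    """sd-LDL cholesterol in mmol/L."""
    return round(supernatant_total_chol - hdl_c, 2)

value = sd_ldl_c(1.55, 1.20)            # hypothetical measurements, mmol/L
print(value, REF_LOW <= value <= REF_HIGH)  # 0.35 True
```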
Innermost stable circular orbit of spinning particle in charged spinning black hole background
NASA Astrophysics Data System (ADS)
Zhang, Yu-Peng; Wei, Shao-Wen; Guo, Wen-Di; Sui, Tao-Tao; Liu, Yu-Xiao
2018-04-01
In this paper we investigate the innermost stable circular orbit (ISCO), for spin-aligned or anti-aligned orbits, of a classical spinning test particle in the pole-dipole approximation in the equatorial plane of a Kerr-Newman black hole background. It is shown that the orbit depends on the spin of the test particle, and that the motion of the spinning test particle becomes superluminal if its spin is too large. We therefore give an additional condition for the ISCO in black hole backgrounds by imposing the superluminal constraint. We obtain numerically the relations between the ISCO and the properties of the black hole and the test particle, and find that the radius of the ISCO for a spinning test particle is smaller than that for a nonspinning test particle.
Mono or polycrystalline alumina-modified hybrid ceramics.
Kaizer, Marina R; Gonçalves, Ana Paula R; Soares, Priscilla B F; Zhang, Yu; Cesar, Paulo F; Cava, Sergio S; Moraes, Rafael R
2016-03-01
This study evaluated the effect of the addition of alumina particles (polycrystalline or monocrystalline), with or without silica coating, on the optical and mechanical properties of a porcelain. Groups tested were: control (C), polycrystalline alumina (PA), polycrystalline alumina-silica (PAS), monocrystalline alumina (MA), and monocrystalline alumina-silica (MAS). Polycrystalline alumina powder was synthesized using a polymeric precursor method; a commercially available monocrystalline alumina powder (sapphire) was acquired. Silica coating was obtained by immersing the alumina powders in a tetraethylorthosilicate solution, followed by heat treatment. An electrostatic stable suspension method was used to ensure homogeneous dispersion of the alumina particles within the porcelain powder, and the ceramic specimens were obtained by heat-pressing. Microstructure, translucency parameter, contrast ratio, opalescence index, porosity, biaxial flexural strength, roughness, and elastic constants were characterized. Some analyses suggest a better interaction between the glass matrix and the silica-coated crystalline particles, though further investigation is needed to confirm this. The materials did not present significant differences in biaxial flexural strength, owing to the higher porosity in the groups with alumina addition. Elastic modulus was higher for the MA and MAS groups, which were also the groups with optical qualities and roughness closest to the control; the PA and PAS groups were considerably more opaque as well as rougher. Porcelains with added monocrystalline particles thus presented esthetic qualities superior to those with polycrystalline particles. To eliminate the porosity in the ceramic materials investigated here, processing parameters need to be optimized and different glass frits should be tested. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) function is one of the most important parts of an avionic navigation system. Two problems limit the standard particle filter (PF) in this application: particle degeneracy and sample impoverishment, in which the available samples cannot adequately represent the true probability density function. This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. A chaos sequence generates a set of chaotic variables, which are mapped onto the interval of the optimization variables to improve particle quality; this chaos perturbation prevents the search from becoming trapped in a local optimum of the particle swarm optimization (PSO) algorithm. Test statistics are constructed from a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under non-Gaussian measurement noise, and that its performance is better than that of RAIM based on the PF or PSO-PF algorithm.
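The chaotic-variable step in such schemes is typically built on a simple chaotic map; the logistic-map version below is a generic sketch of that idea (map choice, seed, and interval are assumptions, not this paper's exact implementation):

```python
# Generate a chaotic sequence with the logistic map and rescale it onto the
# search interval of an optimization variable, as done to perturb
# low-quality particles in chaos-PSO schemes.
def chaotic_sequence(n, x0=0.7, lo=-1.0, hi=1.0, r=4.0):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)            # logistic map; fully chaotic at r = 4
        xs.append(lo + (hi - lo) * x)    # map [0, 1] onto [lo, hi]
    return xs

vals = chaotic_sequence(5)
print(vals)  # non-repeating values within [-1, 1]
```

Because the logistic map at r = 4 is ergodic on (0, 1), the perturbations eventually explore the whole interval rather than clustering, which is what helps the swarm escape local optima.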
Synthesis of Nanosilver Particles in the Texture of Bank Notes to Produce Antibacterial Effect
NASA Astrophysics Data System (ADS)
Lari, Mohammad Hossein Asadi; Esmaili, Vahid; Naghavi, Seyed Mohammad Ebrahim; Kimiaghalam, Amir Hossein; Sharifaskari, Emadaldin
Silver particles show antibacterial and antiseptic properties at the nanoscale. These properties result from an alteration in the binding capacity of silver atoms in clusters smaller than 6.5 nm, which enables them to kill harmful organisms. Silver nanoparticles are now the most broadly used agents in nanotechnology after carbon nanotubes. Given that currency bills are a major source of bacterial dissemination, and that their contamination has recently been identified as a critical factor in gastrointestinal infections and possibly colon cancers, we propose a new method for producing antibacterial bank notes using silver nanoparticles. Older bank notes are sprayed with acetone to clean the surface. Each bank note is placed in a petri dish containing a solution of silver nitrate and ammonia until it is impregnated. The bank notes are then reduced with formaldehyde gas, which penetrates the texture and produces silver nanoparticles in the cellulose matrix. The side products of the reactions are quickly dried off, and the procedure ends with the drying of the bank note. Transmission electron microscope (TEM) images confirmed the nanoscale size range of the formed particles, while X-ray diffraction (XRD) provided proof of the metallic nature of the particles. Bacterial challenge tests then showed that no colonies of the three tested bacteria (Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa) survived on the sample after a 72 h incubation period. This study provides a method for synthesizing silver NPs directly in the texture of fabrics and textiles (such as that of bank notes), which can lower production costs and make the use of silver NPs economically beneficial. The method works specifically on the fabric of bank notes, suggesting a way to tackle the transmission of bacteria through them.
Moreover, this study is a testament to the strong antibacterial nature of even low concentrations of silver NPs.
NASA Astrophysics Data System (ADS)
Hernandez, F.; Liang, X.
2017-12-01
Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models face challenges in dealing with the uncertainties caused by the large number of parameters and initial-state estimations involved. To rely on these high-resolution models for critical real-time forecast applications, considerable improvements in parameter and initial-state estimation techniques must therefore be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed that simultaneously minimizes differences from the observations and departures from the background states, using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In these tests, streamflow observations were assimilated, and OPTIMISTS was compared with a traditional particle filter and a variational method.
Results show that our method can reliably produce adequate forecasts and that it is able to outperform those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.
Impact design methods for ceramic components in gas turbine engines
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1991-01-01
Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.
The chaotic dynamical aperture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.Y.; Tepikian, S.
1985-10-01
Nonlinear magnetic forces have become more important for particles in modern large accelerators. These nonlinear elements are introduced either intentionally, to control beam dynamics, or by uncontrollable random errors. The equations of motion for the nonlinear Hamiltonian are usually non-integrable, and because of the nonlinear part of the Hamiltonian, the tune diagram of an accelerator is a jungle. Nonlinear magnet multipoles are important in keeping the accelerator operating point in the safe quarter of this hostile jungle of resonant tunes; indeed, all modern accelerator designs have taken advantage of nonlinear mechanics. On the other hand, the effect of uncontrollable random multipoles should be evaluated carefully. A powerful method of studying the effect of these nonlinear multipoles is particle tracking, in which a group of test particles is traced through the magnetic multipoles of the accelerator for hundreds to millions of turns in order to test the dynamical aperture of the machine. These methods are extremely useful in the design of large accelerators such as the SSC, LEP, HERA and RHIC, but the calculations unfortunately take a tremendous amount of computing time. In this paper, we apply existing methods of nonlinear dynamics to study a possible alternative. When the Hamiltonian motion becomes chaotic, the tune of the machine becomes undefined, and the aperture related to the chaotic orbit can be identified as the chaotic dynamical aperture. We review the method of determining chaotic orbits, apply it to nonlinear problems in accelerator physics, and then discuss the scaling properties and the effect of random sextupoles.
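The tracking procedure described above can be illustrated with the Hénon map (a linear rotation plus one thin sextupole kick per turn), a standard toy model for dynamic-aperture studies; the tune, turn count, and loss threshold below are illustrative assumptions:

```python
import math

def track(x, p, mu, turns=1000, limit=10.0):
    # One-turn Henon map: linear rotation with tune mu plus a thin
    # sextupole kick p -> p + x^2 applied once per turn.
    c, s = math.cos(2 * math.pi * mu), math.sin(2 * math.pi * mu)
    for _ in range(turns):
        p_kick = p + x * x
        x, p = c * x + s * p_kick, -s * x + c * p_kick
        if x * x + p * p > limit * limit:
            return False          # particle lost: amplitude outside aperture
    return True                   # particle survived all turns

# Scan initial amplitudes to estimate the dynamic aperture at this tune.
mu = 0.205
aperture, a = 0.0, 0.01
while track(a, 0.0, mu):
    aperture = a
    a += 0.01
print(f"estimated dynamic aperture ~ {aperture:.2f} (model units)")
```

Even this toy scan requires thousands of turns per initial condition, which hints at why full multi-million-turn tracking of a real lattice is so expensive.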
Hang, Tian; Chen, Hui-Jiuan; Wang, Ji; Lin, Di-An; Wu, Jiangming; Liu, Di; Cao, Yuhong; Yang, Chengduan; Liu, Chenglin; Xiao, Shuai; Gu, Meilin; Pan, Shuolin; Wu, Mei X; Xie, Xi
2018-05-04
Dispersion of hydrophilic particles in non-polar media has many important applications yet remains difficult. Surfactant or amphiphilic functionalization is conventionally applied to disperse particles, but it is highly dependent on the particle/solvent system, may induce unfavorable effects, and can compromise the particles' hydrophilic nature. Recently, 2 μm polystyrene microbeads coated with ZnO nanospikes were reported to display anomalous dispersity in hydrophobic media without surfactant or amphiphilic functionalization. However, because it was unclear whether this phenomenon applied to a wider range of conditions, little application has been derived from it. Here, the anomalous dispersity of hydrophilic microparticles covered with nanospikes was systematically assessed under various conditions, including different particle sizes, material compositions, particle morphologies, solvent hydrophobicities, and surface polar groups. Microparticles were functionalized with nanospikes through a hydrothermal route, followed by dispersity tests in hydrophobic media. The results suggest that nanospikes consistently prevent particle aggregation across particle and solvent conditions, indicating the universal applicability of the anomalous dispersion phenomenon. This work provides insight into the anomalous dispersity of hydrophilic particles in various systems and offers the potential to use this method for surfactant-free dispersions.
Impact of pectin esterification on the antimicrobial activity of nisin-loaded pectin particles.
Krivorotova, Tatjana; Staneviciene, Ramune; Luksa, Juliana; Serviene, Elena; Sereikaite, Jolanta
2017-01-01
The relationship between pectin structure and the antimicrobial activity of nisin-loaded pectin particles was examined. The antimicrobial activities of five different nisin-loaded pectin particles, i.e., nisin-loaded high methoxyl pectin, low methoxyl pectin, pectic acid, and dodecyl pectin with 5.4% and 25% degrees of substitution, were tested in the pH range 4.0-7.0 by agar-diffusion assay and agar plate count methods. The degree of esterification of the carboxyl groups of galacturonic acid in the pectin molecule was found to be important for the antimicrobial activity of nisin-loaded pectin particles. Nisin-loaded particles prepared using pectic acid or pectin with a low degree of esterification exhibit higher antimicrobial activity than nisin-loaded high methoxyl pectin particles; pectins with free carboxyl groups or a low degree of esterification are thus the most suitable for particle preparation. Moreover, nisin-loaded pectin particles were active at close-to-neutral and neutral pH values and could therefore be effectively applied for food preservation. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:245-251, 2017. © 2016 American Institute of Chemical Engineers.
Fabrication of Semiconducting Methylammonium Lead Halide Perovskite Particles by Spray Technology
NASA Astrophysics Data System (ADS)
Ahmadian-Yazdi, Mohammad-Reza; Eslamian, Morteza
2018-01-01
In this "nano idea" paper, three concepts for the preparation of methylammonium lead halide perovskite particles are proposed, discussed, and tested. The first idea is based on the wet chemistry preparation of the perovskite particles, through the addition of the perovskite precursor solution to an anti-solvent to facilitate the precipitation of the perovskite particles in the solution. The second idea is based on the milling of a blend of the perovskite precursors in the dry form, in order to allow for the conversion of the precursors to the perovskite particles. The third idea is based on the atomization of the perovskite solution by a spray nozzle, introducing the spray droplets into a hot wall reactor, so as to prepare perovskite particles, using the droplet-to-particle spray approach (spray pyrolysis). Preliminary results show that the spray technology is the most successful method for the preparation of impurity-free perovskite particles and perovskite paste to deposit perovskite thin films. As a proof of concept, a perovskite solar cell with the paste prepared by the sprayed perovskite powder was successfully fabricated.
A hybrid method with deviational particles for spatial inhomogeneous plasma
NASA Astrophysics Data System (ADS)
Yan, Bokai
2016-03-01
In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part, evolved by a grid-based fluid solver, and a deviation part, simulated by numerical particles. These particles, named deviational particles, may be either positive or negative. We combine the Monte Carlo method proposed in [31], a Particle in Cell method, and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation, and a particle resampling technique on both deviational and coarse particles is investigated and improved. This method is applicable in all regimes and is significantly more efficient than a PIC-DSMC method near the fluid regime.
Results of a low power ice protection system test and a new method of imaging data analysis
NASA Technical Reports Server (NTRS)
Shin, Jaiwon; Bond, Thomas H.; Mesander, Geert A.
1992-01-01
Tests were conducted on a BF Goodrich De-Icing System's Pneumatic Impulse Ice Protection (PIIP) system in the NASA Lewis Icing Research Tunnel (IRT). Characterization studies of shed ice particle size were performed by changing the input pressure and cycling time of the PIIP de-icer, and the shed ice particle size was quantified using a newly developed image software package. The tests were conducted on a 1.83 m (6 ft) span, 0.53 m (21 in) chord NACA 0012 airfoil operated at a 4 degree angle of attack. The IRT test conditions were a -6.7 C (20 F) glaze ice and a -20 C (-4 F) rime ice. The ice shedding events were recorded with a high-speed video system. A detailed description of the image processing package and the results generated from this analytical tool are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
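The constant-number idea of absorbing a nucleated particle by merging the lowest-weight pair can be sketched as follows. This is a schematic of the merging step only; the weights, sizes, and the pairwise merge rule are illustrative assumptions, not the paper's GPU algorithm:

```python
import random

def add_with_merge(particles, new_particle):
    # particles: list of (weight, size) pairs. Merge the two lowest-weight
    # particles into one that conserves total weight and weighted size,
    # then append the nucleated particle, keeping the count constant.
    particles = sorted(particles, key=lambda wv: wv[0])
    (w1, v1), (w2, v2) = particles[0], particles[1]
    w = w1 + w2
    v = (w1 * v1 + w2 * v2) / w
    return [(w, v)] + particles[2:] + [new_particle]

random.seed(0)
pop = [(random.random(), random.uniform(1.0, 2.0)) for _ in range(8)]
total_w = sum(w for w, _ in pop)
total_m = sum(w * v for w, v in pop)

pop = add_with_merge(pop, (0.05, 1.0))   # nucleated monomer, weight 0.05
# Count stays constant; totals grow by exactly the nucleated contribution.
print(len(pop), sum(w for w, _ in pop) - total_w)
```

Merging the lowest-weight pair sacrifices the least statistical information, which is consistent with the reduced noise reported for low-weight merging over random removal.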
Preservative-free triamcinolone acetonide suspension developed for intravitreal injection.
Bitter, Christoph; Suter, Katja; Figueiredo, Verena; Pruente, Christian; Hatz, Katja; Surber, Christian
2008-02-01
All commercially available triamcinolone acetonide (TACA) suspensions used for intravitreal treatment contain retinal toxic vehicles (e.g., benzyl alcohol, solubilizer). Our aim was to find a convenient and reproducible method to compound a completely preservative-free TACA suspension, adapted to the intraocular physiology, with consistent quality (i.e., proven sterility and stability, constant content and dose uniformity, defined particle size, and 1 year shelf life). We evaluated two published methods (membrane filter, centrifugation) and a newly developed method (direct suspending) to compound TACA suspensions for intravitreal injection. Parameters such as TACA content (HPLC), particle size (microscopy and laser spectrometry), sterility, and bacterial endotoxins were assessed. Stability testing (at room temperature and 40 degrees C) was performed: color and homogeneity (visually), particle size (microscopically), and TACA content and dose uniformity (HPLC) were analyzed according to International Conference on Harmonisation guidelines. Contrary to the known methods, the direct suspending method is convenient, provides a TACA suspension that fulfills all compendial requirements, and has a 2-year shelf life. We developed a simple, reproducible method to compound stable, completely preservative-free TACA suspensions with a reasonable shelf life, which enables study of the effect of intravitreal TACA unbiased by varying doses and toxic compounds or their residues.
Solid Hydrogen Experiments for Atomic Propellants: Particle Formation Energy and Imaging Analyses
NASA Technical Reports Server (NTRS)
Palaszewski, Bryan
2002-01-01
This paper presents particle formation energy balances and detailed analyses of the images from experiments on the formation of solid hydrogen particles in liquid helium, conducted during the Phase II testing in 2001. Solid particles of hydrogen were frozen in liquid helium and observed with a video camera. The solid hydrogen particle sizes and the total mass of hydrogen particles were estimated, as was the particle formation efficiency. Particle sizes from the Phase I testing in 1999 and the Phase II testing in 2001 were similar. Though the 2001 testing created similar particle sizes, many new particle formation phenomena were observed. These image analyses are a first step toward visually characterizing the particles and allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
Controlling the net charge on a nanoparticle optically levitated in vacuum
NASA Astrophysics Data System (ADS)
Frimmer, Martin; Luszcz, Karol; Ferreiro, Sandra; Jain, Vijay; Hebestreit, Erik; Novotny, Lukas
2017-06-01
Optically levitated nanoparticles in vacuum are a promising model system to test physics beyond our current understanding of quantum mechanics. Such experimental tests require extreme control over the dephasing of the levitated particle's motion. If the nanoparticle carries a finite net charge, it experiences a random Coulomb force due to fluctuating electric fields. This dephasing mechanism can be fully excluded by discharging the levitated particle. Here, we present a simple and reliable technique to control the charge on an optically levitated nanoparticle in vacuum. Our method is based on the generation of charges in an electric discharge and does not require additional optics or mechanics close to the optical trap.
Ground truth methods for optical cross-section modeling of biological aerosols
NASA Astrophysics Data System (ADS)
Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.
2011-05-01
Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization, in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench top measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic Particle Sizer (APS) or GRIMM aerosol monitor) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross section modeling; and the Aspect, which measures particle shape, was employed as a path toward depolarization modeling.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
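The time-reversed trajectory tracing described above relies on a 4th-order Runge-Kutta integrator. A minimal sketch for a charged particle in a homogeneous magnetic field follows (normalized charge-to-mass ratio; the actual CAPS analysis code, field model, and units are not reproduced here):

```python
import numpy as np

def rk4_step(x, v, B, dt, q_over_m=1.0):
    """One 4th-order Runge-Kutta step for dx/dt = v, dv/dt = (q/m) v x B."""
    a = lambda vv: q_over_m * np.cross(vv, B)  # Lorentz acceleration, E = 0
    k1x, k1v = v,                a(v)
    k2x, k2v = v + 0.5*dt*k1v,   a(v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v,   a(v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v,       a(v + dt*k3v)
    x_new = x + (dt/6.0)*(k1x + 2*k2x + 2*k3x + k4x)
    v_new = v + (dt/6.0)*(k1v + 2*k2v + 2*k3v + k4v)
    return x_new, v_new
```

Stepping with a negative dt gives the time-reversed scenario: integrating backward from an observed ion state traces where the particle came from.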
An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers accurately describe the convection properties and enhance the vorticity and entropy capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced by the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.
Evaluation of a conditioning method to improve core-veneer bond strength of zirconia restorations.
Teng, Jili; Wang, Hang; Liao, Yunmao; Liang, Xing
2012-06-01
The high strength and fracture toughness of zirconia have supported its extensive application in esthetic dentistry. However, the fracturing of veneering porcelains remains one of the primary causes of failure. The purpose of this study was to evaluate, with shear bond strength testing, the effect of a simple and novel surface conditioning method on the core-veneer bond strength of a zirconia ceramic system. The shear bond strength of a zirconia core ceramic to the corresponding veneering porcelain was tested by the Schmitz-Schulmeyer method. Thirty zirconia core specimens (10 × 5 × 5 mm) were layered with a veneering porcelain (5 × 3 × 3 mm). Three different surface conditioning methods were evaluated: polishing with up to 1200 grit silicon carbide paper under water cooling, airborne-particle abrasion with 110 μm alumina particles, and modification with zirconia powder coating before sintering. A metal ceramic system was used as a control group. All specimens were subjected to shear force in a universal testing machine at a crosshead speed of 0.5 mm/min. The shear bond strength values were analyzed with 1-way ANOVA and Tukey's post hoc pairwise comparisons (α=.05). The fractured specimens were examined with a scanning electron microscope to observe the failure mode. The mean (SD) shear bond strength values in MPa were 47.02 (6.4) for modified zirconia, 36.66 (8.6) for polished zirconia, 39.14 (6.5) for airborne-particle-abraded zirconia, and 46.12 (7.1) for the control group. The mean bond strength of the control (P=.028) and modified zirconia groups (P=.014) was significantly higher than that of the polished zirconia group. The airborne-particle-abraded group was not significantly different from any other group. Scanning electron microscopy evaluation showed that cohesive fracture in the veneering porcelain was the predominant failure mode of modified zirconia, while the other groups principally fractured at the interface. 
Modifying the zirconia surface with powder coating could significantly increase the shear bond strength of zirconia to veneering porcelain. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lamberty, Andrée; Franks, Katrin; Braun, Adelina; Kestens, Vikram; Roebben, Gert; Linsinger, Thomas P. J.
2011-12-01
The Institute for Reference Materials and Measurements has organised an interlaboratory comparison (ILC) to allow the participating laboratories to demonstrate their proficiency in particle size and zeta potential measurements on monomodal aqueous suspensions of silica nanoparticles in the 10-100 nm size range. The main goal of this ILC was to identify competent collaborators for the production of certified nanoparticle reference materials. 38 laboratories from four different continents participated in the ILC with different methods for particle sizing and determination of zeta potential. Most of the laboratories submitted particle size results obtained with centrifugal liquid sedimentation (CLS), dynamic light scattering (DLS) or electron microscopy (EM), or zeta potential values obtained via electrophoretic light scattering (ELS). The results of the laboratories were evaluated using method-specific z scores, calculated on the basis of consensus values from the ILC. For CLS (13 results) and EM (13 results), all reported values were within the ±2 |z| interval. For DLS, 25 of the 27 results reported were within the ±2 |z| interval; the two other results were within the ±3 |z| interval. The standard deviations of the corresponding laboratory mean values varied between 3.7 and 6.5%, which demonstrates satisfactory interlaboratory comparability of CLS, DLS and EM particle size values. From the received test reports, a large discrepancy was observed in terms of the laboratories' quality assurance systems, which are equally important for the selection of collaborators in reference material certification projects. Only a minority of the participating laboratories are aware of all the items that are mandatory in test reports compliant with ISO/IEC 17025 (ISO General requirements for the competence of testing and calibration laboratories. International Organisation for Standardization, Geneva, 2005b).
The absence of measurement uncertainty values in the reports, for example, hindered the calculation of zeta scores.
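Method-specific z scores of the kind used in this ILC are computed from consensus statistics. The sketch below uses the participants' median as the assigned value and a scaled median absolute deviation as the robust spread; these are common choices when no reference value or target standard deviation is given, though the ILC's exact estimators may differ:

```python
import statistics

def z_scores(results, assigned=None, sigma_p=None):
    """Method-specific z scores, z_i = (x_i - X) / sigma_p.

    When no reference value exists, the assigned value X is taken as the
    participants' consensus (median) and sigma_p as a robust spread
    estimate (the scaled median absolute deviation)."""
    if assigned is None:
        assigned = statistics.median(results)
    if sigma_p is None:
        mad = statistics.median([abs(x - assigned) for x in results])
        sigma_p = 1.4826 * mad          # MAD scaled to sigma for normal data
    return [(x - assigned) / sigma_p for x in results]
```

With the usual convention, |z| <= 2 is satisfactory, 2 < |z| < 3 questionable, and |z| >= 3 unsatisfactory.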
Environmental solid particle effects on compressor cascade performance
NASA Technical Reports Server (NTRS)
Tabakoff, W.; Balan, C.
1982-01-01
The effect of suspended solid particles on the performance of the compressor cascade was investigated experimentally in a specially built cascade tunnel, using quartz sand particles. The cascades were made of NACA 65(10)10 airfoils. Three cascades were tested: one accelerating cascade and two diffusing cascades. The theoretical analysis assumes inviscid and incompressible two-dimensional flow. The momentum exchange between the fluid and the particles is accounted for by the interphase force terms in the fluid momentum equation. The modified fluid phase momentum equations and the continuity equation are reduced to the conventional stream function vorticity formulation. The method treats the fluid phase in an Eulerian system and the particle phase in a Lagrangian system. The experimental results indicate a small increase in the blade surface static pressures, while the theoretical results indicate a small decrease. The theoretical analysis also predicts the loss in total pressure associated with the particulate flow through the cascade.
NASA Astrophysics Data System (ADS)
Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.
2017-07-01
We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.
NASA Astrophysics Data System (ADS)
Faroughi, S. A.; Huber, C.
2015-12-01
Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets and suspensions of spherical solid particles in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle, to account for both return flow and Smoluchowski effects, and a correction on the mixture rheology to account for nonlocal interactions between particles. The model is then extended to mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particle sizes. Our model is tested against new and published experimental data over a wide range of particle volume fractions and viscosity ratios between the constituents of dispersions. We find excellent agreement between our model and experiments. We also show two significant applications of our model: (1) we demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers; (2) we provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils.
Our model offers a greatly improved agreement with the results obtained with direct measurement methods such as laser diffraction.
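As a simple numerical illustration of hindered settling (not the authors' full model, which includes return-flow, Smoluchowski, and rheology corrections), the classic Richardson-Zaki closure already shows how a moderate crystal fraction can slow settling by an order of magnitude; the material values below are illustrative assumptions, not data from the paper:

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal velocity of an isolated sphere in creeping flow (Stokes' law)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

def hindered_velocity(u_stokes, phi, n=5.1):
    """Richardson-Zaki hindrance: u = u_Stokes * (1 - phi)^n,
    with phi the particle volume fraction and n ~ 5 in creeping flow."""
    return u_stokes * (1.0 - phi) ** n
```

For a 1 mm crystal with a 400 kg/m^3 density contrast in a 100 Pa s melt, settling at 40 vol% crystals is roughly thirteen times slower than in isolation, consistent with residence times growing by up to an order of magnitude.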
Detection of molecular interactions
Groves, John T [Berkeley, CA]; Baksh, Michael M [Fremont, CA]; Jaros, Michal [Brno, CZ]
2012-02-14
A method and assay are described for measuring the interaction between a ligand and an analyte. The assay can include a suspension of colloidal particles that are associated with a ligand of interest. The colloidal particles are maintained in the suspension at or near a phase transition state from a condensed phase to a dispersed phase. An analyte to be tested is then added to the suspension. If the analyte binds to the ligand, a phase change occurs to indicate that the binding was successful.
A survey of particle contamination in electronic devices
NASA Technical Reports Server (NTRS)
Adolphsen, J. W.; Kagdis, W. A.; Timmins, A. R.
1976-01-01
This survey presents the experiences of a number of National Aeronautics and Space Administration (NASA) and Space and Missile Systems Organization (SAMSO) contractors with particle contamination and the methods used for its prevention and detection; it evaluates the bases for the different schemes, assesses their effectiveness, and identifies the problems associated with each. Specific short-range tests or approaches appropriate to individual part-type categories are recommended, along with specific tasks to refine techniques and to resolve technical and application facets of promising solutions.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.
2013-12-01
We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength, and multiple-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have come into common use (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, direct use of radiative transfer calculation has been employed for non-linear remote sensing problems in place of look-up-table methods, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel and multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method is a combination of the MAP method (maximum a posteriori method; Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, we include a radiative transfer calculation code, Rstar (Nakajima and Tanaka, 1986, 1988), solved numerically at each iteration of the solution search. The Rstar code has been directly used in the AERONET operational processing system (Dubovik and King, 2000). Retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine mode, sea salt, and dust particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area. Then we successively apply the retrieval method to all the sub-domains in the target area.
We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. In this test, we simulated satellite-observed radiances for a sub-domain consisting of 5 by 5 pixels with the Rstar code, assuming wavelengths of 380, 674, 870 and 1600 nm, the US standard atmosphere, and several aerosol and ground surface conditions. The results showed that the AOTs of fine mode and dust particles, the soot fraction, and the ground surface albedo at the wavelength of 674 nm are retrieved within absolute differences of 0.04, 0.01, 0.06 and 0.006 from the true values, respectively, for the case of a dark surface, and within 0.06, 0.03, 0.04 and 0.10, respectively, for the case of a bright surface. We will conduct more tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
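The cost function combining a MAP term with a Phillips-Twomey smoothness constraint can be sketched as follows. This is a generic formulation, not the authors' implementation; the second-difference operator is one standard choice of smoothing matrix:

```python
import numpy as np

def second_difference(n):
    """Second-difference operator D used as the Phillips-Twomey smoothing term."""
    d = np.zeros((n - 2, n))
    for i in range(n - 2):
        d[i, i:i + 3] = (1.0, -2.0, 1.0)
    return d

def map_cost(x, y, forward, se_inv, xa, sa_inv, gamma, d):
    """J(x) = (y - F(x))^T Se^-1 (y - F(x)) + (x - xa)^T Sa^-1 (x - xa)
            + gamma * ||D x||^2
    i.e. measurement misfit + a priori (MAP) term + smoothness penalty."""
    r = y - forward(x)
    dx = x - xa
    return float(r @ se_inv @ r + dx @ sa_inv @ dx + gamma * np.sum((d @ x) ** 2))
```

In the full algorithm, `forward` would wrap the radiative transfer code (Rstar) and J(x) would be minimized iteratively; here a trivial identity forward model suffices to check the construction.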
DEM code-based modeling of energy accumulation and release in structurally heterogeneous rock masses
NASA Astrophysics Data System (ADS)
Lavrikov, S. V.; Revuzhenko, A. F.
2015-10-01
Based on the discrete element method, the authors model the loading of a physical specimen to describe its capacity to accumulate and release elastic energy. The specimen is modeled as a packing of particles with viscoelastic coupling and friction. The external elastic boundary of the packing is represented by particles connected by elastic springs. The latter amounts to introducing an additional special potential of interaction between the boundary particles, one that exerts an effect even when there is no direct contact between the particles. On the whole, the model specimen represents an element of a medium capable of accumulating deformation energy in the form of internal stresses. The data from the numerical modeling of the physical specimen compression and the laboratory testing results show good qualitative consistency.
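The viscoelastic particle coupling mentioned above is commonly realized as a linear spring-dashpot contact law. The sketch below is a generic discrete-element building block with illustrative stiffness and damping values, not the authors' model parameters:

```python
def normal_contact_force(overlap, approach_speed, k=1.0e6, c=50.0):
    """Viscoelastic (linear spring-dashpot) normal force between two particles.

    overlap > 0 means the particles interpenetrate; approach_speed > 0 means
    they are still moving toward each other.  The force is clipped at zero so
    the contact never transmits tension."""
    if overlap <= 0.0:
        return 0.0                     # no contact, no force
    return max(0.0, k * overlap + c * approach_speed)
```

The elastic term stores energy during loading; the dashpot term dissipates part of it during unloading, which is what lets such a packing accumulate and release deformation energy.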
Cai, Wenjia; Ye, Lin; Zhang, Li; Ren, Yuanhang; Yue, Bin; Chen, Xueying; He, Heyong
2014-01-01
A series of nickel-containing mesoporous silica samples (Ni-SiO2) with different nickel contents (3.1%–13.2%) were synthesized by the evaporation-induced self-assembly method. Their catalytic activity was tested in carbon dioxide reforming of methane. The characterization results revealed that the catalysts, e.g., 6.7%Ni-SiO2, with highly dispersed small nickel particles, exhibited excellent catalytic activity and long-term stability. The metallic nickel particle size was significantly affected by the metal anchoring effect between metallic nickel particles and unreduced nickel ions in the silica matrix. A strong anchoring effect was suggested to account for the retention of small Ni particle size and the improved catalytic performance. PMID:28788570
Procedural uncertainties of Proctor compaction tests applied on MSWI bottom ash.
Izquierdo, Maria; Querol, Xavier; Vazquez, Enric
2011-02-28
MSWI bottom ash is a well-graded, highly compactable material that can be used as a road material in unbound pavements. Achieving the compactness assumed in the design of the pavement is of primary concern to ensure long-term structural stability. Regulations on road construction in a number of EU countries rely on standard tests originally developed for natural aggregates, which may not be appropriate to accurately assess MSWI bottom ash. This study is intended to assist in consistently assessing MSWI bottom ash compaction by means of the Proctor method. This test is routinely applied to characterize unbound road materials and provides two procedures. Compaction parameters show a marked procedural dependency due to the particle morphology and weak particle strength of the ash. Re-compacting a single batch sample to determine Proctor curves is a common practice that turns out to overestimate optimum moisture contents and maximum dry densities. This could result in wet-side compactions not meeting stiffness requirements. Inaccurate moisture content measurements during testing may also induce erroneous determinations of compaction parameters. The role of a number of physical properties of MSWI bottom ash in compaction is also investigated. Copyright © 2011 Elsevier B.V. All rights reserved.
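The two quantities reported from a Proctor test, the optimum moisture content and the maximum dry density, are read off a fitted compaction curve of dry density versus moisture content. A minimal sketch follows, using a plain parabolic fit near the optimum (a common approximation, not the procedure prescribed by the standard):

```python
import numpy as np

def dry_density(wet_density, w):
    """Dry density from bulk (wet) density and gravimetric moisture content w."""
    return wet_density / (1.0 + w)

def proctor_optimum(moistures, dry_densities):
    """Fit a parabola through the Proctor points and return (w_opt, rho_d_max),
    the vertex of the fitted compaction curve."""
    a, b, c = np.polyfit(moistures, dry_densities, 2)
    w_opt = -b / (2.0 * a)
    return w_opt, a * w_opt ** 2 + b * w_opt + c
```

An inaccurate moisture measurement shifts every point horizontally and so directly distorts both w_opt and the inferred maximum dry density, which is the error mode the study warns about.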
Numerical simulation of failure behavior of granular debris flows based on flume model tests.
Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na
2013-01-01
In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and numerical simulations (PFC(3D)). Model tests revealed that grain size distribution had profound effects on the failure mode: the failure of the medium-sand slope started with cracks at the crest and took the form of retrogressive toe sliding failure. With the increase of fine particles in the soil, the failure mode of the slopes changed to fluidized flow. The discrete element method PFC(3D) can dispense with the continuum hypothesis of traditional mechanics and capture the discrete character of the particles. Thus, a numerical model incorporating a coupled liquid-solid method was developed to simulate the debris flow. Compared with the experimental results, the numerical simulation indicated that the failure mode of the medium-sand slope was retrogressive toe sliding, and the failure of the fine-sand slope was fluidized sliding. The simulation result is consistent with the model test and theoretical analysis, and grain size distribution caused the different failure behaviors of granular debris flows. This research should serve as a guide to exploring the theory of debris flow and to improving the prevention and mitigation of debris flows.
In-Tank Elutriation Test Report And Independent Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, H. H.; Adamson, D. J.; Qureshi, Z. H.
2011-04-13
The Department of Energy (DOE) Office of Environmental Management (EM) funded Technology Development and Deployment (TDD) to solve technical problems associated with waste tank closure for sites such as the Hanford Site and the Savannah River Site (SRS). One of the tasks supported by this funding at Savannah River National Laboratory (SRNL) and Pacific Northwest National Laboratory (PNNL) was In-Tank Elutriation. Elutriation is the process whereby physical separation occurs based on particle size and density. This report satisfies the first phase of Task WP_1.3.1.1 In-Tank Elutriation, which is to assess the feasibility of this method of separation in waste tanks at the Hanford Site and SRS. This report includes an analysis of scoping tests performed in the Engineering Development Laboratory of SRNL, analysis of Hanford's inadvertent elutriation, the viability of separation methods such as elutriation and hydrocyclones, and recommendations for a path forward. This report will demonstrate that the retrieval of Hanford salt waste tank S-112 very successfully decreased the tank's inventories of radionuclides. Analyses of samples collected from the tank showed that concentrations of the major radionuclides Cs-137 and Sr-90 were decreased by factors of 250 and 6 and their total curie tank inventories decreased by factors of 60,000 and 2000. The total tank curie loading decreased from 300,000 Ci to 55 Ci. The remaining heel was nearly all innocuous gibbsite, Al(OH)3. However, in the process of tank retrieval approximately 85% of the tank gibbsite was also removed. Significant amounts of money and processing time could be saved if more gibbsite could be left in tanks while still removing nearly all of the radionuclides. There were factors which helped to make the elutriation of Tank S-112 successful which would not necessarily be present in all salt tanks. 1. The gibbsite particles in the tank were surprisingly large, as much as 200 µm.
The gibbsite crystals had probably grown in size over a period of decades. 2. The radionuclides were apparently either in the form of soluble compounds, like cesium, or micrometer-sized particles of actinide oxides or hydroxides. 3. After the initial tank retrieval the tank contained cobble, which is not conducive to elutriation. Only after the tank contents were treated with thousands of gallons of 50 wt% caustic were the solids converted to sand, which is compatible with elutriation. Discussions between SRNL and PNNL resulted in plans to test elutriation in two phases: in Phase 1, particles would be separated by differences in settling velocity in an existing scaled tank with its associated hardware, and in Phase 2, additional hardware, such as a hydrocyclone, would be added downstream to separate slow-settling particles from liquid. Phase 1 of in-tank elutriation was tested for proof of principle in the Engineering Development Laboratory of SRNL in a 41 in. diameter, 87 gallon tank. The tank had been previously used as a 1/22 scale model of Hanford Waste Tank AY-102. The objective of the testing was to determine which tank operating parameters achieved the best separation between fast- and slow-settling particles. For Phase 1 testing, a simulated waste tank supernatant, slow-settling particles, and fast-settling particles were loaded into the scaled tank. Because this was a proof-of-principle test, readily available solid particles were used to represent fast-settling and slow-settling particles. The tank contents were agitated using rotating mixer jet pumps (MJP) which suspended solids while liquids and solids were drawn out of the tank with a suction tube. The goal was to determine the optimum hydraulic operating conditions to achieve clean separation, in which the residual solids in the tank were nearly all fast-settling particles and the solids transferred out of the tank were nearly all slow-settling particles.
Tests were conducted at different pump jet velocities, suction tube diameters, and suction tube elevations. Testing revealed that the most important variable was jet velocity, which translates to a downstream fluid velocity in the vicinity of the suction tube that can suspend particles and potentially allow their removal from the tank. The optimum jet velocity in the vicinity of the suction tube was between 1.5 and 2 ft/s (4-5 gpm). During testing at lower velocities a significant amount of slow-settling particles remained in the tank. At higher velocities a significant amount of fast-settling particles were elutriated from the tank. It should be noted that this range of velocities is appropriate for this particular geometry and these particles. Nevertheless, the principle of in-tank elutriation was proved. In-tank elutriation has the potential to save much money in tank closure. However, more work, both analytical and experimental, must be done before an improved version of the process could be applied to actual waste tanks. It is recommended that testing with more prototypic simulants be conducted. Also, scale-up criteria for elutriation and the resulting size of pilot-scale test equipment require investigation during future research. In addition, it is recommended that the use of hydrocyclones be pursued in Phase 2 testing. Hydrocyclones are a precise and efficient separation tool frequently used in industry.
Engel, A; Plöger, M; Mulac, D; Langer, K
2014-01-30
Nanoparticles composed of poly(DL-lactide-co-glycolide) (PLGA) represent promising colloidal drug carriers for improved drug targeting. Although most research activities are focused on intravenous application of these carriers, peroral administration is described to improve the bioavailability of poorly soluble drugs. Based on these insights, the manuscript describes a model tablet formulation for PLGA-nanoparticles and especially its analytical characterisation with regard to a nanosized drug carrier. Besides physico-chemical tablet characterisation according to pharmacopoeias, the main goal of the study was the development of a suitable analytical method for the quantification of nanoparticle release from tablets. An analytical flow field-flow fractionation (AF4) method was established and validated, which enables determination of nanoparticle content in solid dosage forms as well as quantification of particle release during dissolution testing. For particle detection, a multi-angle light scattering (MALS) detector was coupled to the AF4-system. After dissolution testing, the presence of unaltered PLGA-nanoparticles was successfully confirmed by dynamic light scattering and scanning electron microscopy. Copyright © 2013 Elsevier B.V. All rights reserved.
Highlights of the high-temperature falling particle receiver project: 2012 - 2016
NASA Astrophysics Data System (ADS)
Ho, C. K.; Christian, J.; Yellowhair, J.; Jeter, S.; Golob, M.; Nguyen, C.; Repole, K.; Abdel-Khalik, S.; Siegel, N.; Al-Ansary, H.; El-Leathy, A.; Gobereit, B.
2017-06-01
A 1 MWt continuously recirculating falling particle receiver has been demonstrated at Sandia National Laboratories. Free-fall and obstructed-flow receiver designs were tested with particle mass flow rates of ~1 - 7 kg/s and average irradiances up to 1,000 suns. Average particle outlet temperatures exceeded 700 °C for the free-fall tests and reached nearly 800 °C for the obstructed-flow tests, with peak particle temperatures exceeding 900 °C. High particle heating rates of ~50 to 200 °C per meter of illuminated drop length were achieved for the free-fall tests with mass flow rates ranging from 1 - 7 kg/s and for average irradiances up to ~700 kW/m2. Higher temperatures were achieved at the lower particle mass flow rates due to less shading. The obstructed-flow design yielded particle heating rates over 300 °C per meter of illuminated drop length for mass flow rates of 1 - 3 kg/s for irradiances up to ~1,000 kW/m2. The thermal efficiency was determined to be ~60 - 70% for the free-falling particle tests and up to ~80% for the obstructed-flow tests. Challenges encountered during the tests include particle attrition and particle loss through the aperture, reduced particle mass flow rates at high temperatures due to slot aperture narrowing and increased friction, and deterioration of the obstructed-flow structures due to wear and oxidation. Computational models were validated using the test data and will be used in future studies to design receiver configurations that can increase the thermal efficiency.
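The thermal efficiency quoted above is conventionally the particle enthalpy gain over the incident concentrated solar power. A hedged sketch of that figure of merit follows; the particle specific heat and the operating numbers are illustrative assumptions, not test data from the project.

```python
def receiver_efficiency(m_dot, t_in, t_out, q_incident, cp=1200.0):
    """Thermal efficiency of a particle receiver.

    m_dot      : particle mass flow rate (kg/s)
    t_in, t_out: particle inlet/outlet temperatures (degC or K;
                 only the difference matters)
    q_incident : concentrated solar power on the aperture (W)
    cp         : particle specific heat (J/kg.K), assumed constant here
    """
    return m_dot * cp * (t_out - t_in) / q_incident

# e.g. 5 kg/s of particles heated by 300 degC under 2.5 MW incident power
eta = receiver_efficiency(5.0, 400.0, 700.0, 2.5e6)
```

The trade-off the abstract describes falls out of this expression: lowering the mass flow rate raises the outlet temperature, but the accompanying losses (radiation, convection, particle loss through the aperture) reduce the efficiency numerator.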
A study of solid wall models for weakly compressible SPH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valizadeh, Alireza, E-mail: alireza.valizadeh@monash.edu; Monaghan, Joseph J., E-mail: joe.monaghan@monash.edu
2015-11-01
This paper is concerned with a comparison of two methods of treating solid wall boundaries in the weakly compressible SPH method. They have been chosen because of their wide use in simulations. These methods are the boundary force particles of Monaghan and Kajtar [24] and the use of layers of fixed boundary particles. The latter was first introduced by Morris et al. [26] but has since been improved by Adami et al. [1], whose algorithm involves interpolating the pressure and velocity from the actual fluid to the boundary particles. For each method, we study the effect of the density diffusive terms proposed by Molteni and Colagrossi [19] and modified by Antuono et al. [3]. We test the methods by a series of simulations commencing with the time-dependent spin-down of fluid within a cylinder and the behaviour of fluid in a box subjected to constant acceleration at an angle to the walls of the box, and concluding with a dam break over a triangular obstacle. In the first two cases the results from the two methods can be compared to analytical solutions while, in the latter case, they can be compared with experiments and other methods. These results show that the method of Adami et al. together with density diffusion is in very satisfactory agreement with the experimental results and is, overall, the best of the methods discussed here.
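Both wall treatments compared above build on the same SPH core: field values are kernel-weighted sums over neighbouring particles (including, in the fixed-boundary-particle approach, the wall particles). A minimal textbook sketch of that core, not code from the paper, using the standard 1D cubic spline kernel:

```python
def cubic_spline_kernel_1d(r, h):
    """Cubic spline smoothing kernel W(r, h), 1D form with support 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, positions, masses, h):
    """SPH density estimate: kernel-weighted sum over neighbour masses."""
    return sum(m * cubic_spline_kernel_1d(x - xj, h)
               for xj, m in zip(positions, masses))
```

In the Adami et al. approach the same summation runs over fluid and fixed boundary particles alike, with the boundary particles assigned pressures and velocities interpolated from the fluid so the wall exerts a physically consistent force.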
NASA Astrophysics Data System (ADS)
Drera, Saleem S.; Hofman, Gerard L.; Kee, Robert J.; King, Jeffrey C.
2014-10-01
Low-enriched uranium (LEU) fuel plates for high power materials test reactors (MTR) are composed of nominally spherical uranium-molybdenum (U-Mo) particles within an aluminum matrix. Fresh U-Mo particles typically range between 10 and 100 μm in diameter, with particle volume fractions up to 50%. As the fuel ages, reaction-diffusion processes cause the formation and growth of interaction layers that surround the fuel particles. The growth rate depends upon the temperature and radiation environment. The cellular automaton algorithm described in this paper can synthesize realistic random fuel-particle structures and simulate the growth of the intermetallic interaction layers. Examples in the present paper pack approximately 1000 particles into three-dimensional rectangular fuel structures that are approximately 1 mm on each side. The computational approach is designed to yield synthetic microstructures consistent with images from actual fuel plates and is validated by comparison with empirical data on actual fuel plates.
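Reaction-diffusion-controlled interaction layers like those simulated above are commonly modelled as growing parabolically in time, y^2 = k(T)·t, with a thermally activated (Arrhenius) rate constant. A hedged sketch of that standard model follows; the prefactor, activation energy, and temperatures are illustrative assumptions, not values from the paper.

```python
import math

def layer_thickness(t, temp_k, k0=1.0e-6, q_act=1.3e5, r_gas=8.314):
    """Parabolic interaction-layer thickness (m) after time t (s) at temp_k (K).

    k0    : Arrhenius prefactor (m^2/s), assumed
    q_act : activation energy (J/mol), assumed
    """
    k = k0 * math.exp(-q_act / (r_gas * temp_k))  # rate constant, m^2/s
    return math.sqrt(k * t)

# Growth is faster at higher temperature and slower-than-linear in time:
y_hot = layer_thickness(3.6e6, 500.0)   # ~1000 h at 500 K
y_cool = layer_thickness(3.6e6, 400.0)
```

The parabolic form means quadrupling the exposure time only doubles the layer thickness, which is why the temperature and radiation environment, rather than time alone, dominate the growth rate the abstract refers to.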
Combined control of morphology and polymorph in spray drying of mannitol for dry powder inhalation
NASA Astrophysics Data System (ADS)
Lyu, Feng; Liu, Jing J.; Zhang, Yang; Wang, Xue Z.
2017-06-01
The morphology and polymorphism of mannitol particles were controlled during spray drying with the aim of improving the aerosolization properties of inhalable dry powders. The obtained microparticles were characterized using scanning electron microscopy, infrared spectroscopy, differential scanning calorimetry, powder X-ray diffraction and inhaler testing with a next generation impactor. Mannitol particles of varied α-mannitol content and surface roughness were prepared via spray drying by manipulating the concentration of NH4HCO3 in the feed solution. The bubbles produced by NH4HCO3 led to the formation of spheroid particles with a rough surface. Further, the fine particle fraction was increased by the rough surface of carriers and the high α-mannitol content. Inhalable dry powders with a 29.1 ± 2.4% fine particle fraction were obtained by spray-drying using 5% mannitol (w/v)/2% NH4HCO3 (w/v) as the feed solution, proving that this technique is an effective method to engineer particles for dry powder inhalation.
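The fine particle fraction quoted above is a simple ratio: the mass of drug collected in the fine (respirable) impactor stages over the total mass recovered. A hedged sketch follows; the stage masses are illustrative, not data from the study, and a real next generation impactor analysis involves stage-by-stage aerodynamic cut-offs.

```python
def fine_particle_fraction(fine_mass_ug, total_recovered_ug):
    """FPF (%): mass in the fine (aerodynamically respirable) fraction
    divided by the total recovered mass."""
    return 100.0 * fine_mass_ug / total_recovered_ug

# Illustrative masses chosen to reproduce the reported 29.1% figure
fpf = fine_particle_fraction(291.0, 1000.0)
```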